Topical local anesthesia: focus on lidocaine–tetracaine combination
Giordano et al, Local and Regional Anesthesia 2015:8
In recent years, the popularity of aesthetic and cosmetic procedures, often performed in outpatient settings, has strongly renewed interest in topical anesthetics. A number of different options are widely used, alone or in combination, in order to minimize the pain related to surgery. Moreover, interest in local anesthetics in the treatment of some painful degenerative conditions such as myofascial trigger point pain, shoulder impingement syndrome, or patellar tendinopathy is increasing. Numerous clinical trials have shown that lidocaine–tetracaine combination, recently approved for adults aged 18 or older, is effective and safe in managing pain. The present paper gives an overview of the recent literature regarding the efficacy and safety of lidocaine–tetracaine combination use.
Introduction
Topical anesthetics have an impressive history of efficacy and safety in medical practice. In recent years, the popularity of aesthetic and cosmetic procedures, often performed in outpatient settings, has strongly renewed interest in topical anesthetics. A number of different options are widely used, alone or in combination, in order to minimize the pain related to aesthetic and dermatologic procedures. The broad routine use of topical anesthetics is justified by the fact that they are generally easy to use and their adverse effects are infrequent. Nevertheless, the choice among the different formulations available on the market should take into account a number of factors, such as the type of surgical procedure, effectiveness profile, ease of use, application time, need for occlusion, and possible side effects. 1 Theoretically, an ideal topical anesthetic should produce effective local anesthesia by penetrating the epidermis while having no systemic absorption. 2,3 The lidocaine-tetracaine (LT) combination, recently introduced to the market and available as a cream, medicated patch, or peel, offers effective pain relief. Numerous clinical trials have evaluated this anesthetic combination, suggesting a very favorable profile compared with other topical local anesthetics, as it is easy to use and has only mild side effects.
The aim of the paper is to give an overview on the use of LT combination, with an outline of the efficacy, safety, and tolerability reported in clinical studies. Search terms including "tetracaine" and "lidocaine/tetracaine" were used for the literature search. Only papers published in English were used for the review.
Overview of pharmacology of lidocaine and tetracaine, and rationale for the combination
Local anesthetics produce anesthesia by interrupting the conduction of neural signals at the level of sodium ion channels within neural membranes. The structure of the nerve fibers and the chemical properties of the anesthetic molecule are the main factors influencing local anesthetic action.
Nerve fibers have different diameters and firing rates. A smaller diameter and higher firing rate make a neural fiber more susceptible to local anesthetics. Thus, the small, rapid-firing autonomic fibers are the most sensitive, followed by sensory fibers, and finally by somatic motor fibers. 4 Like all local anesthetics, lidocaine and tetracaine have a basic structure consisting of an aromatic ring, an intermediate chain, and an amine group. Each of these components contributes to the chemical and clinical properties of the molecule.
The lipid solubility of the molecule, thanks to the aromatic ring, strongly influences the capacity of local anesthetics to spread through the nerve sheaths. The higher the lipid solubility, the greater is the potency of the local anesthetic.
The intermediate chain makes it possible to classify local anesthetics into two groups: amides or esters. 5 Lidocaine is an amide-type anesthetic, like prilocaine, etidocaine, and bupivacaine. Tetracaine is an ester-type anesthetic, like procaine and benzocaine. All local anesthetics are lipophilic and, to a degree, soluble in water. 6 Aside from the taxonomic aspect, the intermediate chain is the main factor influencing metabolism and elimination of local anesthetics. Lidocaine is metabolized rapidly by the hepatic microsomal enzymes to a number of metabolites, including monoethylglycinexylidide, whose pharmacological activity is similar to, but not as potent as, that of lidocaine. 7 Tetracaine is rapidly metabolized by plasma esterases to a number of metabolites, including para-aminobenzoic acid, whose activity is unspecified. 7 The amine group influences the water solubility of the local anesthetic. It can exist in a tertiary lipid-soluble form (three bonds) or in a quaternary water-soluble form. The onset of anesthesia is directly related to the amount of local anesthetic in the tertiary lipid-soluble form. Conversion between the quaternary water-soluble and tertiary lipid-soluble forms is governed primarily by physiologic tissue pH (7.4), according to the Henderson–Hasselbalch equation: log (quaternary form/tertiary form) = pKa − pH, where pKa is the ionization constant of the local anesthetic. Local anesthetics have a pKa greater than 7.4, which favors the quaternary water-soluble form at the physiologic pH of 7.4. In the presence of inflammation, tissue pH tends to decrease, further favoring the quaternary water-soluble form.
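As a worked illustration, the Henderson–Hasselbalch relationship for local anesthetics, log10(quaternary/tertiary) = pKa − pH, directly gives the ionized fraction at a given tissue pH. The sketch below is illustrative only and assumes approximate literature pKa values (lidocaine ≈ 7.9, tetracaine ≈ 8.5) that are not stated in this paper:

```python
def quaternary_fraction(pka: float, ph: float = 7.4) -> float:
    """Fraction of anesthetic in the quaternary (water-soluble) form,
    from log10(quaternary/tertiary) = pKa - pH."""
    ratio = 10 ** (pka - ph)  # quaternary / tertiary
    return ratio / (1 + ratio)

# Approximate pKa values (assumed, not taken from this paper)
for name, pka in [("lidocaine", 7.9), ("tetracaine", 8.5)]:
    q = quaternary_fraction(pka)
    print(f"{name}: {q:.0%} quaternary / {1 - q:.0%} tertiary at pH 7.4")

# Inflamed (more acidic) tissue shifts the balance further toward the
# quaternary water-soluble form, consistent with the text
assert quaternary_fraction(7.9, ph=6.9) > quaternary_fraction(7.9, ph=7.4)
```

Under these assumed pKa values, most of each drug is in the quaternary form at pH 7.4, consistent with the text's point that a pKa above 7.4 favors the water-soluble form.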
The rationale for the LT combination is largely due to the pharmacokinetics of the two components. The anesthesia produced by lidocaine is faster, more intense, longer lasting, and more extensive than that produced by an equal concentration of procaine. Lidocaine is an alternative choice for those who are sensitive to ester-type local anesthetics. 8 Tetracaine, a long-acting amino-ester, is more lipophilic than lidocaine, concentrating in the stratum corneum of the epidermis, where it slowly diffuses. Its duration is thus prolonged and systemic uptake is limited. 9 As a result, LT combination produces rapid and durable topical anesthesia. LT combination is available in the market as a 7%/7% self-occluding dermatologic cream or as a 7%/7% cutaneous patch. This allows choosing the best solution on the basis of the kind of procedure and the area to be anesthetized.
Efficacy
In the literature, many studies have evaluated the efficacy of the LT combination.
Bryan and Alster 10 first proposed using the LT combination for cutaneous laser surgery, comparing it to placebo in 60 patients undergoing laser surgery. The three protocols of this blinded, randomized trial had different anesthetic application times: in the first protocol, 30 subjects were randomized to receive either placebo or LT cream for 60 minutes. In the second and third protocols, subjects (n=15 each) were randomized to receive placebo or LT cream for 20 or 30 minutes, respectively. Clinical effectiveness was evaluated by the subjects, the investigators performing the laser surgery, and an independent observer. The authors found that subjects who received the active drug had less pain compared to those who received the placebo. Only 9% of those receiving the active drug reported inadequate pain relief compared to 66% in the placebo group. Similarly, the investigators rated 75% of the LT cream subjects as having no pain, compared with 25% of the placebo group.
In another study, Wallace et al investigated the LT combination with the aim of determining the depth and duration of anesthesia. 11 The authors conducted a randomized, double-blind, placebo-controlled, two-period crossover study in 24 healthy subjects. Subjects were randomized to receive either the heated LT patch in period 1 followed by the placebo patch in period 2, or vice versa. Patches were applied for 30 minutes to the volar aspect of the forearm. Pain and sensory depths were measured at baseline and again at 30, 60, 90, and 150 minutes after patch application. Duration of anesthesia was measured at 40, 70, 110, and 130 minutes after patch application by evaluating thermal and mechanical sensation. The authors found that pain and sensory depths with the LT patch were greater than those with placebo (P<0.001) at all postdose time points. The active patch achieved a maximum mean pain depth of 8.22 mm; anesthesia lasted at least 100 minutes after the patch had been removed. Cool and warm sensations and hot pain thresholds were increased compared with placebo (P<0.001). The authors concluded that, after a 30-minute application, the LT patch provided favorable depth and duration of anesthesia without significant sensory loss for superficial venous access and minor dermatological procedures.
Recently, Ruetzler et al proposed the use of a topical anesthetic patch containing 70 mg each of lidocaine and tetracaine as an alternative to subcutaneous injection of local anesthetic for arterial catheterization. 12 This prospective, double-blind clinical trial included 90 patients undergoing elective major cardiac surgery who were randomly assigned to receive either an LT patch followed by subcutaneous injection of 0.5 mL of normal saline solution, or a placebo patch with subsequent subcutaneous injection of 0.5 mL of lidocaine 1%. The primary outcome, measured using a 100-mm visual analog scale (VAS), was pain during arterial catheterization. VAS scores during arterial puncture were comparable in both groups, and the LT 7%/7% patch was noninferior to subcutaneous lidocaine. Pain scores at the time of subcutaneous injection were significantly lower in patients assigned to the LT patch than to lidocaine (P=0.001). The authors' conclusion was that the LT patch and subcutaneous injection of lidocaine were comparable in providing pain control during arterial catheter insertion.
Interestingly, in a recent study, Rauck et al tested the usefulness of the LT combination for treatment of pain associated with myofascial trigger points in 17 patients. 13 In this open-label, single-center outpatient pilot study, patients with a ≥1-month history of pain associated with myofascial trigger points applied one patch to each myofascial trigger point for 4 hours twice daily for 2 weeks, followed by a 2-week, treatment-free period. At baseline, mean ± standard deviation average pain intensity was 6.3±1.56, which decreased to 4.5±2.31 (33%) (N=20) at the end of treatment. In all, 40% of patients had a clinically significant (≥30%) decrease and 25% had a substantial (≥50%) decrease. In 35% of patients (N=20), pain interference with lifestyle decreased by ≥50%, with an improvement in worst trigger point sensitivity in 45% of them. Average pain intensity was 5.0±2.04 2 weeks after stopping treatment; treatment benefit was maintained in eight patients (40%). The authors concluded that the heated LT patch has potential utility as a noninvasive pharmacologic approach for managing myofascial trigger point pain.
Similarly, in a recent prospective, single-center pilot study, Gammaitoni et al tested the self-heated LT patch in 13 patients with patellar tendinopathy confirmed by physical examination, with pain of ≥14 days' duration and baseline average pain scores ≥4 (on a 0-10 scale), to determine whether the self-heated LT patch might relieve pain and improve function. 14 In the authors' opinion, the pain of patellar tendinopathy might be mediated by neuronal glutamate and sodium channels. Lidocaine and tetracaine might be effective by blocking both of these channels. Patients applied a self-heated LT patch to the affected knee twice a day for 2-4 hours each time for 14 days. Variations from baseline to day 14 in terms of average pain intensity and interference (Victorian Institute of Sport Assessment) scores were assessed. The authors found that average pain scores decreased from a baseline of 5.5±1.3 to 3.8±2.5 on day 14. Similarly, the Victorian Institute of Sport Assessment scores improved from 45.2±14.4 at baseline to 54.3±24.5 on day 14. A clinically important reduction in pain score (≥30%) was demonstrated by 54% of patients. The authors concluded that the results of their pilot study suggested that patellar tendinopathy may benefit from topical treatment that targets neuronal sodium and glutamate channels.
In a 2-week pilot study in 2013, Radnovich and Marriott investigated the effects of the heated LT patch in reducing pain in 18 adult patients with shoulder impingement syndrome-associated pain. 15 In the authors' opinion, participation in an appropriate physical therapy program is possible only if fear of pain is eliminated. Patients were treated with the heated LT (70/70 mg) patch placed over the site of shoulder tenderness each morning and evening for a period of 2 to 4 hours. Average and worst pain during the previous 24 hours and shoulder range of motion were assessed at baseline and on day 14. According to the authors, the mean average pain score at baseline in the per-protocol population was 5.5±1.1 (range 4 to 8); average and worst pain scores decreased by 2.4±2.0 and 3.7±2.7 points, respectively. Two-thirds of the patients achieved a clinically meaningful (≥30%) reduction in average pain score, and half of the patients achieved a ≥50% reduction in average pain score. Shoulder internal rotation increased by 29.7°±21.8° and abduction increased by 40.0°±44.2°. The authors' conclusion was that patients treated with the heated LT patch for 14 days achieved clinically meaningful improvement in pain intensity and range of motion.
The following year, Radnovich et al tested the heated LT patch in a prospective, randomized, open-label clinical trial in order to evaluate its efficacy in reducing shoulder impingement syndrome pain. 16 The 60 adult patients with shoulder impingement syndrome pain enrolled in the study were randomized to receive either treatment with the heated LT patch or a single subacromial injection of 10 mg of triamcinolone acetonide. Patients in the heated LT patch group applied a single heated LT patch to the shoulder for 4 hours twice daily, with a 12-hour interval between treatments during the first 14 days and could continue to use the patch on an as-needed basis during the second 14-day period. At baseline and at days 14, 28, and 42, patients rated their pain and pain interference with specific activities by a VAS score (0-10). The authors found that the average pain scores declined from 6.0±1.6 at baseline to 3.5±2.4 at day 42 in the heated LT patch group (n=29, P<0.001) and from 5.6±1.2 to 3.2±2.6 in the injection group (n=31, P<0.001). Similar improvements were seen in each group for worst pain, pain interference with general activity, work, sleep, and range of motion. The authors concluded that the efficacy of short-term, noninvasive treatment with the heated LT patch was similar to that of subacromial corticosteroid injections for the treatment of pain associated with shoulder impingement syndrome.
In 2014, Gahalaut et al compared the anesthetic potential of a 2.5% lidocaine and 2.5% prilocaine topical cream with a 7% lidocaine and 7% tetracaine combination cream when applied under occlusion for 30 minutes for radioablative dermatosurgery. 17 Forty subjects with acrochordons were enrolled in this split-side randomized trial. The authors found that pain severity experienced by subjects in terms of VAS score was significantly less for the LT combination cream than for the lidocaine-prilocaine combination. They concluded that the LT combination was effective when applied for a short time (30-minute intervals) in achieving local anesthesia to perform various dermatological procedures.
Recently, Bourne et al prospectively studied the anesthetic effect provided by an LT patch in comparison with that of injectable lidocaine during incision and drainage of skin abscesses. 18 Twenty adult patients with a skin abscess needing incision and drainage were randomized to one of two groups: one received an LT patch and injectable normal saline for anesthesia, the other a placebo patch and injectable 1% lidocaine. The authors found that preprocedure pain scores were similar in the two groups. Pain scores during incision and drainage and postprocedure were then compared between the two groups. Pain during the procedure was lower in patients receiving injectable lidocaine (50.1±5.9 mm; 95% confidence interval 45.2-55.1) than in those receiving the transdermal LT patch (60.1±11.0 mm; 95% confidence interval 55.2-68.1; P=0.04). The power to detect a difference of 20 mm at P≤0.05 was 80%. Although this difference was statistically significant, it was not clinically significant. There was also no statistical difference between the two groups in the postprocedure pain scores (P=0.65). The authors concluded that a local injection of lidocaine and the LT patch provided clinically similar analgesia during incision and drainage of skin abscesses. Pain at presentation and after the procedure was similar in both groups. Despite this, in the authors' opinion, emergency physicians should continue to use a local injected anesthetic for incision and drainage of skin abscesses until a less painful alternative is identified.
In two studies, Alster et al evaluated the anesthetic efficacy of lidocaine 70 mg/g and tetracaine 70 mg/g in laser-assisted hair removal. 19 Studies A (Phase II) and B (Phase III) were randomized, double-blind, placebo-controlled, and paired; applications of LT peel and placebo were concurrent. In Study A, 60 subjects were randomized to application-time groups of 30, 45, or 60 minutes, while in Study B, 50 subjects had 30-minute applications. Efficacy was evaluated by VAS, subjects'/investigators' impressions of anesthetic adequacy, and investigators' pain ratings. VAS scores were significantly lower (P<0.05) for the LT peel: mean scores were 26.7 for LT peel versus 44.3 for placebo (Study A total population, similar between application times) and 23 versus 31.7 (Study B), respectively. The authors concluded that a 30-minute LT peel application was effective and well tolerated in providing anesthesia for laser-assisted hair removal.
Safety and tolerability
Currently, there are no guidelines for the use and safety of compound mixtures of local anesthetics. Despite the fact that local anesthetics are considered safe and well tolerated, systemic toxicity has been reported by different authors when they are used either with simple topical application or under occlusion. [20][21][22][23] Systemic effects may include dizziness, seizures, respiratory distress, loss of consciousness, or cardiac arrest.
LT combination is reported to have a safe profile with mild side effects when used according to recommendations. 2,3,5,6,21 Transient cutaneous erythema, edema, and skin discoloration are the most common side effects. No cases of methemoglobinemia have been reported. 1 All patients with a history of sensitivity to local anesthetics should avoid LT combination use.
In 2008, Ogden et al's randomized study in 36 adult volunteers evaluated the pharmacokinetic profile of lidocaine and tetracaine after a single application of the LT peel. 24 The LT peel was applied to a 50, 100, or 200 cm² area of the anterior surface of the left or right thigh of volunteers for 30, 60, or 90 minutes. Venous blood samples were collected at 0, 30, 60, 90, 120, 150, 180, 210, 300, and 420 minutes after the initial application of the LT peel. The authors found that plasma concentrations of lidocaine and tetracaine were below the limits of quantification for the assay (100 and 5 ng/mL, respectively) at all time points. A single application of the LT peel was well tolerated; no study subject reported an adverse event. Ogden et al concluded that a single application of LT peel to up to 200 cm² of anterior thigh in adults for up to 90 minutes did not produce systemic levels of lidocaine and tetracaine that were clinically significant at any time point measured up to 420 minutes after the initial application.
In 2013, Rauck et al reported erythema as the most common adverse event following LT combination application. 13 In their 2014 Phase III study to assess the efficacy and safety of LT 7%/7% cream versus placebo cream, Cohen and Gold reported no related adverse events with LT combination. There was, however, one related adverse event of erythema with placebo cream. 25 These studies suggest that LT combination seems to have a safe profile and be well tolerated as most subjects reported no major adverse effects.
Patient satisfaction
Two recent studies addressed the specific issue of patient satisfaction while testing the anesthetic efficacy of LT combination.
In their Phase II-III studies regarding the efficacy of LT combination versus placebo in achieving local anesthesia for laser-assisted hair removal, Alster et al specifically addressed the issue of patient satisfaction. 19 For both studies, subjects' ratings favored LT peel (P<0.05 vs placebo).
In their 2014 Phase III study, Cohen and Gold reported that a significant percentage of subjects declared that they achieved adequate pain relief with LT combination and that they would use it again. 25 In 2013, Rauck et al investigated the satisfaction of patients affected by myofascial trigger points treated with LT combination. 13 The authors reported that 75% of patients enrolled in the study were either satisfied or very satisfied with treatment. Two weeks after stopping treatment, average pain intensity was 5.0±2.04; treatment benefit was maintained in eight patients (40%).
Conclusion
As the number and type of outpatient surgical procedures continue to grow, and as many minor inpatient surgical procedures involve some pain and discomfort, physicians are faced with pain management daily. They therefore need to manage the different anesthetic options adequately. A number of local anesthetics, used alone or in combination, have been proposed in clinical practice. Numerous clinical trials have shown that LT combination, recently approved for adults aged 18 or older, is effective and safe in managing pain. Interestingly, LT combination has been successfully tested for the treatment of some painful syndromes such as myofascial trigger point pain, patellar tendinopathy, and shoulder impingement syndrome pain. Thus, it may represent a noninvasive treatment for these painful conditions and may help to limit painkiller prescriptions. Given its topical formulation, LT combination eliminates the use of needles, thereby reducing patient discomfort and anxiety. When used according to the recommendations of the US Food and Drug Administration, LT combination shows high tolerability and a safe profile, with no major side effects compared to other topical anesthetics.
Nevertheless, as most of the reported randomized controlled trials compared the LT combination with placebo, further studies comparing the LT combination with standard treatments should be conducted.
Disclosure
The authors report no conflicts of interest in this work.
COVID-19: self-reported reductions in physical activity and increases in sedentary behaviour during the first national lockdown in the United Kingdom
Purpose The United Kingdom (UK) government imposed its first national lockdown in response to COVID-19 on the 23rd of March 2020. Physical activity and sedentary behaviour levels are likely to have changed during this period. Methods An online survey was completed by n = 266 adults living within the UK. Differences in day-to-day and recreational physical activity (at moderate and vigorous intensities), travel via foot/cycle, and sedentary behaviour were compared before and during the initial COVID-19 lockdown. Results The median level of total weekly physical activity significantly reduced (− 15%, p < 0.001) and daily sedentary time significantly increased (+ 33%, p < 0.001). The former was caused by a significant reduction in weekly day-to-day physical activity at moderate intensities (p < 0.001), recreational activities at vigorous (p = 0.016) and moderate (p = 0.030) intensities, and travel by foot/cycle (p = 0.031). Sub-group analyses revealed that some populations became disproportionally more physically inactive and/or sedentary than others, such as those that were: living in a city (versus village), single (versus a relationship), an athlete (versus non-athlete), or earning an average household income < £25,000 (versus > £25,000). Conclusions Now that the UK is transitioning to a state of normal living, strategies that can help individuals gradually return to physical activities, in accordance with the 2020 WHO guidelines, are of paramount importance to reducing risks to health associated with physical inactivity and sedentary behaviour.
Introduction
The World Health Organization (WHO) declared the novel coronavirus (COVID-19) outbreak a global health emergency [1]. Government strategies worldwide to control the spread of COVID-19 included limiting social interaction through enforced national lockdowns. In the United Kingdom (UK), on March 23rd 2020, the government enforced a national lockdown during which people could leave their household for essential reasons only, such as the collection of food and medicine, and for outdoor exercise once per day [2]. The lockdown had a clear rationale of reducing exposure to and transmission of COVID-19 [3]; however, it may also have substantially disrupted individuals' daily routines, including physical activity and sedentary behaviour levels, which are known to impact long-term health [4][5][6].
The recently updated World Health Organization 2020 Guidelines on Physical Activity and Sedentary Behaviour [7] recommend that adults partake in 150-300 min of moderate-intensity or 75-150 min of vigorous-intensity physical activity per week (or an equivalent combination of both) whilst reducing sedentary behaviour. There is an overwhelming evidence base in support of physical activity for improving health-related outcomes, such as reducing the risk of all-cause mortality [8] and major non-communicable diseases including several cancers [9], cardiovascular disease [10], type 2 diabetes [11], dementia, and Alzheimer's disease [12]. In addition, physical activity is recommended for reducing the impact of mental health disorders such as anxiety and depression [13][14][15]. Evidence also suggests that reducing levels of sedentary behaviour can bring independent health benefits [16]. Importantly, adults meeting physical activity guidelines have been shown to be at a decreased risk of COVID-19 infection, severe illness, and related death [17]. Despite the benefits of a physically active lifestyle, the opportunity to be active during the UK's first COVID-19 lockdown was severely limited. Leisure centres and gyms, among other recreational activity facilities, were required to close throughout the lockdown, and individuals were limited to exercising outdoors once a day (alone or with members of their household) [2]. Furthermore, the requirement to work from home where possible is likely to have increased sedentary behaviours such as sitting, bed rest, and lounging. Public Health England [18] and the WHO [19] encouraged people to remain physically active during the pandemic; however, it is unclear whether this occurred during the first UK lockdown in response to COVID-19.
The present study, therefore, aimed to quantify changes in day-to-day living and recreational physical activity, travel via foot/cycle, and sedentary behaviour levels before and during the first UK lockdown (enforced on 23rd March 2020). It was hypothesized that during lockdown compared to before, total physical activity levels would significantly reduce, and sedentary behaviour levels would significantly increase.
Participants
Four preliminary questions via an online survey confirmed that participants met the following eligibility criteria: (1) ≥ 18 years of age, (2) current resident of the UK, (3) following the government lockdown rules, and (4) no recent changes in health status that could influence physical activity levels. Informed consent was provided by all participants via the online questionnaire. The survey was completed by 266 participants who met all eligibility criteria. Ethical approval was granted by the Anglia Ruskin University Ethics Panel (approval code: SES_STAFF_19-10).
Online survey
Online Surveys (https:// www. onlin esurv eys. ac. uk/) was used for the creation and dissemination of the questionnaire. Participant demographics (e.g., sex, age, living location) were initially collected. Thereafter, physical activity levels (frequency and duration) during day-to-day living (e.g., work or household-related labor), recreational exercise, and travel via foot/cycle alongside daily sedentary behaviour duration were collected using a modified version of the WHO Global Physical Activity Questionnaire (GPAQ) Version 2 [20]. Physical activity questions were asked in relation to two intensity domains as defined by the WHO GPAQ: (1) moderate (moderate physical effort that causes small increases in breathing and heart rate for ≥ 10-min), and (2) vigorous (hard physical effort that causes a large increase in breathing or heart rate for ≥ 10-min). The GPAQ has been demonstrated to have fair-to-moderate validity when self-administered and moderate reliability compared to an interviewer-administered version [21].
To allow pre-lockdown versus during-lockdown analysis, physical activity and sedentary behaviour questions were asked twice in the following order: (1) participants were asked to recall their general activity levels in the 8-week prior to the lockdown, and (2) participants were asked to recall their current general activity levels during the lockdown. To reduce the impact of recall bias, especially for numerical data (e.g., duration of sedentary behaviour), the survey was disseminated as soon as possible following the UK lockdown; publicly available from the 10th of April and closed on the 26th of April 2020 (lockdown: 23rd of March 2020). Within the first few days of survey dissemination (10-13th April), 75% of responses were submitted, and by the 17th of April 94% of responses were submitted. To improve readability, information regarding the types of activity in each exercise domain/context was provided alongside the format in which the participant should respond. The survey was initially disseminated using social media platforms (e.g., Facebook) and emails followed by snowball sampling (e.g., participants sharing the survey link with others that expressed interest).
Data processing
Data were exported to the Statistical Package for Social Sciences (SPSS, Version 26, Chicago, IL). Outliers were removed from physical activity and sedentary behaviour metrics, defined as any data point that was > 300% of the upper or lower interquartile range bound for a given measurement at the cohort level (1.3% [n = 41] of data points were classified as outliers). For all physical activity data, participants had to initially confirm whether they did or did not exercise in a particular context (e.g., "did you do any vigorous-intensity sport, fitness or recreational (leisure) activities…" [Yes/No]). If participants selected 'No', a null result (valueless character) was generated for the succeeding questions relating to exercise frequency and duration for that activity, due to the questionnaire routing system used. To ensure that these data were included in the analysis, they were manually inputted (i.e., frequency = 0 and duration = 0 h). This avoided including only participants that had completed some form of activity, which would have overestimated the cohort's physical activity levels.
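The outlier and null-handling rules above can be illustrated with a short Python sketch. The IQR multiplier and function names are our assumptions about the stated "300% of the interquartile range" rule; the exact SPSS procedure may differ in detail:

```python
import numpy as np

def clean_activity_data(values, iqr_multiplier=3.0):
    """Drop any data point beyond 300% of the IQR from the quartile bounds
    (our reading of the paper's rule; multiplier is an assumption)."""
    arr = np.asarray(values, dtype=float)
    q1, q3 = np.nanpercentile(arr, [25, 75])
    iqr = q3 - q1
    lower, upper = q1 - iqr_multiplier * iqr, q3 + iqr_multiplier * iqr
    return arr[(arr >= lower) & (arr <= upper)]

def impute_nulls(did_activity, frequency, duration):
    """Participants answering 'No' to an activity get frequency = 0 and
    duration = 0 h instead of a null, so they stay in the analysis."""
    if not did_activity:
        return 0, 0.0
    return frequency, duration
```

Without the zero-imputation step, averaging over only the participants who reported an activity would inflate the cohort's apparent activity levels, as the text notes.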
Physical activity frequency per week and duration per session were multiplied to calculate weekly activity volume (activity volume = frequency [per week] × duration [h]). Additionally, metabolic equivalents (MET-h/week) were calculated from moderate- and vigorous-intensity activity data in accordance with the WHO GPAQ guidelines [20]. One MET is defined as the average energy expenditure of an adult sitting quietly: 3.5 ml O2/kg/min (or 1 kcal/kg/h) [22]. The 2008 Physical Activity Guidelines for Americans define moderate-intensity physical activities as 3.0-5.9 METs and vigorous intensities as ≥ 6 METs [23]. The WHO GPAQ data processing guidelines are similar; however, they use fixed MET values rather than a range. Therefore, in accordance with the guidelines, moderate-intensity activity (including travel via foot/cycle) and vigorous-intensity activity are assigned 4 METs and 8 METs, respectively [20]. For example, recreational physical activity MET-h/week = (moderate-intensity activity volume × 4) + (vigorous-intensity activity volume × 8). MET-h/week was calculated for day-to-day physical activity, travel via foot/cycle, and recreational physical activity independently. The sum of all activity METs was used to estimate total MET-h/week as an indicator of total weekly physical activity levels.
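The volume and MET conversion above reduce to simple arithmetic; a minimal sketch (function names are ours, MET values are the fixed WHO GPAQ values cited in the text):

```python
# Fixed MET values per the WHO GPAQ processing guidance:
# moderate activity (incl. travel by foot/cycle) = 4 METs, vigorous = 8 METs.
MODERATE_MET, VIGOROUS_MET = 4, 8

def weekly_volume(freq_per_week, hours_per_session):
    """Weekly activity volume = frequency (per week) x duration (h)."""
    return freq_per_week * hours_per_session

def met_hours_per_week(mod_freq, mod_hours, vig_freq, vig_hours):
    """MET-h/week for one domain = moderate volume x 4 + vigorous volume x 8."""
    return (weekly_volume(mod_freq, mod_hours) * MODERATE_MET
            + weekly_volume(vig_freq, vig_hours) * VIGOROUS_MET)
```

For example, three 1-h moderate sessions plus two 0.5-h vigorous sessions per week give 3×1×4 + 2×0.5×8 = 20 MET-h/week; total MET-h/week is then the sum of this quantity over the day-to-day, travel, and recreational domains.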
Statistical analyses
All statistical analyses were conducted using SPSS. All outcomes were assessed for normality via the Shapiro-Wilk test; none were normally distributed. Physical activity and sedentary behaviour levels for the entire cohort before and during the lockdown were compared using the Wilcoxon signed-rank test. Total physical activity levels (MET-h/week) and sedentary behaviour duration before and during the lockdown were sub-analysed in the following categories: biological sex (male; female), lockdown living location (city; town; village), marital status (single; partnership/married), educational status (degree qualification or above; no degree), lockdown average household annual income (above £25,000; below £25,000), and athletic status (athlete; non-athlete). Comparisons of the absolute change in total physical activity and sedentary behaviour levels between sub-groups were made using the Mann-Whitney U test (2 groups) or the Kruskal-Wallis test (≥ 3 groups). For all analyses, effect sizes (ES) were calculated using Hedges' g (0.2 = small, 0.5 = medium, 0.8 = large, and 1.3 = very large) [24]. Statistical significance was set at p < 0.05. All data are reported as medians and interquartile ranges.

Results

Table 1 presents participant characteristics. Table 2 presents physical activity levels for the cohort. Table 3 presents the change in total physical activity levels (MET-h/week) within and between sub-groups. Table 4 presents the change in sedentary behaviour duration for the cohort, and also within and between sub-groups.
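The non-parametric comparisons and effect sizes described under Statistical analyses can be sketched with scipy. The data below are illustrative, and `hedges_g` is our implementation of the bias-corrected standardized mean difference, not the paper's SPSS procedure:

```python
import numpy as np
from scipy import stats

def hedges_g(a, b):
    """Hedges' g: Cohen's d with a small-sample bias correction."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = len(a), len(b)
    pooled_sd = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                        / (na + nb - 2))
    correction = 1 - 3 / (4 * (na + nb) - 9)
    return (a.mean() - b.mean()) / pooled_sd * correction

# Illustrative paired data: weekly MET-h before vs during lockdown.
before = np.array([10, 12, 14, 9, 11, 13, 15, 8])
during = np.array([8, 10, 11, 7, 9, 12, 13, 6])

w_stat, w_p = stats.wilcoxon(before, during)       # paired, non-normal data
u_stat, u_p = stats.mannwhitneyu(before, during)   # 2-group sub-group contrast
g = hedges_g(before, during)                       # effect size
```

For ≥ 3 sub-groups (e.g., city/town/village), `stats.kruskal` plays the role of the Kruskal-Wallis test in the same way.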
Discussion
This study aimed to quantify changes in physical activity and sedentary behaviour levels before and during the first UK COVID-19 lockdown imposed on the 23rd of March 2020. To address the study's hypotheses: (1) weekly levels of total physical activity significantly reduced (−15%, p < 0.001), and (2) daily sedentary behaviour time significantly increased (+33%, p < 0.001). The requirement to remain at home unless for essential reasons during the initial UK lockdown resulted in reduced self-reported day-to-day and recreational physical activity and travel via foot/cycle, and increased sedentary behaviour. The lockdown restrictions and changing individual circumstances likely created barriers relating to the motivation and opportunity to engage in physical activity. Reduced opportunities to travel and the requirement to work from home may have also contributed to the increase in sedentary behaviour. It is apparent that reduced physical activity and increased sedentary behaviour were negative 'side-effects' of the initial UK lockdown; this is a cause for concern due to the health-related risks associated with these behaviours [8][9][10][11][12][13]. Cross-sectional research during the COVID-19 pandemic has identified significant associations between reduced physical activity levels and (independently) increased sedentary behaviour and poorer physical and mental health outcomes [25][26][27]. Because the first UK COVID-19 lockdown was enforced at a national level, these general negative effects are likely to have been experienced by millions of people. Our findings are supported by other related COVID-19 research, where data from periods of imposed national lockdowns in the UK and other countries show increased physical inactivity and sedentary time [25][26][27][28].
In residents of France and Switzerland, during their respective COVID-19 lockdowns, although vigorous-intensity physical activity decreased and sedentary behaviour increased, there was a concomitant increase in moderate-intensity physical activity participation [25]. This is contrary to the present study, where weekly moderate-intensity recreational activities were significantly reduced. Whilst this may reflect differences in restrictions and/or cultural responses to a lockdown, it may also be due to differences in statistical analysis methods, as we employed non-parametric tests because our data were not normally distributed. For all sub-group analyses conducted in the present study, no population displayed an increase in overall physical activity (see Table 3), and some displayed non-significant changes, such as individuals living in a town or village, non-athletes, or those that were partnered. Sub-group analysis identified that reductions in total physical activity levels were significantly greater (independently) for those that were single (compared to those in a relationship), those that identified as a competitive athlete (compared to non-athletes), and those with an average household income < £25,000 (compared to > £25,000). Despite failing to reach statistical significance (p = 0.056), there was an interesting trend in the changes in physical activity levels across living locations; effect sizes (ES) for those living in cities, towns, and villages were 0.53, 0.35, and 0.15, respectively, indicating that those living in urbanized places became disproportionately more physically inactive during the lockdown. Sub-group analysis for changes in sedentary time during lockdown also revealed that some populations became significantly more sedentary than their counterparts; namely, those that were single (compared to those in a relationship) and those that had an average household income < £25,000 (compared to those with > £25,000).
Periods of altered activity become particularly detrimental to long-term health when physical activity participation and sedentary time no longer meet WHO guidelines for optimal health [7]. However, relative to objective measures of physical activity and sedentary time (e.g., accelerometers), participants have previously been shown to overestimate physical activity and underestimate sedentary time when completing the WHO GPAQ [29]. It is, therefore, difficult to establish firm conclusions around the impact of lockdown on adherence to WHO physical activity and sedentary behaviour guidelines.
As the UK government continues to remove public health measures originally put in place to slow the spread of COVID-19, returning to a physically active lifestyle is of paramount importance to much of the population that was negatively impacted by the lockdown restrictions. Official guidelines from the WHO to engage in at least 150-300 min of moderate-intensity or 75-150 min of vigorous-intensity physical activity (or some combination of both) per week, plus strengthening activities at least twice a week, should be used as the benchmark for leading a physically active lifestyle [7]. Importantly, the guidelines acknowledge that even a few minutes of physical activity can incur positive health benefits [7]. Small modifications to daily habits (e.g., carrying the groceries or using the stairs rather than the lift) should, therefore, be encouraged to help people increase activity levels in a practical and achievable manner [30]. This is particularly important as working from home has now become more normalized and could create barriers to returning to physical activities and reducing the sedentary time that occurred during the lockdown, for example, in people who were previously required to commute but whose jobs can now be partially or completely conducted at home. Health practitioners are recommended to consult and employ a risk-stratification approach when overseeing a patient or client's return to physical activity after COVID-19 infection/illness; readers are directed to Salman et al. [30] for more information on this matter.

[Table 3 caption: Total physical activity levels before and during lockdown. Data are presented as median (interquartile range). Data were analysed before and during the lockdown within groups using the Wilcoxon signed-rank test; the change in physical activity levels was compared between groups using the Mann-Whitney U test (2 groups) or the Kruskal-Wallis test (≥ 3 groups). Living area effect sizes (ES) were calculated based on the following comparisons: …]
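The WHO aerobic benchmark quoted above can be expressed as a simple check, using the common 2:1 moderate-to-vigorous equivalence implied by the paired 150/75-min thresholds (the combination rule here is our reading of "some combination of both"):

```python
def meets_who_aerobic_guideline(mod_min_per_week, vig_min_per_week):
    """WHO 2020 aerobic guideline: >= 150 min moderate, or >= 75 min
    vigorous, or an equivalent combination (1 vigorous min counted as
    2 moderate min)."""
    moderate_equivalent = mod_min_per_week + 2 * vig_min_per_week
    return moderate_equivalent >= 150
```

So, for example, 100 moderate minutes plus 30 vigorous minutes per week (100 + 60 = 160 moderate-equivalent minutes) would meet the aerobic component, though the strengthening-activity component must be checked separately.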
Strengths and limitations
The strengths of this study are that: (1) important cross-sectional data were collected from a large sample of UK adults during the initial COVID-19 lockdown, and (2) individuals with recent changes in health status (e.g., COVID-19 illness) that may have influenced physical activity or sedentary behaviour were excluded from the analyses.
The main limitation of the present study was the retrospective collection of pre-lockdown data. To reduce the risk of recall bias, simple questions were used, and the research design, ethical approval, survey dissemination, and survey closure were fast-tracked. However, given the fast-evolving nature of the pandemic in its early stages, retrospective data collection was the only method available to retrieve these data. Further, it was logistically and financially difficult to collect direct measures of physical activity (e.g., accelerometers), and therefore indirect self-reported measures were collected through an online survey. Whilst this improved the available sample size, the present study's findings should be interpreted as estimates. As the survey was disseminated via snowball sampling, self-selection bias may have led to an overrepresentation of a population that had more free time to complete an online survey. Therefore, the findings may underrepresent busy populations such as keyworkers, overtime workers, or those with heightened levels of responsibility (childcare, home schooling, caring, etc.).
Conclusion
During the initial month of the first UK lockdown (enforced on the 23rd of March 2020) in response to the COVID-19 pandemic, self-reported levels of physical activity significantly decreased (−15%, p < 0.001) and sedentary behaviour significantly increased (+33%, p < 0.001) in UK adults compared to before the lockdown. Now that the UK is transitioning to a phase of normal living, it is important that individuals are encouraged and supported to gradually return to/increase levels of physical activity, using the 2020 WHO physical activity and sedentary behaviour guidelines for goal setting. Modifications to daily habits (e.g., carrying the shopping, using the stairs instead of a lift, or using a standing desk) should be acknowledged as effective and practical ways to increase physical activity levels and/or reduce sedentary behaviours for the promotion of one's health. A risk-stratification approach is recommended when returning individuals that have been infected by COVID-19 to physical activity, to minimize health risks.
Funding This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
Data availability Data supporting the present study are available in the online public repository Zenodo at: https://doi.org/10.5281/zenodo.4501530
Declarations
Conflict of interest None.
Ethical approval
The present study was approved by the Anglia Ruskin University Ethics Committee (Approval Number: SES_STAFF_19-10).
Consent to participate All participants provided informed consent via the online questionnaire and had the opportunity to withdraw from the questionnaire at any time without reason.
Consent for publication
All participants were informed prior to providing informed consent that the collected data are to be written up as a scientific publication.
Informed consent This study was conducted in accordance with the Declaration of Helsinki. Due to the nature of the study, informed consent was provided by all participants via the online questionnaire.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
"year": 2022,
"sha1": "e80121fcca2a91d57ea3eec47bb04f60fe18d148",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s11332-022-01012-0.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "92d2d30df37db2e5d41e59d448adc02497e801ba",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Mathematical Modeling of Interleukin-35 Promoting Tumor Growth and Angiogenesis
Interleukin-35 (IL-35), a cytokine from the Interleukin-12 cytokine family, has been considered as an anti-inflammatory cytokine which promotes tumor progression and tumor immune evasion. It has also been demonstrated that IL-35 is secreted by regulatory T cells. Recent mouse experiments have shown that IL-35 produced by cancer cells promotes tumor growth via enhancing myeloid cell accumulation and angiogenesis, and reducing the infiltration of activated CD8 T cells into tumor microenvironment. In the present paper we develop a mathematical model based on these experimental results. We include in the model an anti-IL-35 drug as treatment. The extended model (with drug) is used to design protocols of anti-IL-35 injections for treatment of cancer. We find that with a fixed total amount of drug, continuous injection has better efficacy than intermittent injections in reducing the tumor load while the treatment is ongoing. We also find that the percentage of tumor reduction under anti-IL-35 treatment improves when the production of IL-35 by cancer is increased.
Introduction
Interleukin-35 (IL-35) is a member of the IL-12 cytokine family. It is produced in human cancer tissues such as melanoma, B cell lymphoma [1], lung cancer, colon cancer, esophageal carcinoma, hepatocellular carcinoma, cervical carcinoma, and colorectal cancer [2,3], and it plays important roles in tumor progression and tumor immune evasion [1]. Foxp3+ regulatory T cells (Tregs) are common in the tumor microenvironment [4,5], where they induce immune suppression. They do so by producing various cytokines, including TGF-β, IL-10 [6], and IL-9 [7], thereby promoting tumor growth. It was also shown that Tregs secrete IL-35 [8][9][10][11][12][13][14]. IL-35 functions through IL-35R on various cell types and is a potent immune suppressor. Indeed, Treg-derived IL-35 was shown to inhibit the antitumor T cell response [15], whereas IL-35-deficient Tregs have significantly reduced activity in vitro and in vivo [8]. Stable expression of EBI3, a gene that codes for an IL-35 subunit, confers growth-promoting activity in lung cancer, whereas small interfering RNA silencing of EBI3 inhibits proliferation of lung cancer cells [16].
Recently, Wang et al. [1] generated IL-35-producing plasmacytoma cancer cells and showed that the expression of IL-35 in the tumor microenvironment increased the number of myeloid-derived suppressor cells (MDSCs) and promoted tumor angiogenesis; furthermore, IL-35 inhibited the infiltration of cytotoxic T lymphocytes (CTLs) into the tumor microenvironment and rendered the cancer cells less susceptible to CTL destruction.
These experimental results suggest that blocking IL-35 may be an effective therapeutic approach to human cancer. To explore this possibility, we develop in the present paper a mathematical model and then conduct in silico experiments to evaluate to what extent blocking IL-35 reduces tumor growth.
The model consists of a system of partial differential equations (PDEs) that involve interactions among cells (tumor cells, MDSCs, T cells, Tregs, endothelial cells) and cytokines (M-CSF, TGF-β, VEGF, IL-35). We first consider the situation which corresponds to the experiments in Wang et al. [1]. In these experiments, two kinds of plasmacytoma cells were injected into wild-type mice: tumor cells that had been transfected with IL-35 (J558-IL-35), so that the tumor secretes a high amount of IL-35 into the microenvironment, and 'normal' plasmacytoma cells (J558-Ctrl) that secrete a very small amount of IL-35. There is also a small amount of IL-35 produced by MDSCs [17,18], as well as IL-35 produced by Tregs [8][9][10][11][12][13][14]. We show that the model simulations agree with the experimental data in [1]. We also introduce, in this model, the effect of a drug which inhibits the production of IL-35, and simulate various protocols for administering the drug. We find that administering the drug frequently in small amounts yields better results than administering it infrequently in larger amounts. We also find that the percentage of tumor reduction under an anti-IL-35 drug improves when the production of IL-35 by the cancer is increased.
Mathematical model
The mathematical model is based on the network shown schematically in Figure 1. Cancer cells secrete M-CSF, which attracts MDSCs; cancer cells and MDSCs secrete VEGF, which triggers angiogenesis by attracting endothelial cells and enhancing their proliferation. The additional roles of MDSCs are described in the caption of Figure 1. In particular, MDSCs inhibit the activation of CD8+ T cells via IL-10 and a variety of other mechanisms.
As mentioned in the Introduction, Wang et al. [1] considered two kinds of tumor cells injected into mice: J558-IL-35 and J558-Ctrl. In the case of J558-IL-35, IL-35 is produced mostly by tumor cells, less by Tregs, and little by MDSCs. In the case of J558-Ctrl, cancer cells produce a very small amount of IL-35, so that IL-35 comes mainly from Tregs and MDSCs. MDSCs secrete TGF-β and IL-10, which promote Tregs [19,20], and there is a positive feedback loop in which Treg activation is in turn driven by TGF-β and IL-10.
We use the network described in Figure 1 to construct a system of partial differential equations. In order to simplify the computations, we assume that the tumor and all the variables are radially symmetric. The variables of the model and their dimensions are listed below. We proceed to write down the differential equation for each of the variables. Most of the parameters are taken from the literature, as indicated; in Methods we explain how the remaining parameters were estimated.
Tumor cell (c). The density c(r,t) of tumor cells satisfies the following equation:

∂c/∂t = D_c Δc + λ_1(w) c (1 − c/c*) − λ_2(w) c − d_c c − g_c T c,   (1)

where w_0 is the oxygen level in healthy tissue, and the levels of oxygen for necrotic, extremely hypoxic, and intermediate hypoxic states vary in the intervals [0, w_n], (w_n, w_h], and (w_h, w_0], respectively. The first term on the right-hand side of Equation (1) represents the dispersion (or diffusion) of tumor cells with diffusion coefficient D_c. The second term accounts for tumor proliferation, which depends on the concentration of oxygen w(r,t) and the tissue carrying capacity c*. The third and fourth terms represent the death of tumor cells by necrosis and apoptosis, respectively. The last term accounts for the killing of tumor cells by activated CD8+ T cells [21]. The parameters in Equation (1) are listed in Table 1.

[Figure 1 caption: MDSCs promote Tregs, but also secrete MCP-1 to attract macrophages into the tumor microenvironment. Macrophages secrete IL-12 to activate CD4+ T cells, and CD4+ T cells secrete IL-2, which activates CD8+ T cells. MDSCs also produce a large amount of IL-10, which inhibits the chemotaxis and activation of CD4+ T cells. doi:10.1371/journal.pone.0110126.g001]
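The reaction (non-spatial) part of the tumor-cell equation can be sketched as follows. Parameter names mirror the text, but the oxygen-threshold gating and all numerical values here are illustrative placeholders, not the paper's calibration:

```python
def tumor_rhs_local(c, w, T, p):
    """Local right-hand side of the tumor-cell equation: oxygen-gated
    logistic proliferation, necrotic and apoptotic death, and killing by
    activated CD8+ T cells. Gating logic and values are placeholders."""
    lam1 = p["lam1"] if w > p["w_h"] else 0.0   # proliferate only above hypoxia threshold (assumed)
    lam2 = p["lam2"] if w <= p["w_n"] else 0.0  # necrosis only at near-zero oxygen (assumed)
    growth = lam1 * c * (1.0 - c / p["c_star"])
    death = lam2 * c + p["d_c"] * c             # necrosis + apoptosis
    killing = p["eta_c"] * T * c                # killing by CD8+ T cells
    return growth - death - killing
```

With well-oxygenated tissue and no T cells, the density grows logistically toward c*; below the necrotic threshold w_n, the sign flips and the local density decays, which is the mechanism behind the hollow tumor profiles discussed under Numerical simulation.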
M-CSF (q). The concentration of M-CSF is given by the equation:

∂q/∂t = D_q Δq + a_q c − μ_q q.   (2)

The first term on the right-hand side is the diffusion of M-CSF with coefficient D_q. The second term represents the M-CSF secreted by tumor cells [19,22], and the last term is the decay of M-CSF. The parameters in Equation (2) are listed in Table 2.
MDSC (M). The first and last terms on the right-hand side of Equation (3) account for the source and death of MDSCs. MDSCs undergo dispersion as well as chemotaxis driven by M-CSF (the third and fourth terms) [23][24][25]. It was reported in [1] that MDSCs do not undergo chemotaxis toward IL-35 in in vitro experiments. However, it has been observed that the differentiation of MDSCs from myeloid precursor cells is enhanced by IL-35, although the mechanism is currently unknown [1]. We assume that this mechanism gives rise to the second term on the right-hand side of Equation (3). The fifth term accounts for the differentiation of MDSCs from myeloid cells promoted by M-CSF [26]. The parameters in Equation (3) are listed in Table 3.
Regulatory T cell (R). The density of regulatory T cells is governed by Equation (5). Treg is activated by TGF-β (the third term on the right-hand side) and by IL-10. IL-10 is secreted by MDSCs [19,20]; for simplicity, we do not introduce IL-10 explicitly, and represent the activation of Treg by IL-10 by the term d_M M/(M + s_R). The parameters in Equation (5) are listed in Table 5.
Activated CD8+ T cell (T). Cytotoxic T cells (CTLs), or CD8+ T cells, satisfy Equation (7). MDSCs secrete MCP-1, which exerts a chemotactic force on macrophages [39,40], while macrophages secrete IL-12, which activates CD4+ T cells [41], and CD4+ T cells produce IL-2 [42,43], which activates CD8+ T cells. The activation of CD8+ T cells is inhibited by TGF-β [44][45][46]. For simplicity, we combine all these processes by attributing the chemotactic force on CD8+ T cells and the activation source of CD8+ T cells to MDSCs (the terms in square brackets in Equation (7)). The factor s_M/(s_M + a_1 M) represents the fact that MDSCs suppress CD8+ T cell proliferation by amino acid metabolism. The parameters in Equation (7) are listed in Table 7.

VEGF (h). The concentration of VEGF evolves according to Equation (8), in which λ_5(w) = λ_5 φ(w) and λ_6(w) = λ_6 φ(w) depend on the oxygen concentration w, and w* ∈ (w_h, w_0) is the threshold at which the hypoxic effect on VEGF production by tumor cells and MDSCs is maximal. The function φ(w) is chosen such that tumor cells and MDSCs can secrete VEGF under mildly hypoxic conditions. The second term on the right-hand side of Equation (8) represents the VEGF produced by tumor cells and enhanced by I_35 [1], and the third term accounts for VEGF produced by MDSCs and enhanced by M-CSF [47]; accordingly, the ratios k_1/s_h and k_2/q_0 should be small. The parameters in Equation (8) are listed in Table 8.
Endothelial cell (EC) (e). The equation for the density of ECs, Equation (9), includes dispersion, chemotaxis by VEGF, and proliferation induced by VEGF. Here e_1 is the maximal density of ECs inside the tumor, and H(·) denotes the Heaviside step function. The last term, taken from [22], reflects the fact that VEGF induces proliferation of ECs when the concentration of VEGF is higher than the threshold h_1. The parameters in Equation (9) are given in Table 9.
Oxygen (w). We model the concentration of oxygen by Equation (10). Oxygen is delivered by ECs (the first term) and is taken up by CD8+ T cells (the third term), MDSCs (the fourth term), Tregs (the fifth term), and tumor cells (the last term). The parameters in Equation (10) are listed in Table 10.
We assume that the tumor is radially symmetric and is contained in a sphere 0 ≤ r ≤ L, where L = 1.5 cm.
We next introduce the initial and boundary conditions for each of the variables.
Initial conditions. We assume that the tumor cells are concentrated initially near r = 0, and take an initial profile with a positive parameter ε, 0 < ε ≤ 1, and scaling parameters c_0 = 7.2 × 10^8 cell/cm³ and L_0 = 0.5 cm. Since M-CSF is secreted by tumor cells, we take the initial concentration of M-CSF to be similar to the density of tumor cells, where the constant a_q/μ_q comes from the steady-state equation for q.
Since tumor cells are concentrated at the center r = 0, we assume that the MDSC density is highest at the center and negligible near the boundary r = L, where the constant s_0/μ_M comes from the steady-state form of Equation (3). We assume that initially there are no activated CD8+ T cells, and take T(r,0) = 0.
The activation of Tregs and the production of I_35 and VEGF are triggered by tumor cells and MDSCs; accordingly, we take initial profiles proportional to these cell densities, with I_35^0 = 10^2 pg/cm³ and h_0 = 10^3 pg/cm³. Similarly, I_b is produced by tumor cells and Tregs, so accordingly we take an analogous initial profile with I_b^0 = 2.4 × 10^3 pg/cm³. Endothelial cells migrate into the tumor from the surrounding normal healthy tissue, so we take an initial EC profile determined by e_0, the density of endothelial cells in normal healthy tissue. Finally, since endothelial cells represent capillaries through which oxygen is delivered, we prescribe the initial oxygen concentration in terms of w_0, the oxygen concentration in normal healthy tissue.

Boundary conditions. Since we assume radial symmetry, the first r-derivative of each variable vanishes at r = 0. We assume a no-flux condition at r = L for all the variables except the oxygen and endothelial cells; for the latter, m is the flux rate of ECs from healthy normal tissue into the tumor microenvironment.

Parameter nondimensionalization. We nondimensionalize Equations (1)-(10) by scaling each variable by a reference value (cell densities by c_0, M_0, T_0, R_0, e_0; cytokine concentrations by q_0, I_35^0, I_b^0, h_0; oxygen by w_0; length by L and time by a reference time τ). The dimensional and nondimensional values of all the parameters of Tables 1-10 are summarized in Tables 11 and 12. After dropping the symbol '^', the model equations take their nondimensional form.
Numerical simulation
In accordance with the experiments in Wang et al. [1], we consider two types of mouse plasmacytoma J558 cells in wild-type mice: (i) J558-Ctrl tumor cells, which secrete a very small amount of I_35, and (ii) J558-IL-35 tumor cells, which secrete a large amount of I_35.
We use MATLAB with dr = 1/40 and dt = 7/216000 in nondimensional variables (i.e., dr = 1/80 cm and dt = 7/72000 day in dimensional variables). Figure 2 displays the spatial distributions of tumor cell density in cases (i)-(ii) at different times. We note that, in Figure 2, as time goes on, tumor cells migrate toward the boundary r = 1.5 cm, where oxygen is rich, while tumor cell density is lower near the center r = 0 cm, where oxygen is sparse. The migration speeds in the two cases (i)-(ii) are similar to each other, but tumor cells with larger I_35 production (i.e., the J558-IL-35 case) have a higher peak during migration. The results of Wang et al. [1] were reported 2 weeks after injection of tumor cells into mice. Hence, we compare our simulations at the end of the second week with the results in [1]. In Figure 3(C), the ratio of MDSC in J558-IL-35 to J558-Ctrl is 2, which is the same as in Figure 5A of [1]. In Figure 3(H), the ratio of VEGF in J558-IL-35 to J558-Ctrl is 17, which is approximately the same as in Figure 4D of [1]. Next, we compare the ratio of Treg/CD8+ T cells in J558-IL-35 to J558-Ctrl with the result in [1]. However, [1] only reports the percentages of CD8+/CD45+, CD4+/CD45+, and Foxp3+/CD4+. By combining these results (Figures 7B, 7D, and 7E in [1]), we find that this ratio (for Treg/CD8+ T cells) is 0.54.
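The radially symmetric PDEs are advanced on a uniform grid in r with the step sizes dr and dt quoted above. A minimal explicit finite-difference step for a single diffusion equation is sketched below; this is our illustration of the kind of scheme involved, whereas the paper's MATLAB code treats ten coupled equations with reaction and chemotaxis terms:

```python
import numpy as np

def radial_diffusion_step(u, D, dr, dt):
    """One explicit Euler step of du/dt = D * (u_rr + (2/r) u_r) on a
    radial grid, with symmetry at r = 0 and no-flux at r = L."""
    n = len(u)
    new = u.copy()
    for i in range(1, n - 1):
        r = i * dr
        u_rr = (u[i + 1] - 2 * u[i] + u[i - 1]) / dr**2
        u_r = (u[i + 1] - u[i - 1]) / (2 * dr)
        new[i] = u[i] + dt * D * (u_rr + 2.0 / r * u_r)
    # r = 0: symmetry (u_r = 0) gives Laplacian -> 6 (u[1] - u[0]) / dr^2
    new[0] = u[0] + dt * D * 6.0 * (u[1] - u[0]) / dr**2
    # r = L: no-flux via a ghost point u[n] = u[n-2]
    new[-1] = u[-1] + dt * D * 2.0 * (u[-2] - u[-1]) / dr**2
    return new
```

For explicit stepping, dt must satisfy a stability bound of order dr²/D, which is one reason the quoted dt values are so much smaller than dr.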
The production terms of I_35 in Equation (4) are ν_c c (production by tumor) and ν_R R (production by Treg). The arrival of MDSCs to the tumor microenvironment is somewhat delayed, and therefore the number of CD8+ T cells in the control case is significantly less than in the J558-IL-35 case, while (for simplicity) our model does not include such a time delay. The subunits of IL-35, EBI3 and IL-12p35, are highly expressed in cancers such as lung cancer, colorectal cancer, and esophageal carcinoma [2,3]. An anti-IL-35 drug blocks the expression of IL-35 and could be an agent for treating these cancers [48]. To determine the effect of an anti-IL-35 drug on cancer growth, we proceed to introduce it, as a drug, into our model. If we denote its concentration by f(r,t), then all we need to do is to modify the I_35 production terms in Equation (4) accordingly. We make the pharmacokinetic assumption that f(r,t) decreases in r from the outer boundary of the tumor (r = 1.5 cm) toward the center of the tumor (r = 0), and take a = L² (= 2.25 cm²) and F = 10. We shall compare several dosing schedules:

(i) no dosing of anti-IL-35, i.e., f(r,t) = 1 for all t and 0 ≤ r ≤ L;

(ii) continuous dosing with anti-IL-35 at fixed level F for 2 months,

f(r,t) = F (r² + a)/(L² + a), for 0 ≤ r ≤ L and 0 ≤ t ≤ 2 months;   (16)

(iii) intermittent dosing for 2 months, at double level 2F, one week at a time with one-week spacing between doses,

f(r,t) = 2F (r² + a)/(L² + a) for 0 ≤ r ≤ L and t_{2i} ≤ t < t_{2i+1}, and f(r,t) = 0 for 0 ≤ r ≤ L and t_{2i+1} ≤ t < t_{2(i+1)}, for i = 0, 1, 2, 3,

where t_0 = 0 and the length of each interval [t_j, t_{j+1}] is one week.
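The dosing schedules can be collected into one function. This is a sketch: we take "2 months" as 8 one-week intervals, and the behavior after the treatment window (baseline f = 1 for the continuous schedule, f = 0 for the intermittent one) is our assumption about how the protocols extend beyond the formulas above:

```python
L, F = 1.5, 10.0        # tumor radius (cm) and drug level from the text
a = L**2                # a = L^2 = 2.25 cm^2

def drug_profile(r, t_days, schedule):
    """Drug factor f(r, t): largest at the tumor boundary r = L and
    decreasing toward the center, per the pharmacokinetic assumption."""
    spatial = (r**2 + a) / (L**2 + a)
    if schedule == "none":                # case (i): f = 1 everywhere
        return 1.0
    if schedule == "continuous":          # case (ii): fixed level F for 8 weeks
        return F * spatial if t_days < 56 else 1.0
    if schedule == "intermittent":        # case (iii): 2F on alternating weeks
        week = int(t_days // 7)
        if week < 8 and week % 2 == 0:
            return 2 * F * spatial
        return 0.0                        # off weeks follow the f = 0 convention
    raise ValueError(f"unknown schedule: {schedule}")
```

Note that the spatial factor equals 1/2 at the center and 1 at the boundary, so the drug level at r = 0 is always half the boundary level regardless of schedule.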
We use MATLAB with dr = 1/80 cm and dt = 7/24000 day in dimensional variables. Figure 4 also shows that the rate of reduction by anti-IL-35 is larger when tumor cells secrete a higher amount of IL-35, as in lung cancer and colorectal cancer [2,3], than when they secrete a lower amount, as in plasmacytoma [1]. Accordingly, as a_35 increases, the reduction in the total tumor population becomes increasingly significant.
Sensitivity analysis
In this section we perform sensitivity analysis on the parameters (in dimensional form), including those that were only roughly estimated and those that play an important role in the model. We list these parameters with their ranges, baselines, and units in Table 13. We use the method described in Marino et al. [49], applying Latin hypercube sampling to generate 500 samples with dr = 1/40 cm and dt = 7/12000 day. Since we focus on how the anti-IL-35 drug inhibits tumor growth, we calculate the partial rank correlation coefficients (PRCC) and p-values with respect to the ratio C of the total tumor load (integrated over the tumor region) with drug to that without drug. Table 14 lists the PRCC and their p-values. Figure 5 plots the PRCC of the parameters with p-values smaller than 0.01. A negative PRCC (i.e., negative correlation) with p-value smaller than 0.01 means that increasing this parameter value will decrease the value of C and hence increase the (relative) efficacy of the drug. A positive PRCC with p-value smaller than 0.01 has the opposite meaning; that is, increasing the parameter will decrease the efficacy of the drug. In Table 14, only g_c, e_1, λ_5, s_M, s_b, a_35, and b_35 have negative PRCC with p-value smaller than 0.01. The most significant negatively correlated parameter is g_c. Larger λ_5 increases the production of VEGF and larger a_35 increases the production of I_35, and both increase the tumor load. The negative correlation of these parameters shows that the drug is more effective for tumors with higher rates of production of VEGF and IL-35. On the other hand, the negative correlation of g_c shows that the efficacy of the drug improves when the CD8+ T cells are more effective in killing tumor cells. However, one cannot conclude that, in general, the drug efficacy increases with larger tumor load, since larger g_c and s_b shrink the tumor load yet yield better drug efficacy. Similar results hold for the parameters with positive PRCC. For example, larger λ_1 and s_0 lead to a higher tumor cell population while the drug efficacy is decreased.
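The Latin hypercube sampling and PRCC computation follow the standard recipe of Marino et al. [49]; a compact sketch (our implementation, not the paper's code):

```python
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(0)

def latin_hypercube(n, bounds):
    """n samples: one draw per equal-probability stratum per parameter,
    with the strata independently shuffled across parameters."""
    k = len(bounds)
    u = (rng.random((n, k)) + np.arange(n)[:, None]) / n
    for j in range(k):
        rng.shuffle(u[:, j])
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    return lo + u * (hi - lo)

def prcc(X, y):
    """Partial rank correlation: rank-transform, then correlate the
    residuals of each parameter and the output after regressing out
    the remaining parameters."""
    Xr = np.column_stack([rankdata(X[:, j]) for j in range(X.shape[1])])
    yr = rankdata(y)
    coeffs = []
    for j in range(Xr.shape[1]):
        A = np.column_stack([np.ones(len(yr)), np.delete(Xr, j, axis=1)])
        rx = Xr[:, j] - A @ np.linalg.lstsq(A, Xr[:, j], rcond=None)[0]
        ry = yr - A @ np.linalg.lstsq(A, yr, rcond=None)[0]
        coeffs.append(np.corrcoef(rx, ry)[0, 1])
    return np.array(coeffs)
```

In the paper's setting, each sampled parameter vector would be run through the PDE solver to produce the output C, and `prcc` would then be applied to the 500 × (number of parameters) sample matrix against the 500 values of C.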
Discussion
IL-35 is the most anti-inflammatory cytokine within the IL-12 cytokine family. In this paper we addressed the questions of to what extent IL-35 is involved in the tumor microenvironment and how effective an anti-IL-35 drug is in reducing tumor growth. It is well known that Tregs are present in the tumor microenvironment. We next extended the model to include anti-IL-35 as an anticancer drug. We compared the efficacy of the drug under two schedules: continuous versus intermittent injection of the same total amount of the drug. We found that continuous injection has better efficacy while the treatment is ongoing. Since it is well known that some cancers, including lung and colorectal cancers, most likely secrete large amounts of IL-35, we also investigated the efficacy of the drug for such cancers. We found that the percentage of tumor reduction under the anti-IL-35 drug improves when the production of IL-35 by the cancer is increased.
There are currently only a few experimental results against which our model can be tested. In recent experiments by Nicholl et al. [50] it was demonstrated that IL-35 promotes pancreatic cancer cell proliferation while anti-IL-35 reduces this promotion. More specifically, in Figure 3 of Nicholl et al. [50] it is shown that IL-35 (50 ng/ml) increases, on average, by 100% the proliferation of colonies of several pancreatic cancer cell lines, while in the presence of anti-IL-35 (200 ng/ml) this increase is reduced to 50%. These in vitro results are in qualitative agreement with our results in Figure 3 (at week 8). Another example is taken from colorectal cancer in patients. As reported in Zeng et al. [2], Foxp3+ Treg increases linearly with IL-35, and this is in qualitative agreement with Figures 3D and 3E of our simulations. As more experimental and clinical data become available, we should be able to test our model in a more quantitative way, so that the model can be further refined.
In this paper we focused on the role of IL-35, although Tregs secrete, besides IL-35, other tumor-promoting cytokines such as IL-10 and IL-9 [7,51-54]; these were not included directly in the present model, since we wanted to base the model on the recent experimental data by Wang et al. [1]. When data for other cytokines become available with the same precision as, for instance, in [1], our model could be extended to include these cytokines and to obtain a more comprehensive evaluation of anti-IL-35 efficacy in combination with other drugs. We assume that, due to the secretion of IL-35, the production of MDSCs in the present model is larger than the production assumed in [56], so we have taken s_0 and a_M to be larger than in [56].
Estimate of D_I35 and m_35 in Equation (4)

Since IL-35 belongs to the IL-12 family, we assume that its diffusion coefficient and its degradation rate are the same as for IL-12 [60-63]. In the in vivo experiments of Wang et al. [1] the initial number of cancer cells injected was 5×10^6, and we assume that they occupy a volume of 50 mm^3. There is no data in [1] on the density of the tumor cells at day 15, but the tumor cells were observed to grow rapidly in the first 15 days. We assume that the average density of tumor cells in the first 15 days is very close to the maximal capacity 10^9 cell/cm^3 and take, in (23), c = 10^9 cell/cm^3 for J558-IL-35 tumor cells. Recalling Equation (18), with m_35 = 2/day (Table 4), we obtain the corresponding production rate. IL-35 is produced mostly by Tregs [11,13,14,27], little by MDSCs, and very little by tumor cells. Hence, in the J558-Ctrl case, we take the production rate of I_35 by tumor cells to be a_35 = 10^{-7} pg/cell/day.
The production rate of I_35 by Tregs is estimated to be b_35 = 1.67×10^{-3} pg/cell/day [34], and we take the production rate of I_35 by MDSCs to be small enough, i.e., c_35 = 10^{-4} pg/cell/day, so that the production of I_35 in the J558-IL-35 case is consistent.

Estimate of d_b, s_b, d_M, and s_R in Equation (5)

In [38], the cytokine signaling by TGF-b on Tregs is modeled with d̃_b = 33.27/day, which has dimension 1/day, and s̃_b = 1, which is dimensionless. In our Equation (5), the dimension of d_b is cell/cm^3/day and the dimension of s_b is pg/cm^3. Correspondingly, we take d_b = d̃_b × R_0 = 33.27/day × 10^5 cell/cm^3 = 3.327×10^6 cell/cm^3/day and s_b = s̃_b × I_b^0 = 1 × 2.4×10^3 pg/cm^3 = 2.4×10^3 pg/cm^3, where I_b^0 ≈ 2.4×10^3 pg/cm^3 [64]. MDSCs also activate the Treg population. We assume that the activation of Tregs by MDSCs is weaker than the activation of Tregs by TGF-b, and hence take it to be d_M = (3/8) d_b ≈ 1.25×10^6 cell/cm^3/day. We also take s_R = 10^7 cell/cm^3.
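The dimensionalization above amounts to simple rescaling, which can be checked directly; the values below are taken from the estimates in this section.

```python
# Quick check of the dimensionalization (units: cell/cm^3, pg/cm^3, 1/day).
d_b_tilde = 33.27        # 1/day, nondimensional model of [38]
R0 = 1e5                 # cell/cm^3, reference Treg density
d_b = d_b_tilde * R0     # cell/cm^3/day, expected 3.327e6

s_b_tilde = 1.0          # dimensionless, from [38]
I_b0 = 2.4e3             # pg/cm^3, reference TGF-beta level [64]
s_b = s_b_tilde * I_b0   # pg/cm^3, expected 2.4e3

# Activation of Tregs by MDSCs, taken weaker than by TGF-beta.
d_M = (3.0 / 8.0) * d_b  # cell/cm^3/day, roughly 1.25e6
```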
Estimate of n_c and n_R in Equation (6)

We assume as before that the initial tumor occupies a volume of 50 mm^3 and, accordingly, that the Tregs occupy the same volume. The production rates of I_b by tumor cells and by Tregs are given in [34].

Estimate of s_M, b_1, b_2, a_1, a_2, a_3, and c_5 in Equation (7)

Since IL-35 enhances the population of MDSCs, the concentration of IL-10, which we represent by a_1 M, is larger than the one in [56]. Hence, we chose s_M to be larger than the corresponding value of s_M in [56]. Moreover, since IL-35 promotes tumor growth, we expect a stronger immune response by T cells than in [56] and hence take b_1 and b_2 larger than the corresponding values in [56]. The parameter c_5 is taken from [56]. Since the chemotaxis and activation of CD8+ T cells are indirect, we take a_2 and a_3 to be smaller than a_1: a_1 = 2 pg/cell and a_2 = a_3 = 0.01 pg/cell.
"year": 2014,
"sha1": "3bfa8b6b72a72622ee53c7d223a12a4098cfe144",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0110126&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3bfa8b6b72a72622ee53c7d223a12a4098cfe144",
"s2fieldsofstudy": [
"Mathematics",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
TRPV1 and GABAB1 in the Cerebrospinal Fluid-Contacting Nucleus are Jointly Involved in Chronic Inflammatory Pain in Rats
Objective To assess whether TRPV1 and GABAB1 receptors are colocalized in the cerebrospinal fluid-contacting nucleus (CSF-contacting nucleus) of rats with chronic inflammatory pain (CIP), providing insight for reducing chronic pain. Methods A rat model of CIP was constructed by plantar injection of complete Freund's adjuvant (CFA), and the paw withdrawal mechanical threshold (PWMT) and paw withdrawal thermal latency (PWTL) were measured 1, 3, 5, 7, 10, and 14 days after plantar injection. In the first part of the experiment, rats with CIP were divided into an immunofluorescence group and a coimmunoprecipitation (Co-IP) group (n = 6). Rats in the immunofluorescence group were injected with the retrograde tracer CB conjugated with Alexa Fluor 594 into the lateral ventricle two days before the injection of CFA into the plantar surface of the left paw. Three days later, rats that exhibited hyperalgesia were perfused, and their brains were extracted and used for double immunofluorescence staining of the CSF-contacting nucleus. Rats in the Co-IP group were anesthetized and dissected 3 days after CFA injection, and fresh brain segments containing the CSF-contacting nucleus were collected for Co-IP to assess the colocalization of TRPV1 and GABAB1 in the CSF-contacting nucleus (n = 6). In the second part of the experiment, SD rats were divided into a normal saline group (control group) and a CFA group. Fresh CSF-contacting nucleus-containing tissues were collected for Western blot analysis 3 days after plantar injection to observe the changes in TRPV1 and GABAB1 expression in the CSF-contacting nucleus. Results TRPV1 and GABAB1 were co-expressed in the CSF-contacting nucleus in rats with CIP, and their expression was upregulated. Conclusion TRPV1 and GABAB1 in the CSF-contacting nucleus are jointly involved in CIP in rats, and there is a direct or indirect link between TRPV1 and GABAB1.
Introduction
Chronic inflammatory pain (CIP) is pain caused by persistent or unresolved inflammation. Multiple inflammatory stimuli can sensitize nociceptive neurons, thereby promoting pain hypersensitivity. However, animal experiments and clinical practice have shown that inhibition of individual inflammatory pathways is a problematic approach for relieving pain, as parallel signaling cascades can drive pathologic pain and thus promote pain sensitization. 1 It is known that ion channels and receptors in the dorsal root ganglia (DRG) are responsible for the detection of noxious stimuli, and their plasticity contributes to the increased severity of pain. TRP (transient receptor potential) channels are emerging targets for understanding this process and developing novel treatments. Their ability to form multimeric complexes broadens the variety and complexity of channel regulation and the potential implications for pain modulation. 2 Among TRP channels, the capsaicin receptor TRPV1, a non-selective cation channel, is a key molecular component of pain detection and modulation. Sensitization of TRPV1 is central to the initiation of pathological forms of pain, and multiple signaling cascades are known to enhance TRPV1 activity under inflammatory conditions. 3 Hyperalgesia resulting from tissue injury or inflammation is often associated with sensitization of TRPV1 channel activity, which can occur through several mechanisms such as phosphorylation, interaction with the phospholipid PIP2, trafficking, and association with accessory proteins. 4 The therapeutic potential of TRPV1 is well recognized clinically and offers promising insight into the treatment of pain. 5 However, the adverse effects associated with some TRPV1 antagonists have directed researchers toward other approaches. The GABA B receptor is a G protein-coupled receptor (GPCR) composed of the GABA B1 (GB1) and GABA B2 (GB2) subunits.
There are at least 14 subtypes of GB1 (GB1a-n), of which GB1a and GB1b are the most abundant and are mainly expressed in the central nervous system. 6 Studies have confirmed that GABA B1 receptor subunits act as inhibitors of TRPV1 sensitization in different inflammatory settings, and the effect of GABA B on TRPV1 depends on the close juxtaposition of GABA B1 receptor subunits and TRPV1. Activation of GABA B1 receptor subunits does not attenuate the normal function of the TRPV1 receptor and only reverses its sensitized state. 7 Therefore, identifying a pain-inhibitory neural structure in which TRPV1 and GABA B1 coexist is undoubtedly a meaningful scientific problem.
The cerebrospinal fluid-contacting nucleus (CSF-contacting nucleus) was first discovered and named by our research group. It is located in the ventral gray matter of the lower part of the midbrain aqueduct (Aq) and the upper part of the base of the fourth ventricle (4V). The distinguishing feature of this nucleus is that the somas are located in the brain parenchyma while their processes extend into the CSF. 8 This connection is not found in any other known nerve or nucleus in the nervous system. The basic biological properties of the CSF-contacting nucleus have been revealed, including methods that can be used to specifically label it, 9 its location and stereotactic coordinates, 8 its distribution of receptors, neurotransmitters, and ion channels, [10][11][12] and its relationship with morphine dependence and withdrawal, stress, sodium appetite, and ion channels. 10,13-15 A model animal with the CSF-contacting nucleus eliminated has been successfully established. 16 Based on the expression of TRPV1 and GABA in the CSF-contacting nucleus in both neuropathic and inflammatory pain conditions, 17,18 we hypothesized that TRPV1 and GABA B1 receptors may coexist in neurons of the CSF-contacting nucleus and play important roles in neuropathic and inflammatory pain regulation. In the present study, our data demonstrate that GABA B1 is expressed and forms a complex with TRPV1 in the CSF-contacting nucleus, and we investigated the changes in TRPV1 and GABA B1 expression in CIP.
Experimental Grouping
Adult Specific Pathogen Free (SPF)-grade male SD rats weighing 250±25 g were provided by the Experimental Animal Center of Xuzhou Medical University. All animals were adaptively housed on a 12-12 h light/dark cycle for one week before the start of the experiment and were given free access to food and water. In the first part of the experiment, rats with CIP were divided into immunofluorescence and coimmunoprecipitation (Co-IP) groups (n=6); in the second part of the experiment, SD rats were divided into a control group (normal saline) and a CIP group (complete Freund's adjuvant, CFA) (n=6). All protocols were approved by the Committee for Ethical Use of Laboratory Animals of Xuzhou Medical University (L20211001001) and were carried out according to the Guidelines for the Care and Use of Laboratory Animals.
Animal Model
To induce chronic inflammatory pain, rats were administered 100 µL CFA intra-plantar injection after being anesthetized with 2.5% sevoflurane. 19 The control group was injected with an equal amount of normal saline according to the same procedure.
Behavioral Assessment
(1) Paw withdrawal mechanical threshold (PWMT) The rats were placed in a transparent enclosure (35 cm × 30 cm × 25 cm) with a wire mesh bottom and allowed to acclimate in a quiet environment for 15 min before the experiment. We measured the paw withdrawal threshold to von Frey filaments (Stoelting, USA). Each filament (1.4, 2, 4, 6, 8, 10, 15, 26 g) was applied for 6 to 8 s to the midplantar surface of the left hind paw, avoiding the footpads, according to the up-down method described previously. 19,20 Abrupt withdrawal, licking, or shaking of the hind paw in response to a von Frey filament was considered a positive response. If a positive response occurred, the next smaller von Frey filament was used; otherwise, the filament of the next higher force was used. Since the threshold is unknown, strings of similar responses may be generated when the threshold is approached from either direction. Thus, although all responses were recorded, the critical six data points were not counted until the response threshold was first crossed. 21 We therefore started with the lower-force 4 g filament. The stimuli were always presented consecutively, either ascending or descending. The test was continued until five responses had been assessed after the first crossing of the withdrawal threshold, or until the upper/lower end of the von Frey filament set was reached before a positive/negative response had been obtained. The pattern of positive and negative responses was converted to a 50% threshold value using the formula of the up-down method of Dixon. 22 Thresholds were measured at the pre-CFA baseline and at 1, 3, 5, 7, 10, and 14 days post-CFA injection.
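As a rough illustration of Dixon's conversion, the sketch below computes a 50% threshold from the final filament force. This is a hedged stand-in, not the paper's exact implementation: the tabular value k depends on the observed +/- response pattern and must be looked up in Dixon's published table (it is passed in rather than computed), and the log-unit filament spacing delta is an assumed value.

```python
import math

def fifty_percent_threshold(final_filament_g, k, delta=0.22):
    """Dixon-style 50% withdrawal threshold in grams.

    final_filament_g: force (g) of the last von Frey filament applied.
    k: Dixon's tabular value for the observed response sequence (assumed
       to be supplied from the table, not computed here).
    delta: mean spacing of the filament set in log10 units (assumed).
    """
    x_f = math.log10(final_filament_g)   # log of the final filament force
    return 10 ** (x_f + k * delta)
```

With k = 0 the estimate is simply the final filament force; positive k shifts the threshold upward and negative k downward, matching the direction of the final responses.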
(2) Paw withdrawal thermal latency (PWTL) A 15 cm × 15 cm × 15 cm transparent Plexiglas box was placed on a 3 mm-thick glass plate, and the rats were acclimated to the box for at least 30 min. The experiment was performed after the rats had calmed down. A light beam was applied to the middle plantar surface of the left hind paw three times at 3 to 5 min intervals with a thermal pain stimulator (BME2410A, Institute of Biological Engineering, Chinese Academy of Medical Sciences) according to the Hargreaves method. 23,24 The duration from the start of irradiation to a leg-lifting or avoidance response was measured as the PWTL, and irradiation was stopped when the rat raised its hind paw. A cutoff time of 20 s was set to prevent tissue damage caused by excessive irradiation. The intensity of the thermal stimulus was kept the same throughout the experiment. The average paw withdrawal latency of the three trials was used for data analysis.
Injection of Alexa Fluor 594-Conjugated CB into the Lateral Ventricle
Two days before plantar injection of CFA, the rats were anesthetized with sodium pentobarbital (40 mg/kg, i.p.), and their heads were fixed on a stereotaxic apparatus. CB was dissolved in PBS (0.01 M, pH 7.4) and microinjected into the lateral ventricle (400 ng/2 µL) at the following stereotaxic coordinates determined from the Paxinos and Watson brain atlas: 1.4 ±0.2 mm to the right of the midline, 1.2±0.4 mm posterior from bregma, and 3.2 ±0.4 mm deep. Successful targeting of the lateral ventricle was confirmed by aspiration of CSF. The rats were allowed to recover for two days before subsequent experimental manipulations.
Immunofluorescent Staining
The rats were anesthetized with pentobarbital sodium (40 mg/kg, i.p.). The lower edge of the xiphoid process was cut, the thoracic cavity was opened to expose the heart, the apex of the heart was gently clamped with vascular forceps, and a puncture needle was inserted into the ascending aorta from the left ventricle at the apex of the heart. Vascular forceps were used to fix the puncture needle, 200~300 mL of normal saline was quickly perfused, and the right atrial appendage was simultaneously cut so that the blood was quickly flushed out and the effluent became clear. Then, the saline was replaced with 200~300 mL of chilled 4% paraformaldehyde. After perfusion, brain tissue was collected, placed in 4% paraformaldehyde for more than 6 h in a refrigerator at 4°C, and then transferred to 30% sucrose solution until it sank to the bottom of the tube. The brain region containing the CSF-contacting nucleus was isolated and sectioned coronally on a cryostat (Leica CM1900, Germany) at 40 μm. The sections were collected in PBS, rinsed 3 times for 5 min each, and permeabilized with 0.1% Triton X-100 in PBS for 15 min. The brain slices were then transferred to a 24-well incubation tank containing 10% donkey serum and blocked for 2 h at room temperature. After blocking, they were incubated with rabbit anti-TRPV1 (1:800, Novus) and mouse anti-GABA B1 (1:800, Abcam) antibodies diluted in PBST (0.3% Tween-20 in PBS) overnight at 4°C on a shaker. After washing with PBST, the brain slices were incubated with Alexa Fluor 488-labeled donkey anti-rabbit (1:500) and Alexa Fluor 405-labeled donkey anti-mouse (1:500, Abcam) secondary antibodies in the dark at room temperature for 2 h. After washing three times as described above, the sections were mounted in sequence on slides and coverslipped. Immunofluorescence images were acquired on an Olympus FV1000 laser confocal microscope and processed for quantitative analysis using ImageJ software.
Co-Immunoprecipitation
Three days after plantar injection of CFA, brain tissues containing the CSF-contacting nucleus were rapidly isolated, and 10 times the tissue weight of RIPA lysis buffer (strong) (GK10023, Glpbio) and 1/100 of the lysate volume of the protease inhibitor phenylmethylsulfonyl fluoride (PMSF) (GK10023, Glpbio) were added. The supernatant was collected after ultrasonic homogenization and centrifugation (4°C, 12,000 rpm, 15 min), and the total protein concentration of the sample was determined using a BCA kit (Beyotime, China). The samples were divided into three parts: (1) one part, which was mixed with 1X SDS-PAGE loading buffer and boiled for 10 min, was used as a positive control (input) for Western blot analysis; (2) another part was incubated with negative control IgG (ABclonal, Wuhan, China); and (3) the last part was incubated with rabbit anti-TRPV1 or mouse anti-GABA B1 primary antibody overnight at 4°C with shaking. Prepared rProtein A/G Plus MagPoly beads (ABclonal, Wuhan, China) were added to 1 mL of 3% BSA and incubated at 4°C with shaking for 1 h to eliminate nonspecific binding. The antibody-antigen binding complexes were mixed with the prepared magnetic beads and incubated at 4°C overnight. The magnetic bead-antibody-antigen complexes were rinsed three times, and the supernatant was discarded. Then, 30 µL of 2X SDS-PAGE loading buffer was added, and the samples were mixed well and heated at 95°C for 15 min for denaturing elution. The immunoprecipitated protein complexes were separated from the supernatant by SDS-PAGE, and Western blot analysis with antibodies against TRPV1 and GABA B1 was performed.
Western Blot Analysis
SD rats were divided into the control and CFA groups, and three days after plantar injection, fresh brain tissues containing the CSF-contacting nucleus were collected. Ten times the tissue weight of RIPA lysis buffer (Beyotime, China) and 1/100 of the lysate volume of PMSF (Beyotime, China) were added. After the tissues were homogenized and centrifuged at 4°C for 15 min at 12,000 rpm, the supernatant was collected, and the total protein concentration of each sample was determined with a BCA protein assay kit (Beyotime, China). The proteins were then separated by 8% SDS-PAGE. After electrophoresis, the proteins were transferred to PVDF membranes, which were blocked with 5% skim milk powder for 2 h and incubated with rabbit anti-TRPV1 (1:2000, Novus Biologicals), mouse anti-GABA B1 (1:500, Abcam) and mouse anti-GAPDH (1:2000, Proteintech) antibodies at 4°C overnight. After half an hour at room temperature, the PVDF membranes were rinsed three times with TBST (TBS containing 1% Tween-20) and incubated with the corresponding HRP-conjugated secondary antibodies (1:2000, Beyotime, China) for 2 h. Then, the membranes were rinsed six times with TBST. The protein bands were detected with a chemiluminescent reagent (Beyotime, China), and the relative grayscale values were analyzed using ImageJ software.
Statistical Analysis
GraphPad 8.0 software was used to statistically analyze the data. Statistical comparisons between two groups were conducted with a two-tailed, unpaired Student's t-test. Repeated-measures data were compared using two-way ANOVA followed by Bonferroni's multiple comparison post hoc tests. P<0.05 was considered statistically significant. Normally distributed data are expressed as the mean ± SEM. MATLAB software was used to quantify the mechanical paw-withdrawal threshold from the records of the up-down method test. ImageJ software was used to quantify the density of protein bands.
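As an illustration of the two-group comparison described above, the sketch below runs a two-tailed unpaired t-test with scipy on synthetic stand-in data (n = 6 per group, as in the study); the numeric values are invented, not the paper's measurements.

```python
import numpy as np
from scipy import stats

# Synthetic stand-in data: e.g. PWTL in seconds for 6 rats per group.
rng = np.random.default_rng(1)
control = rng.normal(loc=10.0, scale=1.0, size=6)
cfa = rng.normal(loc=6.0, scale=1.0, size=6)

# Two-tailed, unpaired Student's t-test (scipy default is two-sided).
t_stat, p_value = stats.ttest_ind(control, cfa)
significant = p_value < 0.05
```

With a separation of four standard deviations between group means, the test comes out clearly significant, mirroring the P<0.01 differences reported for the CFA group.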
Establishment of a CFA-Induced CIP Model
Rats in the CFA group, i.e., CIP model rats, showed redness and swelling of the plantar surface of the left hind paw after CFA injection, and lifting and licking of the hind paw were observed. No such symptoms were observed in the control group before or after injection, and no significant changes in the PWMT and PWTL were observed before or after injection (P>0.05). The CFA group showed a lower pain threshold from the second day, and the PWMT and PWTL of rats in the CFA group were lower than the respective baseline values (P<0.01). The decrease in the PWMT and PWTL lasted for approximately two weeks (Figure 1).
Immunofluorescence Analysis of TRPV1 and GABA B1 Coexpression in the CSF-Contacting Nucleus
Laser confocal microscopy was used to visualize the CSF-contacting nucleus. The CSF-contacting nucleus was indicated by red fluorescence, and the localization of the red fluorescence was consistent with the expected position of the CSF-contacting nucleus. TRPV1-positive cells were labeled with green fluorescence, and GABA B1 -positive cells were labeled with blue fluorescence. Immunofluorescence staining revealed that TRPV1 and GABA B1 were co-expressed in the CSF-contacting nucleus. Quantitative analysis showed that the co-labeling of TRPV1 and GABA B1 receptors with the CSF-contacting nucleus was increased compared with the control group (Figure 2).
Co-Immunoprecipitation
TRPV1 and GABA B1 receptors in the CSF-contacting nucleus form a complex. Co-IP was used to investigate the possible interaction between TRPV1 and GABA B1 at the protein level. Bidirectional Co-IP was performed, and TRPV1 and GABA B1 were detected in the CSF-contacting nucleus lysates, which were used as positive controls ("input" samples), but not in negative control samples incubated with normal IgG ("IgG" samples), indicating the specificity of the antibodies. The results indicate that TRPV1 and GABA B1 bind to each other in neurons of the CSF-contacting nucleus (Figure 3).
Changes in the Expression of TRPV1 and GABA B1 Receptors in the CSF-Contacting Nucleus in Chronic Inflammatory Pain Rats
To explore the changes in TRPV1 and GABA B1 expression in the CSF-contacting nucleus in CIP model rats, we first constructed a rat model of chronic inflammatory pain by injecting CFA into the plantar surface and then performed Western blot analysis. The results revealed that the expression of TRPV1 and GABA B1 in the CSF-contacting nucleus was significantly upregulated in the CIP model group compared with the control group (P<0.01) (Figure 4).
Discussion
In 2000, the World Health Organization stated that chronic pain is a disease that harms health. The causes of chronic pain, especially neuropathic pain, can be very complex, but it is generally accepted that a variety of factors lead first to inflammation and finally to chronic pain and even neuropathic pain. Therefore, preventing chronic inflammatory pain from further transforming into neuropathic pain is one of the main strategies in this field of research and treatment. Although finding specific pain-causing factors and reducing pain by intervening in these factors has become the main goal of basic research and the basic measure of clinical treatment, both basic research and clinical practice show that the analgesic effect of simply intervening at a specific target is not ideal, as parallel signaling cascades can still drive pathologic pain and thus promote pain sensitization. 1 TRPV1 is a well-established ion channel activated by pain and heat. Gaurav G-N et al recently identified natural products and some semi-synthetic analogs as potential TRPV1 ligands for attenuating neuropathic pain. 25 Akhilesh et al reviewed the potential of TRPV1-based siRNA therapeutics for the treatment of chemotherapy-induced neuropathic pain. 5 Pathogenic sensitization of TRPV1 is characterized by a decrease in the activation threshold and an increase in responsiveness. 26 Various mechanisms, such as alterations in channel dynamics, changes in TRPV1 levels in the neuronal plasma membrane, and changes in the levels of proteins that bind and regulate TRPV1, may play a role in this process. [27][28][29][30][31] Therefore, targeting substances that interact with TRPV1, especially those that specifically regulate pathological states such as inflammatory pain, may be an interesting alternative to blocking TRPV1.
32,33 The American scientists David Julius and Ardem Patapoutian won the 2021 Nobel Prize in Physiology or Medicine for their discoveries of receptors for temperature and touch, including the identification of TRPV1 as the receptor for capsaicin.
GABA is a major transmitter in the central nervous system. Its receptors include three types: GABA A and GABA C receptors form ligand-gated chloride channels, while GABA B receptors belong to the G protein-coupled receptor family and are coupled to K + and Ca 2+ channels. Stephen G. Brickley and Istvan Mody (2012) reviewed the function of GABA A receptors in the CNS and their implications for disease. GABA A Rs are known to be key targets of anesthetics, sleep-inducing drugs, neurosteroids, and alcohol. The network dynamics associated with epilepsy and
Parkinson's disease are likely to involve tonic GABA A R-mediated changes in conductance. Therefore, GABA A Rs may be a therapeutic target for the treatment of these disorders, with the potential to enhance cognition and aid functional recovery after stroke. 34 GABA B R is a G protein-coupled receptor. Recently, Marzia M (2018) and Dietmar B (2022) systematically reviewed the biological and pharmacological research progress on GABA B receptors in pain. This receptor has been considered a valuable target for the treatment of chronic pain due to extensive evidence that it is involved in the regulation of pain signaling. 35,36 The CSF-contacting nucleus is distinct from other brain nuclei; our previous studies have shown that neurons in this nucleus contain not only TRPV1 but also GABA and are involved in the regulation of various types of pain. 12,37 In this study, we demonstrated for the first time that TRPV1 and GABA B1 receptors not only coexist but also form a complex in the CSF-contacting nucleus. This result provides new insight for reducing inflammatory pain by targeting GABA B1 or TRPV1 in the CSF-contacting nucleus through the cerebrospinal fluid pathway. Of course, further research is needed.
Conclusions
This study demonstrated that TRPV1 receptors and GABA B1 receptors are both expressed in the CSF-contacting nucleus and form complexes, which may be involved in the development of chronic inflammatory pain. This will provide a new idea for the treatment of chronic inflammatory pain.
Ethics Approval
Ethics approval was obtained from the Experimental Animal Ethics Committee of Xuzhou Medical University (L20211001001).
"year": 2022,
"sha1": "35dff3c4bf9cd3f7220a4c9c7ba370dc6002adb6",
"oa_license": "CCBYNC",
"oa_url": "https://www.dovepress.com/getfile.php?fileID=86145",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "26a1b0e1b2d45edc40f74140fbb6a515290781c9",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Control and Dynamic Simulation of Linear Switched Reluctance Generators for Direct Drive Conversion Systems
This chapter addresses the dynamic simulation and control of linear switched reluctance generators for direct drive conversion systems. The electromechanical energy conversion principles of linear switched reluctance machines are briefly explained. A detailed mathematical model is developed for linear switched reluctance generators. The different types of control strategies adopted for switched reluctance generators are reviewed. The hysteresis controller is applied to control the conversion system with constant damping load. The proposed control strategy also includes a DC/DC isolated converter to control the system DC bus voltage by adjusting the energy flow between the conversion system and the resistive load. The mathematical model is applied to simulate the behavior of a tubular linear switched reluctance generator as the power take-off system in an ocean wave point absorber device. To accomplish this task, the dynamic equations of the point absorber are presented and integrated with the linear switched reluctance generator dynamic model. In the simulation process, the system is driven by a regular ocean wave and operates with constant damping load. The system performance is evaluated for different load values, and the simulation results are presented for the optimal damping load case scenario.
Introduction
The switched reluctance machine (SRM) is an electromechanical structure with salient poles, defined by a fixed and a movable part as its main constitutive elements. This machine is characterized by the absence of permanent magnets and by the presence of electric phase coils in only one part (usually the fixed part). The part where the phase coils are located is designated as the primary, and the other part, which works as a passive element, as the secondary. SRMs can be rotating or linear. Since these two configurations are homologous, they share the same operation principles, with the exception of the electromagnetic force direction [1]. In SRMs, a force is developed when the magnetic structure tends to minimize the magnetic energy by displacing the movable part to achieve a configuration with minimum magnetic reluctance.
Electromagnetic conversion principles
In SRMs, the magnetic circuit of each electric phase is characterized by different values of magnetic reluctance for distinct relative positions of the movable part with respect to the static part. When a magnetic field is established in this circuit, an electromagnetic force is developed to displace the movable part to the position of minimum magnetic reluctance, in order to minimize the energy in the system. The referred position, designated as the alignment position z_a, corresponds to the structural configuration where the saliencies of the movable part are aligned with the saliencies of the static part. In this situation, the self-inductance of the electric phase achieves its maximum value. Nonalignment is verified for all the remaining relative positions between the movable and static parts. For these configurations, the magnetic circuit of the electric phase is characterized by a higher magnetic reluctance, which achieves its maximum value at the unaligned position z_u. This position is defined by the minimum value of inductance for the electric phase.
The typical linear inductance profile for SRMs is displayed in Figure 1. The inductance is given as a function of the relative position between the movable and fixed parts. As a consequence of the structural pattern inherent to these machines, when the movable part is moved in a single direction, the inductance is characterized by a periodic and symmetrical profile resulting from successive movements approaching (the rate of change of the inductance L is positive) and leaving (the rate of change of the inductance L is negative) the alignment position.
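This periodic profile can be illustrated with a simple idealized model (piecewise linear, magnetically unsaturated). The values of L_min, L_max, and the pole pitch below are arbitrary placeholders for illustration, not values from this chapter:

```python
import numpy as np

def inductance(z, L_min=5e-3, L_max=25e-3, S_t=0.02):
    """Idealized triangular inductance profile (H) versus position (m).

    S_t is the distance between the aligned (z = 0) and unaligned
    (z = S_t) positions; the profile repeats with period 2*S_t.
    """
    # Distance to the nearest alignment position, folded into [0, S_t]
    z_fold = np.abs(((z + S_t) % (2 * S_t)) - S_t)
    return L_max - (L_max - L_min) * z_fold / S_t
```

The rate of change dL/dz alternates sign every half period, which is what selects motoring versus generating operation in the text above.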
By energizing the electric phases in the approaching region or in the separating region, the SRM may operate as an actuator or as a generator, respectively. In the first case, the magnetic flux established by the electric phase will tend to minimize its energy, developing an electromechanical force to achieve structural alignment. As a result, a linear force will act on the machine's movable part. In generator operation, in the presence of an external load that can overcome the referred electromagnetic force, the generator is displaced from the alignment position, increasing the reluctance of the active phase, which reduces the respective magnetic flux. As a consequence, a back-electromotive force is developed, seeking to increase the electric current to restore the magnetic flux. During this process, mechanical energy is extracted from the movable part of the generator and converted to electrical energy [7].
The electric phase equation is given as:

u_k = R_a i_k + dλ_k/dt = R_a i_k + (∂λ_k/∂i_k)(di_k/dt) + emf_k (1)

where, for phase k, u_k is the voltage across the electric phase, R_a is the internal electric resistance, λ_k is the flux linkage, i_k is the electric current, and z_k is the electric position. In the last part of Eq. (1), emf_k is the back-electromotive force developed by the phase:

emf_k = (∂λ_k/∂z_k) v (2)

with v as the velocity of the movable part of the linear SRM. These conversion processes may only occur with an appropriate power electronic converter to control the energy flow. With proper switch commands, the conversion periods may be established, and the electric phase current intensity may be regulated by the converter [8].
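As an illustration only (not the chapter's model), the phase equation can be rearranged for the current derivative and integrated with a forward-Euler step, under the simplifying assumption of a magnetically linear phase where the flux linkage is L(z)·i:

```python
def current_step(i, u, z, v, dt, R_a, L, dLdz):
    """One forward-Euler step of the phase current.

    Assumes a linear magnetic circuit: lambda = L(z) * i, so
    d(lambda)/di = L(z) and the back-emf is i * dL/dz * v.
    L and dLdz are callables returning inductance (H) and its
    position derivative (H/m); all names are illustrative.
    """
    emf = i * dLdz(z) * v
    didt = (u - R_a * i - emf) / L(z)
    return i + dt * didt
```

In a real simulation the saturated characteristics λ(z, i) from finite element analysis would replace the linear assumption, as discussed later in the chapter.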
In [9,10], several electronic converters suitable for electric generation with SRMs are identified. The asymmetric H-bridge converter is the typical choice. This converter topology is shown, for one phase, in Figure 2. It is a practical and simple solution, but it is characterized by a variable output voltage due to the self-excitation process. However, this drawback may be minimized with an external voltage source [11]. This converter requires the least apparent power to operate and is classified as an economical solution [10].
With the converter illustrated in Figure 2, the conversion cycle may be defined by the excitation period, the generation period, and the free-wheeling period, an intermediate stage between the first two. The circuit configurations for the different periods are illustrated in Figure 3. Assuming that the capacitor is already at its nominal operating voltage, the excitation period is initiated when the switches S 1 and S 2 are closed. Usually, this occurs when the phase is near the position that corresponds to its maximum inductance.
When the current intensity reaches a certain value, the switches are opened and the generation period begins. During this period, the back-electromotive force increases the electric current as a consequence of the magnetic flux reduction. Thus, the electric current is maintained through the diodes D_1,k and D_2,k, delivering the generated electric energy to the capacitor and to the load. The transition between these two conversion periods is characterized by hard commutation, because the voltage is inverted. In the free-wheeling period, just one switch is closed to provide a zero voltage across the generator electric phase. Only the back-electromotive force acts on the phase electric current. This period can be implemented to achieve a soft commutation where, after the excitation period, the voltage is first annulled and only then inverted [12]. In each conversion cycle, the electric energy supplied by the converter during the excitation period is given by:

W_exc = ∫ from t_on to t_off of (u_k i_k) dt (3)

where t_on is the start time of the excitation period and t_off is the respective finish time. The electric energy returned by the phase to the converter is:

W_gen = ∫ from t_off to t_ext of (u_k i_k) dt (4)

with t_ext as the time when the electric current is extinguished. The amount of electric energy generated by the electromechanical conversion process is computed as follows:

W_conv = W_gen − W_exc (5)

Figure 3. Circuit configuration and electric current path for (a) excitation period, (b) generation period, and (c) free-wheeling period (adapted from [12]).
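The energy bookkeeping for one conversion cycle can be sketched with trapezoidal integration of sampled voltage and current. The sample data and sign handling below are illustrative assumptions (the phase voltage inverts during the generation period, so the product u·i becomes negative there):

```python
import numpy as np

def _trapz(y, x):
    """Trapezoidal integral of samples y over abscissa x."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * (x[1:] - x[:-1])))

def conversion_energy(t, u, i, t_off):
    """Net electric energy generated in one conversion cycle.

    t, u, i: arrays of time (s), phase voltage (V) and current (A)
    over one cycle; t_off separates the excitation period from the
    generation period. During generation u*i is negative, so its
    integral is negated to obtain the energy returned.
    """
    exc = t <= t_off
    W_exc = _trapz(u[exc] * i[exc], t[exc])       # energy supplied
    W_gen = -_trapz(u[~exc] * i[~exc], t[~exc])   # energy returned
    return W_gen - W_exc                          # net generated energy
```

A positive return value means the cycle delivered more energy to the converter than it drew during excitation.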
Dynamic model
To assess the dynamic characteristics of the switched reluctance generator, a mathematical model must be formulated to describe the system behavior in the transient state. The mathematical model can be solved with numerical methods and computational calculus in order to estimate the system response for the given operating conditions. The mathematical model of SRMs is obtained from Eq. (1), which describes the transient phenomena involved in the electromagnetic conversion of each electric phase. The solution of this equation may be obtained by applying time integration to the magnetic flux linkage [13] or to the time derivative of the electric current [14]. For both methods, it is required to relate the machine electromagnetic characteristics with the phase electric current and the movable part position. Several approaches have been proposed in the literature to include the machine's nonlinear nature in the mathematical model. In [15], look-up tables are used to model the electromagnetic characteristics of the SRM, with the data obtained from a static analysis. Analytic functions are also used to represent the machine magnetic characteristics within the mathematical model. The use of analytical expressions simplifies the computation process related to the differentiation and integration of the electromagnetic entities [16]. This approach can be achieved by fitting an appropriate function to discrete data [17]. Fourier series expansion may also provide analytical expressions to represent nonlinear electromagnetic characteristics in the SRM mathematical model [18]. A simpler, but less precise, method comprises the use of piecewise functions to establish a linear [19] or nonlinear [20] relation with the independent variables. Artificial intelligence-based methods have also been adopted in the dynamic analysis of SRMs.
These methods comprise the use of neural networks and fuzzy logic with real data to develop a mathematical representation of SRM electromagnetic characteristics [21].
All the approaches referred to above require representative data of the electromagnetic characteristics, usually expressed as a function of the electric phase current intensity and the movable part relative position. With look-up tables, these data are used directly. Methods based on analytic expressions or artificial intelligence models use these data to derive appropriate mathematical expressions.
The required discrete data may be obtained through experimental measurements. This process provides realistic curves, but a physical model of the machine is needed for it. In electric machine design, where several structural possibilities must be assessed, it becomes impractical to obtain the machine electromagnetic characteristics by experimental evaluation, so numerical or analytical methods are needed to perform this task. As an alternative to experimental tests, the finite element method (FEM) is used for the electromagnetic characterization of SRMs, providing results with great precision, but requiring a sophisticated mathematical implementation and a large amount of computational resources to process the solution [22]. However, with the existence of FEM-based commercial software, it is possible to evaluate electromagnetic systems without a deep knowledge of electromagnetism. Thus, FEM is increasingly adopted in the design of electric machines.
The mathematical model of the switched reluctance generator results from the analysis of the associated power converter. Since all electric phases are defined by identical magnetic and electric circuits, they are represented by the same dynamic equations, and the analysis is generalized following the notation specified in Figure 2. The voltage across each electric phase k is:

u_k = R_a,k i_k + dλ_k/dt (6)

where, for each phase k, R_a,k is the internal electric resistance, i_k is the electric current, λ_k is the magnetic flux linkage, and z_k is the electric position. The magnetic flux linkage is given by:

λ_k = Σ_{j=1}^{q} λ_kj(z_j, i_j) (7)

with λ_kj(z_j, i_j) as the magnetic flux linkage in phase k due to the current in phase j, and q as the number of electric phases. Replacing Eq. (7) in Eq. (6):

u_k = R_a,k i_k + d/dt [ Σ_{j=1}^{q} λ_kj(z_j, i_j) ] (8)

Applying the chain rule to Eq. (8), one has:

u_k = R_a,k i_k + L_k (di_k/dt) + Σ_{j≠k} M_kj (di_j/dt) + Σ_{j=1}^{q} (∂λ_kj/∂z_j) v (9)

where L_k = ∂λ_kk/∂i_k is the self-inductance of phase k and M_kj = ∂λ_kj/∂i_j is the mutual inductance between phases k and j. The linear velocity of the movable part v is:

v = dz_k/dt (10)

The electromotive force, which results from the change of magnetic flux in phase k, is:

emf_k = Σ_{j=1}^{q} (∂λ_kj/∂z_j) v (11)

During the conversion period, electric energy is exchanged between the capacitor of the converter and the electric phases of the generator. In the excitation period, the capacitor supplies energy to the phase and, in the generation period, receives energy from it. The voltage across the capacitor, U_c, is related with its capacitance C and input current i_c by:

i_c = C (dU_c/dt) (12)

The total net value of the electric current flowing between the electric phases and the capacitor, i_T, is given by:

i_T = Σ_{k=1}^{q} i_k (13)

The bus voltage U_bus of the electronic power converter is imposed by the capacitor voltage:

U_bus = U_c (14)

In order to fully define the model of the switched reluctance generator system, the voltage across each electric phase must be known. Its value depends on the different circuit configurations for the electric energy flow in the phase.
According to the possible combinations of switch states, there are three different configurations, as illustrated in Figure 3, each of them corresponding to a distinct period of electromagnetic conversion.
The voltage across each electric phase, according to the switch states, is:

u_k = U_bus − 2U_s (S_1,k and S_2,k closed: excitation period)
u_k = −(U_s + U_D) (one switch closed: free-wheeling period)
u_k = −(U_bus + 2U_D) (both switches open: generation period) (15)

with U_s and U_D as the voltage drops across the electronic switch and diode, respectively.
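The switch-state cases above can be written as a small selector function. The sign conventions and the default drop values are assumptions for illustration and should be checked against the converter circuit in Figure 2:

```python
def phase_voltage(s1, s2, U_bus, U_s=1.5, U_D=0.7):
    """Phase voltage for the asymmetric H-bridge, per switch states.

    s1, s2: booleans for switches S1,k and S2,k.
    U_s, U_D: switch and diode voltage drops (placeholder values).
    """
    if s1 and s2:              # excitation period
        return U_bus - 2 * U_s
    if s1 or s2:               # free-wheeling period
        return -(U_s + U_D)
    return -(U_bus + 2 * U_D)  # generation period (diodes conduct)
```

Such a selector is all the converter model needs per time step once the controller has produced the switch commands.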
The linear force exerted by the generator, F_gen, is:

F_gen = Σ_{k=1}^{q} F_em,k(z_k, i_k) (16)

where F_em,k(z_k, i_k) is the individual component of force provided by electric phase k during the conversion cycle. The generator electric efficiency can be determined from the mean values of the generated electric power P_gen and the extracted mechanical power P_mec as:

η = P_gen / P_mec (17)

All the formulation presented so far was defined for a phase electric position falling within the two opposing nonalignment positions. As the movable part of the generator is displaced, the electric position describes a periodic profile which is identical in all electric phases of the SRM. However, these electric positions have distinct values for the same absolute position, because the phases are shifted. The relation between the electric position of one phase, z_k, and the mechanical absolute position, z_mech, is defined by Eq. (18), where k_offset is the shift distance between the same electric position of two consecutive phases and S_t is the distance between the aligned and unaligned positions of the phase.
The graphical representation of Eq. (18) can be found in Figure 4.
The mathematical model of the LSRG system is defined by a nonlinear initial value problem and is schematized in Figure 5. To obtain the respective solution, it is necessary to perform the time integration of the differential equations that govern the system's electromechanical behavior. Also, a controller must be included to provide the appropriate switching commands to the electronic power converter.
The bus voltage of the LSRG converter should be kept as constant as possible to allow for a proper self-excitation operation. Also, since high voltage levels are required to energize each phase in due time, it may be desirable to reduce the output voltage applied to the electric load. To achieve these requirements, an H-bridge isolated DC/DC converter is included in the conversion system to control the energy flow between the conversion system and the electric load, as schematized in Figure 6. With the ideal model of the H-bridge isolated DC/DC converter, it is possible to establish a relation between the mean values of the input and output voltages (U_1 and U_2) and of the input and output currents (I_1 and I_2). The duty cycle D is the control parameter used to regulate the output voltage at constant chopping frequency. The relations between these electric quantities are defined in [23] as:

U_2 = 2D (N_1/N_2) U_1 (19)

I_1 = 2D (N_1/N_2) I_2 (20)

where, respectively, U_1 and U_2 are the mean values of the input and output voltages, I_1 and I_2 are the mean values of the input and output electric currents, and N_1 and N_2 are the numbers of turns of the transformer primary and secondary coils.
Control
The SRM operation relies on the switching electric positions. Fixed values for these positions may cause system instability, especially when operating at variable velocity. As a consequence, the converter bus voltage may change considerably, depending on the system electric load [24]. Therefore, a closed-loop control is required. As stated in [25], when operating as a generator, the control of SRMs must be applied to preserve the converter output voltage, conditioned by the current flow in the electric phases and the electric load.
The amount of energy extracted for conversion depends on the electromechanical force exerted by the generator, which is in turn affected by the electric phase current. Thus, with a proper switching strategy, it is possible to control the electric current intensity to attain a desirable voltage level as well as to improve the system conversion efficiency. With the additional DC/DC converter, the bus voltage may also be kept near a nominal value by modifying the energy flow between the LSRG converter and the load. This method is proposed in [24] to achieve maximum energy conversion by controlling the voltage level according to the velocity of the generator.
At low operation velocities, the back-electromotive force is lower than the bus voltage and the current is forced to decrease gradually. In this situation, the phase must be submitted to successive commutations to adjust the applied voltage and achieve the desired current intensity [7]. This process can be accomplished with hysteresis band control [26]. This control is characterized by a variable switching frequency, because it is conditioned by the rate of change of the electric phase current [27].
The control of the switched reluctance generator consists of selecting appropriate values for the parameters that govern its behavior. For velocities above the base velocity, it is only necessary to set the electric positions that define the conversion cycle. For lower velocities, it is also required to define a reference value for the phase current. With the mathematical model, it is possible to establish an optimal relation between the values of the control parameters and the physical entities that need to be controlled, and to include it in the control process through look-up tables or fitted analytical expressions. Thus, the appropriate control parameters can be defined as a function of the operation variables [28].
The proportional integral (PI) control may be applied to control the switched reluctance generator. It has been used in real-time optimal control of rotating generators, where the reference current and commutation angles are computed from the error of the output voltage [29]. PI control was adopted in [30] to adjust the phase current intensity and minimize the converter output voltage ripple in linear generators.
Hysteresis control
With proper switching commands, the current is maintained within the hysteresis band h b , which is a range of values established around the reference current i ref .
The typical current profile obtained with this control is shown in Figure 7(a). When the current falls below the inferior limit of the hysteresis band, i_ref − h_b/2, the phase is energized to increase the current. If the current exceeds the upper limit of the hysteresis band, i_ref + h_b/2, the switches are opened and the generation period begins, until the lower limit is reached again [31]. The phase current error e_i is used to control the switches in agreement with the logic schematized in Figure 7(b).
The conversion cycle is characterized by successive excitation and generation periods that occur while the phase electric position is between z_on and z_off and the movable part is in motion. For zero velocity, there is no electric generation and the switches remain open. Thus, for each phase k, the hysteresis controller provides the electronic switching commands S_1,k and S_2,k as a function of the phase electric position z_k, movable part velocity v, current reference value i_ref, hysteresis band h_b, and electric positions z_on and z_off. The algorithm of the hysteresis controller is illustrated in Figure 8. For an optimal control, the parameters must be chosen to maximize the generated electric energy. With the mathematical model of the conversion system, these parameters may undergo an optimization process to find the values that yield the best performance.
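A minimal sketch of this hysteresis logic is given below. The function names and the handling of the in-band state (holding the previous switch command) are illustrative assumptions, not the chapter's exact algorithm:

```python
def hysteresis_commands(z, v, i, i_ref, h_b, z_on, z_off, prev_closed):
    """Switching command for one phase (S1,k = S2,k assumed).

    Returns True (both switches closed: excitation) or False
    (both open: generation). Outside [z_on, z_off], or at zero
    velocity, the switches stay open; inside the hysteresis band
    the previous state is held.
    """
    if v == 0.0 or not (z_on <= z <= z_off):
        return False
    if i < i_ref - h_b / 2:      # below lower limit: energize
        return True
    if i > i_ref + h_b / 2:      # above upper limit: generate
        return False
    return prev_closed           # inside the band: hold state
```

Holding the previous state inside the band is what produces the sawtooth current profile of Figure 7(a) and the variable switching frequency noted earlier.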
Proportional integral (PI) control
The H-bridge isolated DC/DC converter is used to maintain the system DC bus voltage level close to a reference value. In order to accomplish this, a PI control is used to compute the duty cycle value for the applied switching commands. The control variable s is given as a function of the error e between the variable to be controlled and the respective reference value:

s(t) = K_p e(t) + K_i ∫ from 0 to t of e(τ) dτ (21)

with K_p and K_i as the controller proportional and integral gains, respectively. The duty cycle value is given as:

D(t) = D_init + ΔD(t) (22)

where D_init is the initial value of the duty cycle and ΔD(t) is the incremental duty cycle provided by the PI controller:

ΔD(t) = K_p e_u(t) + K_i ∫ from 0 to t of e_u(τ) dτ (23)

with e_u as the normalized error between the DC bus voltage U_bus and the reference value U_ref:

e_u(t) = (U_ref − U_bus(t)) / U_ref (24)
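A discrete-time sketch of this duty-cycle PI controller follows. The clipping to [0, 1] and the simple anti-windup are additions of this sketch, not taken from the chapter:

```python
class DutyCyclePI:
    """Discrete-time PI controller for the DC/DC duty cycle.

    The duty cycle is clipped to [0, 1]; when saturated, the
    last integral contribution is discarded (crude anti-windup).
    """
    def __init__(self, K_p, K_i, D_init=0.5):
        self.K_p, self.K_i = K_p, K_i
        self.D_init = D_init
        self.integral = 0.0

    def update(self, U_bus, U_ref, dt):
        e_u = (U_ref - U_bus) / U_ref          # normalized bus-voltage error
        self.integral += e_u * dt
        dD = self.K_p * e_u + self.K_i * self.integral
        D = min(1.0, max(0.0, self.D_init + dD))
        if D in (0.0, 1.0):                    # saturated: undo integration
            self.integral -= e_u * dt
        return D
```

With the bus voltage at its reference the controller simply returns D_init; a sag below the reference raises the duty cycle.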
Practical case study scenario
The described formulation will be applied to evaluate the performance of a tubular linear SRM working as generator in an ocean wave point absorber device schematized in Figure 9.
The system comprises a floating body that drives the generator by action of the incoming ocean waves. The linear generator is a three-phase machine with coils located in the inner part. The outer part is rigidly coupled to a floating body that only allows a vertical motion. Mechanical springs are used to link the generator outer part to the reference system, which is fixed to the ocean bottom. The tubular LSRG is illustrated in Figure 10.
Point absorber mathematical model
The mathematical model of an ocean wave direct drive converter is also needed to fully define the conversion system and to compute the mechanical entities that are used as input in the generator dynamic model.
The dynamic behavior of the point absorber device can be found in [32] and is described by the expression:

m_b z̈ = F_exc + F_rad + F_H + F_v + F_gen (25)

where m_b is the combined mass of the floating body and generator movable part, z̈ is the bodies' acceleration, F_exc is the wave excitation force, F_rad is the radiation force, F_H is the hydrostatic force, F_v is the viscous damping force, and F_gen is the generator damping force.
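Before detailing each force term, the equation of motion can be sketched as a time-stepping loop with all the force terms bundled into one callable. This is a crude semi-implicit Euler illustration, not the chapter's Simulink implementation, and it assumes the added-mass contribution of the radiation force has been folded into the effective mass:

```python
def absorber_step(z, zdot, t, dt, m_b, forces):
    """One semi-implicit Euler step of the point absorber motion.

    forces(t, z, zdot) must return the sum of the excitation,
    radiation, hydrostatic, viscous and generator forces (N).
    """
    zddot = forces(t, z, zdot) / m_b
    zdot_new = zdot + dt * zddot
    z_new = z + dt * zdot_new
    return z_new, zdot_new
```

In practice a higher-order scheme (the chapter uses Dormand-Prince) and a proper treatment of the radiation convolution would replace this sketch.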
The excitation force is given by:

F_exc(t) = S_a ℜ[ F̂_exc e^{i(ωt + φ)} ] (26)

with S_a as the wave amplitude, F̂_exc as the complex value of the excitation force per meter of wave amplitude, ω as the wave angular frequency, and φ as the wave phase. The radiation force is calculated as follows:

F_rad(t) = −m_∞ z̈(t) − ∫ from 0 to t of K_r(t − τ) ż(τ) dτ (27)

In Eq. (27), m_∞ is the added mass for infinite frequencies, K_r is the radiation velocity impulse response, and ż is the velocity of the floating body and generator movable part. The hydrostatic force is:

F_H = −ρ g A_w z (28)

with ρ as the specific mass of seawater, g as the gravity acceleration, A_w as the cross-sectional area of the submerged part of the floating body, and z as its vertical position.
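The regular-wave excitation force above is straightforward to evaluate numerically; the numeric values used in the check below are invented placeholders:

```python
import cmath

def excitation_force(t, S_a, F_hat, omega, phi):
    """Regular-wave excitation force (N).

    F_hat is the complex excitation force per meter of wave
    amplitude (N/m), typically obtained from a hydrodynamic
    solver such as NEMOH, as mentioned later in the chapter.
    """
    return S_a * (F_hat * cmath.exp(1j * (omega * t + phi))).real
```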
The floating body is subjected to a viscous drag force given by:

F_v(t) = (1/2) ρ C_D A_D (η̇ − ż) |η̇ − ż| (29)

where A_D is the cross-sectional area of the floating body, C_D is the viscous drag coefficient, and η̇ is the vertical velocity of the water surface. The mathematical equations of the TLSRG and of the point absorber device are combined to fully define the dynamic model for the direct drive ocean wave conversion system, as schematized in Figure 11. The dimensions of the TLSRG are presented in Table 1. The electromagnetic characteristics of the machine, displayed in Figure 12, were obtained with MagNet®, a commercial software package that uses the finite element method for electromagnetic analysis. The control parameters h_b, z_on, and z_off were optimized for different combinations of velocity and phase current to maximize the mean value of the generated electric power. In this chapter, only the dynamic simulation of the system is relevant, so the applied optimization process is not presented. The optimal values are displayed in Figure 13.
System simulation/simulation results
These values were included in the mathematical model through 2D look-up tables and were computed, as a function of the TLSRG movable part velocity and phase current, by linear interpolation. The capacitor in the converter was defined with a capacitance of 0.05 F.
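The 2D look-up interpolation mentioned above can be sketched with plain bilinear interpolation on a regular grid. The grids and table values in the check below are invented for illustration:

```python
import numpy as np

def lookup2d(x, y, xs, ys, table):
    """Bilinear interpolation of table[i, j] on regular grids xs, ys.

    xs, ys: sorted 1D grid coordinates; table: 2D array with
    table[i, j] = value at (xs[i], ys[j]). Queries are assumed
    to lie inside the grid.
    """
    i = np.clip(np.searchsorted(xs, x) - 1, 0, len(xs) - 2)
    j = np.clip(np.searchsorted(ys, y) - 1, 0, len(ys) - 2)
    tx = (x - xs[i]) / (xs[i + 1] - xs[i])
    ty = (y - ys[j]) / (ys[j + 1] - ys[j])
    return ((1 - tx) * (1 - ty) * table[i, j]
            + tx * (1 - ty) * table[i + 1, j]
            + (1 - tx) * ty * table[i, j + 1]
            + tx * ty * table[i + 1, j + 1])
```

In the simulation, x and y would be the movable part velocity and the phase current, and the table would hold the pre-optimized control parameter values.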
Figure 11. Schematics of the ocean wave conversion system dynamic model.

A value of 41,430 kg was assumed for the mass of the oscillating body m_b, and the added mass for infinite frequencies m_∞ was quantified as 9951.1 kg. The specific mass of seawater ρ was set as 1025 kg/m³, and a value of 0.88 was defined for the viscous drag coefficient. The excitation force profile and impulse response function were computed with NEMOH and are displayed in Figure 14. For the H-bridge isolated DC/DC converter, the ratio N_2/N_1 was chosen as 10 to supply an output voltage of 400 V for a DC bus voltage of 4 kV and a duty cycle of 0.5. The PI controller was configured with a proportional gain K_p of 4089.3 and an integral gain K_i of 639.6. The system's mathematical model was implemented in Simulink® and was solved by the Dormand-Prince method, with a relative tolerance of 1 × 10⁻³ and a maximum step of 8 × 10⁻⁴ s. The system's mathematical model was simulated for distinct values of i_ref in order to evaluate the generator performance with different damping forces. The mean value of the generated electric power and the electric conversion efficiency, as functions of i_ref, can be found in Figure 15.
According to the simulation results, the highest average generated electric power was 7.6 kW, obtained for a reference current of 35 A. This reference current value also yielded the best electric efficiency, 43%. Figure 16 shows the oscillating body absolute position and velocity obtained from the simulations with i_ref of 35 A.
The electric phase current and electromechanical force profiles are shown, respectively, in Figures 17 and 18. The TLSRG converter DC bus voltage level is displayed in Figure 19. With the application of the DC/DC isolated converter, the DC bus voltage was kept near its reference value with a maximum error of 1.05%.
Conclusion
In this chapter, a mathematical model to simulate the dynamic behavior of linear switched reluctance generators was presented, to be applied in ocean wave direct drive converters. The model was developed according to the circuit configuration of the asymmetric H-bridge converter, adopted as the power electronic converter to control the energy flow in the machine when operating as a generator. A hysteresis controller was applied to maintain the electric phase current close to a reference value during each conversion cycle, in order to allow the control of the generator damping force. Also, an isolated DC/DC converter was included to adjust the energy flow between the asymmetric H-bridge converter and the system electric load, in order to keep the DC bus voltage level near its nominal value. A PI controller was proposed to control the pulse width of the DC/DC conversion stage. A practical case study scenario was considered, where the generator mathematical model was applied to simulate a TLSRG operating as a power take-off system in an ocean wave point absorber device. The mathematical model of the TLSRG was integrated with the dynamic equations of the point absorber to evaluate the system behavior. In the simulation process, a regular ocean wave was assumed to drive the system. The system performance was evaluated for distinct values of i_ref, which imply distinct damping load profiles. The best performance was found for an i_ref of 35 A, where an average electric power of 7.6 kW was generated with an efficiency of 43%. With the application of the isolated DC/DC converter, it was possible to maintain the DC bus voltage near its reference value with a maximum deviation of 1.05%.
The Radial Point Interpolation Mixed Collocation (RPIMC) Method for the Solution of Transient Diffusion Problems
The Radial Point Interpolation Mixed Collocation (RPIMC) method is proposed in this paper for the transient analysis of diffusion problems. RPIMC is an efficient, purely meshless method where the solution of the field variable is obtained through collocation. The field function and its gradient are both interpolated (mixed collocation approach), leading to a reduced C-continuity requirement compared to strong-form collocation schemes. The method's accuracy is evaluated in heat conduction benchmark problems. The RPIMC convergence is compared against the Meshless Local Petrov-Galerkin Mixed Collocation (MLPG-MC) method and the Finite Element Method (FEM). Due to the Kronecker delta property of RPIMC, improved accuracy can be achieved as compared to MLPG-MC. RPIMC is proven to be a promising meshless alternative to FEM for transient diffusion problems.
Introduction
The diffusion equation describes physical phenomena where motion is driven by the gradient of the field variable. Time-dependent problems like heat and mass transport [1], unsteady viscous fluid flow [2] and magnetohydrodynamics flow [3] can be solved by the transient diffusion equation. Also, the diffusion equation appears in the description of coupled phenomena, such as the transport of chemical or biological reactions by diffusive propagation in a medium (reaction-diffusion phenomena) [4]. Mathematically, reaction-diffusion problems are described by a coupled set of ordinary differential equations (ODEs) that describes the reactive term and a partial differential equation (PDE) that describes the diffusive term. Usually, a much smaller time scale is required for the reactive term than for the diffusive term and operator splitting techniques are used to decouple the problem and compute a numerical solution efficiently [5,6]. Among the various available methods to solve the PDE of the diffusive term, of great interest are Meshless Methods (MMs).
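The operator-splitting idea mentioned above can be sketched minimally with first-order (Lie) splitting; the step functions below are placeholders standing in for a reaction ODE solver and a diffusion PDE solver:

```python
def lie_split_step(u, dt, react_step, diffuse_step):
    """One Lie (first-order) operator-splitting step.

    react_step(u, dt) advances the reaction ODE term;
    diffuse_step(u, dt) advances the diffusion PDE term.
    The reaction solver may internally subcycle with a
    smaller time step, as noted in the text.
    """
    u = react_step(u, dt)
    return diffuse_step(u, dt)
```

Second-order (Strang) splitting would instead apply a half reaction step, a full diffusion step, and another half reaction step.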
MMs, in contrast to mesh-based methods, do not require connectivity information for the construction of basis functions. Therefore, domains with irregular geometry, nonlinearity and discontinuity can be treated efficiently. The use of MMs to solve both the steady and transient diffusion equation has been extensively reported. Steady-state heat conduction in isotropic and functionally graded materials has been solved successfully by the Meshless Point Collocation (MPC) method [7]. In [8], an explicit collocation method with local Radial Basis Functions (RBFs) has been successfully applied to solve the transient diffusion equation in two-dimensional (2D) domains. The collocation methods have demonstrated high efficiency due to the compact support and the small bandwidth in linear algebraic systems [9]. However, accuracy has been shown to deteriorate near the Neumann boundaries due to the requirement for the approximation of spatial derivatives, which is significantly less accurate than the approximation of the field variable [10].
On the other hand, in the Element Free Galerkin (EFG) meshless method [11], which is based on the Galerkin weak formulation, the Neumann boundary conditions (BCs) are satisfied naturally, similarly to the Finite Element Method (FEM). In EFG, the notion of a background mesh is introduced for the generation of quadrature points and the evaluation of the weak form's spatial integrals. Application of EFG for the solution of heat transfer problems has been rigorously explored [12,13,14]. Moreover, the improved Moving Least Squares (MLS) approximants have been proposed to enhance the handling of Dirichlet BCs and the efficiency of EFG for three-dimensional (3D) heat conduction problems [15]. Maximum entropy approximants that possess the weak-Kronecker delta property for direct imposition of Dirichlet BCs have also been proposed in the framework of EFG [16,17,18,19].
A different approach is considered in the Meshless Local Petrov-Galerkin (MLPG) method [20,21,22], where quadrature points are generated in individual local quadrature domains centered at each field node and the trial and test functions can be selected from different spaces. The flexibility in the selection of the test functions offers the possibility to construct different variations of the MLPG method [23]. By choosing the Dirac function as the test function and interpolating both the field function and its gradient, the MLPG Mixed Collocation (MLPG-MC) method is derived [24]. MLPG-MC has minimum computational cost since no integration is performed. Compared to standard collocation methods, MLPG-MC demonstrates reduced deterioration at the Neumann boundaries, as the order of the spatial derivatives is reduced through the interpolation of the field function's gradient. The MLPG-MC method has been successfully applied to solve inverse Cauchy problems for steady-state heat transfer [25].
Variations of the MLPG method using different trial functions have been investigated extensively [26,27]. Since MLS basis functions do not possess the delta Dirac property, special treatment to impose the Dirichlet BC is required. To address this issue, the Local Radial Point Interpolation method has been proposed [28], in which the MLS basis functions are replaced with Radial Point Interpolation (RPI). The RPI basis functions possess the Kronecker delta property and Dirichlet BC imposition is straightforward as in FEM and maximum entropy approximants. The Local RPI method has been used to successfully solve problems in free vibration analysis [29], incompressible flow [28], material non-linearity [30] and transient heat conduction [31], among others. However, to our knowledge, RPI has not been so far evaluated in the mixed collocation variant.
The purpose of the present study is to investigate the performance of the mixed collocation method using the RPI basis functions for the solution of transient diffusion problems. The method is subsequently applied to solve the monodomain reaction-diffusion equation for electrical impulse propagation in the heart [32]. The standard approach to solve the monodomain model involves the use of the operator splitting method to decouple the reaction and diffusion parts and solve them separately. It is for that reason that in this study pure transient diffusion problems are initially considered. Without loss of generality, the method is evaluated in 2D and 3D benchmark problems of transient heat conduction. The structure of the paper is the following. In section 2, the theory of the RPI basis functions is reviewed. In section 3, the mathematical formulation and implementation details of the Radial Point Interpolation Mixed Collocation (RPIMC) method are presented. In section 4, the RPIMC method is first evaluated in 2D and 3D heat conduction benchmark problems and subsequently applied to solve the monodomain model for a 3D tissue slab and a biventricular geometry. The time efficiency of RPIMC method for the solution of the monodomain model is profiled. Finally, in section 5 some concluding remarks are provided.
Radial point interpolation review
In RPI [33], RBFs augmented with polynomials are used to approximate the field function. In contrast to MLS, RPI possesses the Kronecker delta property, therefore essential boundary conditions are imposed directly. For any field function u(x), defined in the domain Ω ⊂ R^d, the RPI approximation u^h(x_I) at a point of interest x_I ∈ R^d is given by:

u^h(x_I) = \sum_{i=1}^{n} r_i(x_I) a_i(x_I) + \sum_{j=1}^{m} p_j(x_I) b_j(x_I)    (1)

where r_i(x_I) are the RBFs and p_j(x_I) are the polynomial basis functions, a_i(x_I) and b_j(x_I) denote the corresponding coefficients, n is the number of neighbor nodes in the local support domain of x_I, and m is the number of polynomial terms. In Equation (1), different forms of RBFs can be used to represent r_i(x_I). In this study, we used the Multi-Quadric RBFs (MQ-RBFs) due to their satisfactory performance reported in previous studies [34,27]. In 2D, the MQ-RBFs are given by:

r_i(x_I) = (d_{Ii}^2 + r_c^2)^q,   d_{Ii} = \sqrt{(x_I - x_i)^2 + (y_I - y_i)^2}    (2)

where r_c and q are positive-valued shape parameters of the MQ-RBF and d_{Ii} is the Euclidean norm between the point of interest x_I = (x_I, y_I) and the i-th neighbor node x_i = (x_i, y_i). Analogous MQ-RBFs are defined in 3D. Rectangular and cuboid local support domains for 2D and 3D problems, respectively, were constructed in this study. Following the notation in [34], the shape parameter r_c is given by:

r_c = α_c d_c    (3)

where α_c is a dimensionless constant and d_c denotes the average nodal spacing in the proximity of the point of interest x. The effect of the choice of α_c and q on the approximation accuracy has been investigated in [35,28]. In this study, parameter values α_c = 1.5 and q = 1.03 were used in all the problems of section 4.
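As a quick illustration, the MQ-RBF values for a point of interest can be computed directly; the node layout below is arbitrary, while α_c = 1.5 and q = 1.03 follow the values used in this study.

```python
import numpy as np

# Multi-Quadric RBF r_i(x) = (d_i^2 + r_c^2)^q with r_c = alpha_c * d_c,
# evaluated on a small 2D node set (illustrative node positions only).
alpha_c, q = 1.5, 1.03
nodes = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [0.1, 0.1]])
d_c = 0.1                      # average nodal spacing near the point of interest
r_c = alpha_c * d_c            # shape parameter

def mq_rbf(x, nodes, r_c, q):
    """Row vector of MQ-RBF values at point x for all neighbor nodes."""
    d2 = np.sum((nodes - x) ** 2, axis=1)   # squared Euclidean distances
    return (d2 + r_c ** 2) ** q

r = mq_rbf(np.array([0.05, 0.05]), nodes, r_c, q)
print(r)   # four positive values, equal here by symmetry
```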
In this work, we used the linear polynomial basis functions (k = 1), i.e., p^T(x) = [1, x, y] in 2D. The coefficients a_i(x_I), b_j(x_I) are obtained by requiring the field function to pass through all the n field nodes in the local support domain, expressed in matrix form:

u_s = R a + P b    (5)

where u_s = {u_1, u_2, . . . , u_n}^T is the vector of the field function parameters at the nodes of the local support domain, R is the RBF moment matrix of size n × n, and P is the polynomial moment matrix of size n × m. A unique solution to Equation (5) is obtained by applying the following constraint condition [36]:

P^T a = 0    (6)

By combining Equations (5) and (6) the following equations are obtained:

G \begin{bmatrix} a \\ b \end{bmatrix} = \begin{bmatrix} u_s \\ 0 \end{bmatrix},   G = \begin{bmatrix} R & P \\ P^T & 0 \end{bmatrix}    (7)

and the unique solution is given by:

\begin{bmatrix} a \\ b \end{bmatrix} = G^{-1} \begin{bmatrix} u_s \\ 0 \end{bmatrix}    (8)

To ensure that G is not singular (so that G^{-1} exists), R^{-1} should exist. The existence requirement is usually satisfied, even for arbitrarily scattered nodes [37,38], rendering RPI a stable approximation method. Finally, the RPI approximation u^h(x_I) at x_I as a function of the RPI basis functions:

Φ(x_I) = [φ_1(x_I), . . . , φ_n(x_I)], given by the first n entries of [r^T(x_I)  p^T(x_I)] G^{-1}    (9)

is obtained from Equations (1) and (8) as follows:

u^h(x_I) = Φ(x_I) u_s = \sum_{i=1}^{n} φ_i(x_I) u_i    (10)

The derivatives of u^h(x) can be computed as:

u^h_{,J}(x) = \sum_{i=1}^{n} φ_{i,J}(x) u_i    (11)

where J denotes a spatial coordinate and the comma symbol designates partial differentiation with respect to J.
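A compact sketch of the RPI construction described above, on an arbitrary small node set, which also verifies numerically the Kronecker delta property φ_i(x_j) = δ_ij:

```python
import numpy as np

# RPI basis functions from MQ-RBFs plus linear polynomial augmentation.
# Illustrative 2D node set; parameters as in the study (alpha_c=1.5, q=1.03).
alpha_c, q, d_c = 1.5, 1.03, 0.1
r_c = alpha_c * d_c

nodes = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
                  [0.1, 0.1], [0.05, 0.05]])
n, m = len(nodes), 3                      # m = 3 for the basis {1, x, y}

def rbf_row(x):
    d2 = np.sum((nodes - x) ** 2, axis=1)
    return (d2 + r_c ** 2) ** q

def poly_row(x):
    return np.array([1.0, x[0], x[1]])

# Moment matrices: R (n x n), P (n x m), enriched matrix G ((n+m) x (n+m))
R = np.array([rbf_row(xi) for xi in nodes])
P = np.array([poly_row(xi) for xi in nodes])
G = np.zeros((n + m, n + m))
G[:n, :n], G[:n, n:], G[n:, :n] = R, P, P.T
G_inv = np.linalg.inv(G)

def phi(x):
    """RPI basis functions at x: first n entries of [r^T p^T] G^{-1}."""
    return (np.concatenate([rbf_row(x), poly_row(x)]) @ G_inv)[:n]

# Kronecker delta property: phi_i(x_j) = delta_ij (exact by construction,
# up to roundoff), which is what makes direct Dirichlet imposition possible.
PHI = np.array([phi(xj) for xj in nodes])
print(np.allclose(PHI, np.eye(n), atol=1e-8))
```

The polynomial augmentation is what guarantees invertibility of G for this MQ exponent, since the bare MQ kernel is only conditionally positive definite.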
Radial Point Interpolation Mixed Collocation Method
In this section, the theoretical aspects of RPIMC and its computer implementation are described. The RPIMC theoretical formulation is based on the principles of MLPG-MC [25,24]. However, the RPI basis function is used as the trial function instead of the MLS. The Dirac delta function is selected as the test function, thus reducing the integration over local domains to collocation. Without loss of generality, the field variable u is used to represent the temperature field in the following.
Theoretical aspects
Let us consider the balance equation of heat transfer in a domain Ω with boundary ∂Ω = ∂Ω_u ∪ ∂Ω_q, given by:

ρ c \frac{∂u(x,t)}{∂t} + ∇ · q(x,t) = f(x,t),   x ∈ Ω    (12)

subject to the Dirichlet and Neumann boundary conditions:

u(x,t) = \bar{u}(x,t),   x ∈ ∂Ω_u    (13)

q(x,t) · n = \bar{q}(x,t),   x ∈ ∂Ω_q    (14)

where c is the specific heat capacity, ρ is the material density, u(x,t) is the temperature field, q(x,t) is the heat flux, f(x,t) denotes the sum of any heat sources acting in the domain Ω, ū is the prescribed value of the temperature field on the Dirichlet boundary ∂Ω_u, q̄ is the prescribed value of the heat flux on the Neumann boundary ∂Ω_q, n is the outward unit vector normal to ∂Ω_q and k is the thermal diffusivity coefficient appearing in the flux-temperature relation below.
The heat flux q can be expressed by the heat flux-temperature gradient relation as:

q(x,t) = -k ∇u(x,t)    (15)

The RPI basis functions are used to interpolate the temperature field u(x,t) and the heat flux fields q_J(x,t), thus obtaining:

u^h(x,t) = \sum_{i=1}^{n} φ_i(x) u_i(t),   q_J^h(x,t) = \sum_{i=1}^{n} φ_i(x) q_{Ji}(t)    (16)

where n is the number of field nodes in the local support domain of x.
From Equations (15) and (16), the relationship of the nodal heat fluxes to nodal temperatures is obtained. The heat flux-to-temperature relationship at a node x_I for time t is given by:

q_J(x_I, t) = -k \sum_{i=1}^{n} φ_{i,J}(x_I) u_i(t),   I = 1, . . . , N    (17)

where N is the number of field nodes in the discretization of the domain Ω. In matrix form, the heat flux to temperature relationship is written as:

q = K_a u    (18)

where q is a vector containing the heat fluxes at each field node x_I, K_a is a sparse matrix containing the partial derivatives of the RPI basis functions at the nodes in the local support of each field node scaled by (-k), and u is a time-dependent vector containing the nodal temperature values.
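The structure of the flux-to-temperature map q = K_a u can be sketched in 1D. Central-difference weights stand in here for the RPI derivative values φ_{i,J}(x_I); this is an illustrative substitution, not the method itself, but any consistent derivative approximation yields the same matrix structure.

```python
import numpy as np

# Sketch of the nodal flux-to-temperature map q = K_a * u in 1D.
# Finite-difference weights stand in for the RPI derivative values
# phi_{i,x}(x_I); each row of K_a holds -k * phi_{i,x}(x_I).
k = 2.0                       # thermal conductivity (illustrative)
N, h = 6, 0.2
x = np.arange(N) * h
K_a = np.zeros((N, N))
for I in range(N):
    if 0 < I < N - 1:         # interior node: central difference
        K_a[I, I - 1] = -k * (-1.0 / (2 * h))
        K_a[I, I + 1] = -k * (1.0 / (2 * h))
    elif I == 0:              # one-sided difference at the left boundary
        K_a[I, 0], K_a[I, 1] = -k * (-1.0 / h), -k * (1.0 / h)
    else:                     # one-sided difference at the right boundary
        K_a[I, N - 2], K_a[I, N - 1] = -k * (-1.0 / h), -k * (1.0 / h)

u = 3.0 * x + 1.0             # linear temperature field: du/dx = 3
q = K_a @ u                   # nodal fluxes: q = -k * du/dx = -6 everywhere
print(q)
```

For a linear field the derivative weights are exact, so every nodal flux equals -k times the uniform gradient.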
Introducing Equations (16)-(18) in Equation (12), the RPIMC formulation for a node x_I at time t is given by:

ρ c \sum_{i=1}^{n} φ_i(x_I) \dot{u}_i(t) + \sum_{J=1}^{d} \sum_{i=1}^{n} φ_{i,J}(x_I) q_{Ji}(t) = f(x_I, t)    (20)

which can be expressed in terms of the temperature field as:

ρ c \sum_{i=1}^{n} φ_i(x_I) \dot{u}_i(t) - k \sum_{J=1}^{d} \sum_{i=1}^{n} \sum_{l=1}^{n} φ_{i,J}(x_I) φ_{l,J}(x_i) u_l(t) = f(x_I, t)    (21)

Equations (20) and (21) can be written in the equivalent matrix form as:

M \dot{u} + K_s q = f    (22)

M \dot{u} + K_s K_a u = f    (23)

where M is the mass matrix with entries ρ c φ_i(x_I), K_s is the sparse matrix collecting the partial derivatives of the RPI basis functions φ_{i,J}(x_I), and f is the vector of nodal source terms.
Boundary conditions imposition
To impose Dirichlet BCs (Equation (13)) in the mixed collocation method, the prescribed temperature values at field nodes x_I belonging to the Dirichlet boundary ∂Ω_u are considered. The collocation method is used to enforce them:

u^h(x_I, t) = \sum_{i=1}^{n} φ_i(x_I) u_i(t) = \bar{u}(x_I, t),   x_I ∈ ∂Ω_u    (24)

where n is the number of field nodes in the support domain of x_I. Due to the Kronecker delta property of the RPI basis functions, Equation (24) is reduced to strong imposition in the RPIMC method and Dirichlet BCs are satisfied exactly. This is in contrast to the MLPG-MC, in which special treatment for the Dirichlet BCs is required due to the lack of the Kronecker delta property of the MLS basis functions.
Neumann BCs (Equation (14)) are enforced using the penalty method described in [39]. The rows of the matrices K_s and K_a are reordered such that the γ_r nodes on the ∂Ω_q boundary (Neumann nodes, denoted by superscript 1) are listed first, followed by the γ_u nodes on the ∂Ω_u boundary (Dirichlet nodes, denoted by superscript 2) and the γ_in nodes in the interior of the domain Ω, such that the total number of nodes is N = γ_r + γ_u + γ_in. For a given time t, the matrix form of Equation (14) is given by:

N_r q^1 = q_r    (25)

where q^1 is the vector of the nodal heat fluxes for the γ_r nodes, N_r is the matrix containing the normal vectors and q_r is the vector of the prescribed heat fluxes for the γ_r nodes, given by:

q_r = \{\bar{q}(x_1, t), \bar{q}(x_2, t), . . . , \bar{q}(x_{γ_r}, t)\}^T    (26)

The Neumann BCs are enforced at the nodes γ_r by multiplying Equation (25) with the penalty factor αN_r^T and adding it to Equation (19) to obtain:

q^1 + α N_r^T N_r q^1 = K_a^1 u + α N_r^T q_r    (27)

By rearranging terms, Equation (27) can be written as:

Q q^1 = K_a^1 u + α N_r^T q_r    (28)

where I is the identity matrix and Q = I + αN_r^T N_r. Combining Equations (22) and (28), the matrix form of the modified heat transfer balance equation is given by:

M \dot{u} + \tilde{K} u = \tilde{f}    (29)

where \tilde{K} = K_s^1 Q^{-1} K_a^1 + K_s^2 K_a^2 + K_s^{in} K_a^{in} and \tilde{f} = f - α K_s^1 Q^{-1} N_r^T q_r, the superscripts denoting the corresponding blocks after reordering. In the penalty method, a large value should be selected for the penalty factor α to ensure the accuracy of the BC enforcement. However, if α is too large, stability issues may arise. In our study, we found that α in the range [10^4, 10^7] led to satisfactory results, with the best ones (lowest approximation errors in the benchmark problems) obtained for α = 10^6.
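A toy version of the penalty step for a single boundary node (all numbers illustrative) shows how the enforced normal flux approaches the prescribed value as α grows, while the tangential component is left untouched:

```python
import numpy as np

# Toy penalty enforcement for one 2D boundary node.
# g stands in for K_a^1 @ u (the flux predicted from nodal temperatures),
# n_vec is the outward unit normal, qbar the prescribed normal flux.
n_vec = np.array([1.0, 0.0])           # outward unit normal
g = np.array([2.0, -1.5])              # unconstrained flux prediction
qbar = 0.5                             # prescribed normal flux

def penalized_flux(alpha):
    # Solve (I + alpha * n n^T) q1 = g + alpha * n * qbar
    Q = np.eye(2) + alpha * np.outer(n_vec, n_vec)
    return np.linalg.solve(Q, g + alpha * n_vec * qbar)

for alpha in (1e2, 1e4, 1e6):
    q1 = penalized_flux(alpha)
    print(alpha, q1 @ n_vec)           # normal component tends to qbar
```

This also illustrates why α must be large but finite: the constraint is only satisfied up to O(1/α), while an excessive α degrades the conditioning of the system.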
Computer implementation
Regularly distributed nodes with equidistant spacing h in all coordinates were considered. The RPI shape parameters were selected as α_c = 1.5, d_c = h and q = 1.03. The penalty factor α = 10^6 was chosen to enforce Neumann BCs through the penalty method. The standard forward finite difference scheme (forward Euler) with mass lumping was used to approximate the partial derivative with respect to time explicitly. The forward Euler method is well-known to be conditionally stable; to ensure stability, an adequately small time step must be used. An estimation of the stable time step was computed by applying the Gerschgörin theorem [40]:

dt_s = \min_i (m_{ii} / k_{ii})    (30)

where m_ii, k_ii are the diagonal entries in the M and K matrices, respectively. The selected time step dt = (0.9) dt_s was chosen after applying a 10% reduction to the stable time step to ensure the stability of the time integration. The pseudo-code of the RPIMC method's computer implementation is given in Algorithm 1.

Algorithm 1: RPIMC
procedure RPIMC
    compute normals for boundary field nodes
    for each field node i do
        find field nodes in the local support domain of i
        compute basis functions and derivatives
        if i is on Neumann boundary then
            add the penalty contributions for node i
        end if
    end for
    assemble the matrices M and K
    compute dt using the Gerschgörin theorem, Equation (30)
    t = 0
    while t <= t_f do
        update body source: f
        update field variable: u using the forward Euler scheme
        t = t + dt
    end while
end procedure
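The time loop and the stable-step estimate can be condensed into a short sketch; the finite-difference matrices below are illustrative stand-ins for the assembled RPIMC matrices, and the diagonal-entry step estimate dt_s = min_i(m_ii/k_ii) is used as stated above.

```python
import numpy as np

# Forward Euler integration of M*du/dt + K*u = 0 for 1D heat conduction,
# with the stable step estimated from the diagonal entries of M and K
# (a Gerschgoerin-type bound). Illustrative matrices, unit rho*c.
k_th, N = 1.0, 21
h = 1.0 / (N - 1)
x = np.linspace(0.0, 1.0, N)
M = np.ones(N)                                   # lumped (diagonal) mass
K = (k_th / h**2) * (2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1))

dt_s = np.min(M / np.diag(K))                    # = h^2 / (2*k_th) here
dt = 0.9 * dt_s                                  # 10% safety reduction
u = np.sin(np.pi * x)                            # u = 0 at both ends
t, t_end = 0.0, 0.05
while t < t_end - 1e-12:
    u = u - dt * (K @ u) / M                     # explicit update
    u[0] = u[-1] = 0.0                           # Dirichlet BCs
    t += dt

exact = np.exp(-k_th * np.pi**2 * t) * np.sin(np.pi * x)
print(np.max(np.abs(u - exact)))                 # small discretization error
```

Removing the 10% reduction and pushing dt above dt_s makes the iteration diverge, which is the conditional stability the text refers to.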
Numerical Benchmarks and Cardiac Electrophysiology problem
The performance of the RPIMC method is presented for several 2D and 3D heat transfer benchmark problems for which an analytical solution is available. Convergence analysis of the numerical solution u_h against the analytical solution u_an was performed in terms of the E_2 and NRMS error metrics, given by:

E_2 = \sqrt{ \frac{ \sum_{I=1}^{N} (u_h(x_I) - u_{an}(x_I))^2 }{ \sum_{I=1}^{N} u_{an}(x_I)^2 } },   NRMS = \frac{1}{u_{an}^{max} - u_{an}^{min}} \sqrt{ \frac{1}{N} \sum_{I=1}^{N} (u_h(x_I) - u_{an}(x_I))^2 }    (31)

For comparison, the benchmark problems were additionally solved with the MLPG-MC and FEM methods and convergence analysis was performed. The convergence rate (ρ) for the E_2 and NRMS error metrics at successive refinements was calculated at the final simulation time t = t_f using Equation (32), as proposed in [41]:

ρ = \frac{\log(E_a / E_b)}{\log(h_a / h_b)}    (32)

where E_a, E_b denote the error and h_a, h_b the nodal spacing at two successive refinements. For the MLPG-MC method, the MLS basis function with linear polynomial basis was used as trial function and the quartic spline function as test function. FEM simulations were performed by using linear triangle and tetrahedral elements in 2D and 3D problems, respectively.
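The error metrics and the convergence rate of Equation (32) are straightforward to compute; the metric definitions below are common normalizations assumed here, and the manufactured data simply checks that an error scaling as h^2 yields ρ ≈ 2.

```python
import numpy as np

# Error metrics and observed convergence rate (assumed normalizations):
# e2   : relative L2 norm of the nodal error,
# nrms : RMS error normalized by the range of the analytical solution,
# rate : observed order from two successive refinements.
def e2(u_h, u_an):
    return np.sqrt(np.sum((u_h - u_an) ** 2) / np.sum(u_an ** 2))

def nrms(u_h, u_an):
    return np.sqrt(np.mean((u_h - u_an) ** 2)) / (u_an.max() - u_an.min())

def rate(e_a, e_b, h_a, h_b):
    return np.log(e_a / e_b) / np.log(h_a / h_b)

# Synthetic check: a manufactured O(h^2) error gives rho close to 2.
h_a, h_b = 0.1, 0.05
x_a, x_b = np.linspace(0, 1, 11), np.linspace(0, 1, 21)
u_an_a, u_an_b = np.sin(np.pi * x_a), np.sin(np.pi * x_b)
u_h_a = u_an_a + h_a ** 2 * np.cos(np.pi * x_a)
u_h_b = u_an_b + h_b ** 2 * np.cos(np.pi * x_b)
rho = rate(e2(u_h_a, u_an_a), e2(u_h_b, u_an_b), h_a, h_b)
print(rho)
```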
The RPI and MLS approximation schemes used in this work were implemented using MATLAB and are available in an open-source repository [42].
Lateral heat loss in 2D with Dirichlet boundary conditions
A heat conduction problem with lateral heat loss was solved in a 2D square domain Ω with edge length l = 1. The problem is described by a transient diffusion PDE with a lateral heat loss term, with Dirichlet BCs on ∂Ω:

u(0, y, t) = u(1, y, t) = 0,   u(x, 0, t) = e^{-t} \sin(πx),   u(x, 1, t) = -e^{-t} \sin(πx)

The initial condition was obtained from the analytical solution at t = 0:

u(x, y, t) = e^{-t} \sin(πx) \cos(πy)
The problem was solved for the time interval t = [0, 1] for 11 × 11 regularly distributed nodes in Ω with spatial spacing h = 0.1. Figure (1) shows the profiles of the solution for y = 1 and for x = 0.5, respectively. Convergence analysis was performed for successive refinements with h = [0.1, 0.05, 0.025, 0.0125]. The convergence analysis results for the E_2 and NRMS error metrics are presented in Figure (2). A summary of the convergence rates for the E_2 and NRMS error metrics is provided in Table 1 and Table 2.
Heat conduction in 3D with insulated borders
A heat conduction problem with insulated borders was solved in a 3D cubic domain Ω with edge length l = π. The problem is governed by the PDE:

∂u/∂t = ∇²u,   x ∈ Ω

with Neumann BCs on ∂Ω:

∂u/∂n = 0

The initial condition was obtained from the analytical solution at t = 0:

u(x, y, z, t) = 1 + 2e^{-3t} \cos(x)\cos(y)\cos(z) + 3e^{-29t} \cos(2x)\cos(3y)\cos(4z)
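As a consistency check, one can verify numerically that the stated analytical solution satisfies the heat equation with unit diffusivity; the decay rates 3 = 1 + 1 + 1 and 29 = 4 + 9 + 16 reflect exactly this.

```python
import numpy as np

# Finite-difference check that
# u = 1 + 2 e^{-3t} cos(x)cos(y)cos(z) + 3 e^{-29t} cos(2x)cos(3y)cos(4z)
# satisfies u_t = laplacian(u) at an arbitrary interior point.
def u(x, y, z, t):
    return (1.0
            + 2.0 * np.exp(-3.0 * t) * np.cos(x) * np.cos(y) * np.cos(z)
            + 3.0 * np.exp(-29.0 * t) * np.cos(2 * x) * np.cos(3 * y) * np.cos(4 * z))

x0, y0, z0, t0, eps = 0.7, 1.1, 2.0, 0.05, 1e-4
u_t = (u(x0, y0, z0, t0 + eps) - u(x0, y0, z0, t0 - eps)) / (2 * eps)
u0 = u(x0, y0, z0, t0)
lap = ((u(x0 + eps, y0, z0, t0) - 2 * u0 + u(x0 - eps, y0, z0, t0))
       + (u(x0, y0 + eps, z0, t0) - 2 * u0 + u(x0, y0 - eps, z0, t0))
       + (u(x0, y0, z0 + eps, t0) - 2 * u0 + u(x0, y0, z0 - eps, t0))) / eps**2
print(abs(u_t - lap))   # ~0 up to finite-difference error
```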
The problem was solved for the time interval t = [0, 1] for 11 × 11 × 11 regularly distributed nodes in Ω with spatial spacing h = π/10. Figure (3) shows the profiles of the solution for y = z = π/5 and x = y = π/5. Convergence analysis was performed for successive refinements with h = [π/10, π/20, π/30, π/40]. The convergence analysis results for the E_2 and NRMS error metrics are presented in Figure (4). A summary of the convergence rates for the E_2 and NRMS error metrics is provided in Table 1 and Table 2.
The problem was solved for the time interval t = [0, 1] for 11 × 11 × 11 regularly distributed nodes in Ω with spatial spacing h = π/10. Figure (5) shows the profiles of the solution for y = z = π/2 and for x = y = 3π/5. Convergence analysis was performed for successive refinements with h = [π/10, π/20, π/30, π/40]. The convergence analysis results for the E_2 and NRMS error metrics are presented in Figure (6). A summary of the convergence rates for the E_2 and NRMS error metrics is provided in Table 1 and Table 2.
Electrical propagation in a cardiac biventricular model
The propagation of an electrical stimulus in a cardiac biventricular geometry was simulated by solving the decoupled monodomain model after application of the operator splitting method [43], given by:

∂V/∂t = -I_{ion} / C   (reaction part)

∂V/∂t = ∇ · (D ∇V)  in Ω,   (D ∇V) · n = 0  on ∂Ω   (diffusion part)

where Ω and ∂Ω denote the domain of interest and its boundary, respectively, and n is the outward unit vector normal to the boundary. ∂V/∂t is the time derivative of the transmembrane voltage, I_ion is the total ionic current and C is the cell capacitance per unit surface area. D denotes the diffusion tensor, calculated as:

D = d_0 (ρ I + (1 - ρ) f ⊗ f)

where d_0 denotes the diffusion coefficient along the myocardial fiber direction, ρ is the transverse-to-longitudinal ratio of conductivity, f is the myocardial fiber direction vector, I is the identity matrix and ⊗ denotes the tensor product operation.
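Assuming the common transversely isotropic form D = d_0(ρI + (1 − ρ) f⊗f), which reproduces the stated transverse-to-longitudinal conductivity ratio, the tensor can be constructed and checked as follows (the fiber direction below is arbitrary):

```python
import numpy as np

# Diffusion tensor D = d0 * (rho*I + (1 - rho) * f f^T): conductivity d0
# along the fiber direction f and d0*rho transverse to it. Parameter
# values follow the biventricular setup in the text.
d0, rho = 0.002, 0.25                      # cm^2/ms and dimensionless ratio
f = np.array([1.0, 2.0, 2.0])
f = f / np.linalg.norm(f)                  # unit fiber direction (arbitrary)
D = d0 * (rho * np.eye(3) + (1.0 - rho) * np.outer(f, f))

along = f @ D @ f                          # conduction along the fiber: d0
t_vec = np.array([2.0, -1.0, 0.0])
t_vec = t_vec / np.linalg.norm(t_vec)      # a direction orthogonal to f
across = t_vec @ D @ t_vec                 # transverse conduction: d0*rho
print(along, across)
```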
The biventricular anatomy was discretized in a tetrahedral mesh with 273919 nodes and 1334218 elements. The myocardial fiber direction vectors were computed using a rule-based method [44]. The value for the diffusion coefficient in the fiber direction was set to d_0 = 0.002 cm²/ms and the value for the transverse-to-longitudinal ratio of conductivity was set to ρ = 0.25. The fast conduction system in the biventricular model was generated using a fractal-tree generation algorithm [45]. Electrical stimulation was applied at the terminal nodes of the fast conduction system, the so-called Purkinje-Myocardial Junctions (PMJs). Stimuli of 1-ms duration and twice the diastolic threshold in amplitude were applied onto the PMJs at a cycle length of 1 s. A full cycle (t = 1 s) was simulated using the RPIMC and MLPG-MC methods to solve the diffusion term of Equation (46) according to Equation (21) with zero Neumann BC. The reaction term was defined by using the O'Hara cell model [46] to represent human ventricular cellular electrophysiology.
A dilatation coefficient a_c = 2.85 was used for the construction of support domains for both RPIMC and MLPG-MC, leading to support domains containing 51-149 nodes. The RPIMC and MLPG-MC simulation results were compared in terms of local activation time (LAT) with a FEM simulation (Figure 7). Time integration was performed with the forward Euler method. The critical diffusion step was found to be dt_RPIMC = 0.064 ms, dt_MLPG-MC = 0.065 ms and dt_FEM = 0.035 ms using the Gerschgörin theorem [40]. The execution time profiling results are shown in Figure 8. RPIMC required a longer time than MLPG-MC for stiffness matrix assembly. This was due to the inversion of the enriched moment matrix G during the computation of the RPI basis functions and their gradients. However, the time required for the solution of the monodomain model with RPIMC was smaller for all resolution levels.
Concluding remarks
The RPIMC method was proposed and tested to solve transient diffusion problems. Since RPIMC uses as trial functions the RPI basis functions, which possess the Kronecker delta property, Dirichlet BCs were imposed directly, as in FEM. The results obtained for a number of benchmark problems demonstrated that RPIMC can achieve high accuracy, similar to that of FEM, generally outperforming the MLPG-MC method.
Furthermore, we showed that RPIMC can be used to solve the monodomain model for simulation of electrical propagation in the heart. Local activation time maps obtained by RPIMC were found to be in good agreement with those of FEM, and markedly closer to them than those obtained by MLPG-MC. However, deterioration of the RPIMC solution was observed at the boundary nodes, where Neumann boundary conditions were imposed. Rough edges present on the biventricular model's surfaces led to discontinuities in the direction of the normal vectors, which is postulated to have negatively affected the accuracy at the Neumann boundary.
In terms of computational efficiency, the RPIMC and MLPG-MC methods performed similarly in simulations of electrical propagation in a cardiac tissue slab, being somewhat less efficient than FEM. As expected, the time for the assembly of the stiffness matrix was higher for RPIMC and MLPG-MC than for FEM due to the additional workload associated with the computation of the meshfree basis functions and their gradients. Additionally, RPIMC was found to be 1.65 times slower than FEM in solving the monodomain model in a large-scale biventricular model. Nevertheless, it should be noted that RPIMC is more efficient than other meshfree methods such as SPH. In [47], the execution time of an SPH implementation of the monodomain model was reported to be 57 minutes for a ventricular model with 51037 nodes and a simulation time of 150 ms, while in this study RPIMC required 99.6 minutes for a biventricular model of 273919 nodes and a simulation time of 1000 ms.
In summary, the RPIMC method is shown to be a good alternative to FEM, with satisfactory accuracy for the solution of transient diffusion problems, such as heat conduction, and with good capabilities for the solution of the monodomain model in cardiac electrophysiology simulations. Importantly, RPIMC is expected to be a promising option for solving coupled electromechanical problems in cardiology, where it could outperform FEM in problems involving large displacements.
(Al, Ga)N-Based Quantum Dots Heterostructures on h-BN for UV-C Emission
Aluminium Gallium Nitride (AlyGa1-yN) quantum dots (QDs) with thin sub-µm AlxGa1-xN layers (with x > y) were grown by molecular beam epitaxy on 3 nm and 6 nm thick hexagonal boron nitride (h-BN) initially deposited on c-sapphire substrates. An AlN layer was grown on h-BN and the surface roughness was investigated by atomic force microscopy for different deposited thicknesses. It was shown that for thicker AlN layers (i.e., 200 nm), the surface roughness can be reduced and hence a better surface morphology is obtained. Next, AlyGa1-yN QDs embedded in Al0.7Ga0.3N cladding layers were grown on the AlN and investigated by atomic force microscopy. Furthermore, X-ray diffraction measurements were conducted to assess the crystalline quality of the AlGaN/AlN layers and examine the impact of h-BN on the subsequent layers. Next, the QDs emission properties were studied by photoluminescence and an emission in the deep ultra-violet, i.e., in the 275–280 nm range was obtained at room temperature. Finally, temperature-dependent photoluminescence was performed. A limited decrease in the emission intensity of the QDs with increasing temperatures was observed as a result of the three-dimensional confinement of carriers in the QDs.
Introduction
The Minamata Convention, convened in Japan on 10 October 2013, led to the establishment of an international treaty aimed at restricting the use of mercury (Hg) and mercury-based devices. This treaty paved the way for the advancement of more efficient and environmentally friendly light sources, such as light-emitting diodes (LEDs) [1]. LEDs provide numerous benefits, including compact size, a wide range of wavelength emissions, extended lifetimes, and reduced power consumption, over commonly used Hg lamps, which are bulky, require high voltages, and pose toxicity risks and recycling issues [2]. LEDs based on aluminum gallium nitride ((Al, Ga)N) can emit light in the ultraviolet (UV) range, particularly in the deep UV (DUV) or UV-C range, with emission wavelengths below 280 nm. These DUV emissions are achievable with Al compositions generally exceeding 40%. This particular aspect has attracted wide interest in developing (Al, Ga)N-based LEDs, since the UV-C region is considered the germicidal range of UV radiation: in this range, the damaging effect on intercellular components (e.g., DNA, RNA, and proteins) of microbes occurs, which can be applied in strategic applications such as surface disinfection, sterilization, and water purification [3,4]. Despite the significant amount of research conducted in the past ten years, (Al, Ga)N UV LEDs with a wavelength shorter than 365 nm exhibit a low external quantum efficiency (EQE) that typically falls within the single-digit percentage range. Although recent advances have increased the efficiency to a maximum value of 20% for UV-C LEDs emitting around 275 nm [5], the EQE typically remains below 10%. The EQE of LEDs is determined by the product of injection efficiency (IE), light extraction efficiency (LEE), and internal quantum efficiency (IQE). Hence, new approaches and methods are required to enhance their efficiency values.
Sapphire is predominantly utilized as the preferred substrate for the growth of (Al, Ga)N-based LEDs since it offers several advantages, such as its low cost, large size availability (up to 8-inch wafers), and its transparency in the UV range. Nevertheless, growing on sapphire presents several drawbacks. The high lattice mismatch (~13% for AlN, with a_AlN = 3.112 Å for the basal plane lattice parameter) and the mismatch in the thermal expansion coefficient (~43% for the basal plane) negatively impact the crystalline structural quality of the epitaxial layers, resulting in increased threading dislocation densities (TDDs). These TDDs act as non-radiative recombination centers, causing a decrease in the IQE and the EQE. Therefore, favoring radiative recombination by improvement of the carrier confinement into the active region can increase the IQE in LEDs. The growth of three-dimensional (3D) quantum dots (QDs) as the active region instead of 2D quantum wells (QWs) can help reduce the impact of non-radiative recombination of excitons with surrounding TDs, thus increasing the IQE. This is attributed to the better carrier confinement offered by the three-dimensional confinement potential of QDs, resulting in a higher probability of radiative recombination. The use of two-dimensional (2D) materials has also gained widespread attention as a potential solution to overcome the issue of lattice mismatch and the high density of threading dislocations [6]. The extended crystalline planar structures of 2D materials are characterized by robust in-plane covalent bonds and relatively weaker out-of-plane van der Waals forces. This unique bonding arrangement facilitates the straightforward extraction of individual layers by breaking the van der Waals bonds, causing minimal damage to both the extracted layer and the remaining structure [7]. This unique feature of 2D materials enables van der Waals epitaxy to exploit them, greatly reducing the lattice mismatch of conventional heterostructure
growth methods [6]. Among the various options of 2D materials, hexagonal boron nitride (h-BN) stands out as highly suitable due to its chemical compatibility with AlN or (Al, Ga)N-based epitaxial layers. Moreover, h-BN templates can serve as mechanical release layers, enabling the transfer of nitride-based LEDs to suitable substrates and thereby facilitating the development of flexible devices [8]. However, growing high-quality III-nitride films on h-BN is challenging due to the lack of dangling bonds on the surface, which complicates the nucleation step, as seen in the formation of randomly oriented, polycrystalline, isolated islands in gallium nitride (GaN) growth on h-BN [8]. AlN, on the other hand, with the lower mobility and higher sticking coefficient of Al adatoms, may serve as a more favorable nucleation layer on h-BN [9,10].
Research on MBE growth of (Al, Ga)N heterostructures on h-BN is limited. Our group recently used face-to-face high-temperature annealing (FFA) to enhance the crystalline quality and surface morphology of AlN layers grown by MBE on h-BN/sapphire templates [11]. Previous studies on nitride heterostructures grown on h-BN mainly used metal-organic vapor phase epitaxy (MOVPE) growth modes. In their study, Qingqing Wu et al. conducted the growth of AlN and deep ultraviolet (DUV) LEDs on monolayer h-BN, which was transferred onto a sapphire substrate after being grown via low-pressure chemical vapor deposition (CVD) on a copper (Cu) foil substrate [12]. The same research group also reported the successful development of crack-free crystalline AlN and DUV LEDs emitting at 281 nm on multilayer h-BN grown by MOVPE [13]. These findings highlight the advantages of utilizing multilayer h-BN for achieving high-quality epilayers and devices on large surfaces. J. Shin et al. reported the fabrication of vertical full-color micro-LEDs using 2D material-based layer transfer techniques, allowing for the mechanical transfer of LED layers from the 2D materials and the reuse of the substrate [14]. However, MBE growth of QD-based (Al, Ga)N heterostructures on h-BN for UV-C flexible LED fabrication has not been reported yet and was the subject of investigation in this study.
This study focused on the growth of AlN, AlxGa1-xN thick layers, and AlyGa1-yN QD layers using MBE on h-BN/sapphire templates. Additionally, the impact of increasing the thickness of the h-BN layers on the surface morphology of the AlN epitaxial layers was investigated. The h-BN layers, with thicknesses of 3 nm and 6 nm, were directly grown on 2-inch sapphire substrates using MOVPE. The impact of h-BN on the growth of the AlN/AlxGa1-xN layers was investigated, showing two different orientations in the growth plane. Finally, the structural and optical properties of the AlyGa1-yN QDs were studied, showing a homogeneous distribution of QDs and a successful emission below 280 nm in the UV-C, thus paving the way for the fabrication of flexible QD-based AlxGa1-xN UV-C LEDs.
Materials and Methods
The growth of h-BN templates with thicknesses of 3 nm and 6 nm was performed using an Aixtron MOVPE close-coupled showerhead (CCS) reactor on (0 0 0 1) sapphire substrates. The process was carried out at a temperature of 1280 °C and a pressure of 90 mbar. Triethylboron (TEB) and ammonia (NH3) were employed as precursor gases for boron (B) and nitrogen (N), respectively. More comprehensive information on the growth conditions for h-BN can be found in previously published reports [15]. The growth of AlxGa1-xN/AlN structures on the h-BN templates was performed using MBE in a RIBER 32P reactor. Solid sources of the III-elements (Al, Ga) and NH3 as a nitrogen source were utilized, except for the QD active region, where NH3 was replaced by a nitrogen (N2) plasma source. This substitution was necessary to enable the formation of QDs and 3D islands, as NH3 would result in a 2D growth mode that inhibits their formation [16]. Under N2, a 2D-3D "Stranski-Krastanov" growth mode can be achieved [16,17], allowing for the growth of AlyGa1-yN QDs on AlxGa1-xN (with x > y). Two structures, known as sample A and sample B, were fabricated (see Figure 1). The fabrication process for both samples was similar, except for the thickness of the AlxGa1-xN layer. For both samples, a 200 nm thick AlN layer was initially grown under the following conditions: a 10 nm thick AlN buffer layer was grown at 1070 °C with an ammonia flow rate of 50 sccm and a growth rate of 50 nm/h. Subsequently, the 200 nm AlN layer was grown at 1120 °C with an ammonia flow rate of 50 sccm and a growth rate of 100 nm/h. Following this, a 500 nm thick Al0.7Ga0.3N layer was grown at 870 °C for sample A and a 330 nm thick layer for sample B, both at a growth rate of 290 nm/h. Next, an active region consisting of six Al0.3Ga0.7N QD planes with an Al nominal composition (n.c.)
of 0.3, separated by Al0.7Ga0.3N barriers lattice-matched to the Al0.7Ga0.3N template, was deposited. The equivalent 2D thickness of the Al0.3Ga0.7N (n.c.) QDs was 7 monolayers (MLs), approximately 1.8 nm, with 1 ML corresponding to half the c-lattice parameter, considering a variation of the lattice parameter following Vegard's law between AlN and GaN. In between the QD planes, 5 nm thick Al0.7Ga0.3N cladding layers were grown for both samples. After the deposition of the fifth QD plane, a 30 nm thick Al0.7Ga0.3N layer was grown at 820 °C. Finally, the sixth and last QD plane was deposited on the surface of the top cladding layer for both samples.
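The quoted 7 ML ≈ 1.8 nm figure can be reproduced with Vegard's law; the c-lattice parameters below are assumed literature values (c_AlN ≈ 4.982 Å, c_GaN ≈ 5.185 Å), not taken from the text.

```python
# Estimate the QD-plane thickness from Vegard's law. The c-axis lattice
# parameters are assumed literature values, not stated in the text.
c_AlN, c_GaN = 4.982, 5.185          # c-axis lattice parameters (Angstrom)
x_Al = 0.3                           # nominal Al content of the QDs

# Vegard's law: linear interpolation of the alloy lattice parameter
c_alloy = x_Al * c_AlN + (1.0 - x_Al) * c_GaN

ml = c_alloy / 2.0                   # 1 monolayer = half the c parameter
t_7ml = 7 * ml / 10.0                # 7 MLs, converted from Angstrom to nm
print(c_alloy, t_7ml)                # ~5.12 Angstrom and ~1.8 nm
```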
Atomic force microscopy (AFM) EDGE-DIMENSION (BRUKER, Billerica, MA, USA), operating in tapping mode with a silicon tip with a radius between 5 and 10 nm, was used to study the surface morphologies of h-BN and AlN. In addition, a diamond-coated tip with a typical radius between 5 and 10 nm was also used to investigate the morphology of the surface QDs plane, and all data were processed using WSxM software [18]. Furthermore, X-ray diffraction (XRD) measurements were conducted using a PANalytical X'Pert PRO MRD four-circle diffractometer (Malvern Panalytical, Malvern, United Kingdom) to assess the crystalline quality of the AlxGa1-xN/AlN layers and examine the impact of h-BN on the subsequent layers. Regarding the optical characteristics, continuous-wave PL measurements were carried out at room temperature (RT) and low temperature (LT), i.e., at 300 K and 12 K, in a closed-cycle helium (He) cryostat using a frequency-doubled argon (Ar) laser at 244 nm (5.08 eV) with an excitation power of 20 mW.
Results
In the first part of this section, the growth of AlN on h-BN templates and the study of its surface roughness evolution as a function of the AlN thickness are presented. The second part is devoted to an in-depth XRD characterization of Al0.7Ga0.3N, as well as the Al0.3Ga0.7N (n.c.) QDs' main structural and optical properties, including a surface morphology study by AFM and PL measurements.
Characterization of h-BN Templates before Growth
To analyze the surface morphology, the 3 nm and 6 nm thick h-BN layers on sapphire samples were initially examined. AFM topographic images of (10 × 10) µm² and (2 × 2) µm², along with their corresponding root-mean-square (RMS) values, are depicted in Figure 2a,i.
Figure 2a is a (10 × 10) µm² AFM scan that shows the 3 nm h-BN layer covering the entire wafer surface, with a measured surface RMS roughness of 1.1 nm. The inset is a (2 × 2) µm² AFM scan with a measured surface RMS roughness of 1.1 nm. Figure 2i instead shows the 6 nm h-BN layer, with a surface RMS roughness of 1.3 nm. The inset of the (2 × 2) µm² scan is similar to the one in Figure 2a, with a surface RMS roughness of 1.2 nm. The occurrence of wrinkles, represented by the white segments observed in the AFM images, is a common characteristic of 2D materials. Regarding h-BN, these wrinkles are primarily caused by the variation in the thermal expansion coefficient (TEC) between h-BN and sapphire substrates. During the cooling process, this TEC mismatch leads to the generation of compressive strain within the h-BN layer, resulting in the formation of wrinkles. The wrinkling instability facilitates the release of energy, resulting in the creation of surface roughness in the sample [19].
AlN Growth on h-BN/Sapphire Templates by MBE
We proceeded to grow AlN (total thickness of 200 nm) on both the 3 nm and 6 nm thick h-BN/sapphire templates by MBE. The 200 nm layers were grown in three steps: a 50 nm layer first, followed by another 50 nm layer and a final 100 nm layer. This stepwise growth was carried out to study the evolution of the AlN surface morphology on h-BN with increasing thickness. Figure 2b,c,ii,iii illustrate the surface morphology evolution as the AlN thickness increased from 50 nm to 200 nm for both samples. For sample A, the initial surface morphology of the 50 nm thick layer was rough, with an RMS of 3 nm over a (10 × 10) µm 2 scan range. It was dominated by a high density (~1.7 × 10 9 cm −2 ) of 3D island-like structures (height ~32 nm ± 14 nm and lateral size ~119 nm ± 30 nm). After the thickness was increased to 200 nm, the surface morphology became rougher, with the RMS increasing to 3.5 nm. On the other hand, the density of the 3D island-like structures decreased to ~5.4 × 10 8 cm −2 , as did their height (~23 nm ± 8 nm), while their lateral size increased (~152 nm ± 40 nm). In addition, the surface in-between the islands, observed in the (2 × 2) µm 2 inset image in Figure 2ii, was smoother than over the (10 × 10) µm 2 scan range, with an RMS of 1.9 nm. Regarding sample B, the initial surface morphology of the 50 nm thick layer was also rough, with an RMS of 2 nm over a (10 × 10) µm 2 scan range, and was likewise dominated by 3D island-like structures (density ~3 × 10 8 cm −2 , height ~22 nm ± 7 nm, and size ~84 nm ± 16 nm). After the thickness was increased to 200 nm, the surface roughness also increased, to an RMS of 2.7 nm, mirroring sample A's surface evolution. The island density decreased to ~9.1 × 10 7 cm −2 , but the island height and lateral size increased (~40 nm ± 15 nm and ~124 nm ± 20 nm). The surface in-between the islands, observed in the (2 × 2) µm 2 inset image in Figure 2iii, was very smooth compared to the (10 × 10) µm 2 scan range, with an RMS of 0.5 nm. With increasing AlN thickness, the island density decreased while the individual islands grew larger; as a result, the surface between the islands became smoother, improving the small-scale RMS roughness. A difference nevertheless remains between AlN growth on the 3 nm and 6 nm thick h-BN templates: smoother AlN layers with lower island density were obtained on the 6 nm h-BN template, despite both h-BN templates having the same initial surface RMS roughness of 1.2 nm.
3.2. Al 0.3 Ga 0.7 N/Al 0.7 Ga 0.3 N QDs Structural and Optical Properties
3.2.1. Morphological Properties
After the growth of the Al 0.7 Ga 0.3 N layers and Al 0.3 Ga 0.7 N QD active region, AFM was performed in order to investigate the QDs' structural properties. Figure 2d,iv show AFM images of samples A and B for (500 × 500) nm 2 and (200 × 200) nm 2 scan ranges.
The QDs' densities, heights, and diameters were determined from AFM measurements for both samples. For sample A, the QDs' height ranged between 6 MLs and 8 MLs, with the largest QD population at a height of 7 MLs based on the histogram (red sticks) in Figure 3a; the average QD diameter was around 13 nm ± 3 nm. For sample B, the QDs' height likewise ranged between 6 MLs and 8 MLs based on the histogram (blue sticks) in Figure 3b, again peaking at 7 MLs, with an average QD diameter around 15 nm ± 2 nm. These QD height results are similar to previous results obtained by our group on Al y Ga 1-y N QDs grown by MBE [20]. The QD density was estimated at 4 × 10 11 cm −2 for both samples. Such high QD densities (>10 11 cm −2 ) are usually observed for Al y Ga 1-y N QDs compared to GaN QDs [20], which is attributed to the lower surface mobility of Al adatoms compared to Ga adatoms. Meanwhile, both samples presented domains on their surface separated by surface depressions (depth ~8 Å for both samples) with a density ranging from around 3 × 10 7 to 4 × 10 7 cm −1 . XRD measurements indicated that the (Al, Ga)N structures experience tensile strain at room temperature (not shown). The thermal expansion difference between h-BN and AlN could be at the origin of this tensile strain, contrary to the compressive strain typically observed during the growth of AlN/AlGaN on sapphire. Strain effects could therefore underlie the specific surface morphology of the Al 0.7 Ga 0.3 N layers, related to the interplay between sapphire, h-BN, and the (Al, Ga)N materials, and the formation of the surface depressions, the study of which goes beyond the scope of this work.
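The QD heights above are given in monolayer (ML) units, which Figure 3's caption defines as 1 ML = 0.257 nm. A small arithmetic sketch converting the reported 6-8 ML heights to nanometers (the 13 nm diameter used for the aspect ratio is sample A's reported average):

```python
# Convert QD heights given in monolayers (MLs) to nanometers,
# using 1 ML = 0.257 nm as stated in the Figure 3 caption.
ML_NM = 0.257

def ml_to_nm(n_ml):
    return n_ml * ML_NM

for n in (6, 7, 8):
    h = ml_to_nm(n)
    # Height-to-diameter ratio for sample A's average diameter of ~13 nm.
    print(f"{n} MLs -> height {h:.2f} nm, height/diameter ~ {h / 13:.2f}")
```

The dominant 7 ML height thus corresponds to QDs roughly 1.8 nm tall and an order of magnitude wider than tall, typical flat island geometries for Stranski-Krastanov-type dots.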
Crystal Properties by X-ray Diffraction
Figure 4 illustrates the 2θ-ω X-ray diffraction diagram for samples A and B, ranging from 10° to 170°, in order to study all the layers' orientations along the growth direction.
For both samples, only the (0 0 0 1) orientation was observed for the nitride layers: at 36° the Al x Ga 1-x N and AlN (0 0 0 2) peaks were merged. The Al x Ga 1-x N and AlN (0 0 0 4) peaks were seen at 75.3° and 76.5°, respectively, and the (0 0 0 6) peaks were well separated at 132.9° and 136.38°. The thinner peaks correspond to the sapphire substrate: the (0 0 0 6) and (0 0 0 12) reflections were observed with high intensities, and three forbidden reflections of sapphire were detected. It should be mentioned that two peaks, observed at 17.8° and 44.4°, came from the sample holder itself, since the studied samples are smaller than the X-ray beam footprint.
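The (0 0 0 l) peak positions can be cross-checked with Bragg's law, 2d sin θ = λ with d = c/l for a (0 0 0 l) reflection. The sketch below assumes a Cu Kα source (λ = 1.5406 Å, not stated in the text) and an approximate AlN lattice parameter c ≈ 4.98 Å, and reproduces the AlN peak positions quoted above to within a fraction of a degree:

```python
import math

WAVELENGTH_A = 1.5406  # Cu K-alpha wavelength in angstroms (assumed source)
C_ALN_A = 4.98         # approximate AlN c lattice parameter, angstroms

def two_theta_000l(l, c=C_ALN_A, wl=WAVELENGTH_A):
    """Bragg 2-theta (degrees) for a (0 0 0 l) reflection, where d = c/l."""
    d = c / l
    return 2.0 * math.degrees(math.asin(wl / (2.0 * d)))

for l in (2, 4, 6):
    print(f"(0 0 0 {l}): 2theta ~ {two_theta_000l(l):.1f} deg")
```

The Al x Ga 1-x N peaks sit at slightly lower angles than AlN because the alloy's c parameter is slightly larger, which is why the (0 0 0 4) and (0 0 0 6) pairs resolve while the (0 0 0 2) pair does not.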
Figure 5 illustrates the XRD Phi scans (in-plane rotation of the sample) of the (1 0 −1 1) Al 0.7 Ga 0.3 N layer and the (1 0 −1 4) sapphire substrate performed on sample A. Twelve peaks were observed for the Al 0.7 Ga 0.3 N layer instead of the six peaks expected for a hexagonal structure. This can be explained by the presence of two domains twisted by 30° in the growth plane.
When (Al, Ga)N hexagonal layers are grown directly on sapphire, the (Al, Ga)N unit cell is rotated by 30° with respect to that of sapphire. The sharp peaks therefore correspond to the classic orientation of (Al, Ga)N on sapphire, while the wide peaks represent a new in-plane orientation rotated by a further 30°, creating two different orientations in the growth plane. The h-BN layer permitted this structural arrangement, since such an orientation cannot occur on bare sapphire.
The study continued on the Al 0.7 Ga 0.3 N (1 0 −1 1) plane by performing two omega scans, at Phi = 0° and Phi = 30°. The two ω scans showed different intensities and FWHM values, as illustrated in Figure 6. The ω scan peak in red (Phi = 0°) corresponds to the Al 0.7 Ga 0.3 N orientation induced by the presence of h-BN on the sapphire surface. The FWHM of this peak is 20°, indicating crystalline domains with a high defect density. On the other hand, the ω scan peak in black (Phi = 30°) corresponds to the Al 0.7 Ga 0.3 N orientation typically observed on sapphire. However, it presented an unusual peak shape: careful analysis shows that this peak sits on a base similar to the large red peak (Phi = 0°). This feature suggests that two domains are present in this classical Al 0.7 Ga 0.3 N orientation: one, corresponding to the narrower part of the peak (FWHM = 7°), of better crystalline quality than the domain corresponding to the wider part of the peak (FWHM = 20°).
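The decomposition just described, a narrow component (FWHM 7°) sitting on a broad base (FWHM 20°), can be modeled as a sum of two Gaussians. A sketch (not the authors' fitting procedure) using scipy's curve_fit on synthetic data with hypothetical amplitudes:

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, amp, fwhm):
    sigma = fwhm / 2.3548  # FWHM = 2*sqrt(2*ln 2)*sigma
    return amp * np.exp(-x**2 / (2 * sigma**2))

def two_gauss(x, a1, f1, a2, f2):
    # Narrow component (better-quality domain) + broad base (defective domain).
    return gauss(x, a1, f1) + gauss(x, a2, f2)

omega = np.linspace(-30, 30, 601)
# Synthetic rocking curve: narrow FWHM = 7 deg on a broad FWHM = 20 deg base;
# the amplitudes 1.0 and 0.4 are illustrative, not measured values.
signal = two_gauss(omega, 1.0, 7.0, 0.4, 20.0)
popt, _ = curve_fit(two_gauss, omega, signal, p0=[0.8, 5.0, 0.5, 25.0])
a1, f1, a2, f2 = popt
print(f"narrow FWHM ~ {f1:.1f} deg, broad FWHM ~ {f2:.1f} deg")
```

On real data, the relative integrated areas of the two components would estimate the volume fractions of the two domains.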
Continuous wave photoluminescence
In order to study the optical characteristics of the Al 0.3 Ga 0.7 N QDs, continuous-wave PL measurements at low temperature (LT, 12 K) and room temperature (RT, 300 K) were performed and compared for both samples using the same excitation conditions. Figure 7 shows the PL spectra at 12 K and 300 K for both samples.
For sample A, with a QD nominal composition y of 0.3, two PL peaks were observed at LT. The main PL peak, with the highest intensity, had an emission in the UV-C region at 275 nm (4.5 eV); it originates from exciton radiative recombination in the Al 0.3 Ga 0.7 N QDs. In addition to this main peak, a lower-intensity, low-energy shoulder was observed at 283 nm (between 4.3 eV and 4.4 eV). The PL intensity is modulated by Fabry-Perot interference fringes caused by the large refractive-index differences at the (Al, Ga)N/sapphire and air/(Al, Ga)N interfaces. As shown in Figure 7a, only a weak decrease in both QD PL peaks was observed from LT to RT, confirming the strong 3D carrier confinement in the Al 0.3 Ga 0.7 N QDs. The FWHM of the PL peak at RT was 18 nm.
On the other hand, sample B, with the same QD nominal composition y of 0.3 as sample A, showed one clear PL peak at both LT and RT in the UV-C region, at the very close wavelength of 280 nm (4.43 eV). As shown in Figure 7b, the QD PL peak intensity decreased from LT to RT. The FWHM of the PL peak at RT was 16 nm, roughly similar to sample A.
From these measurements, the spectrally integrated PL intensity ratio (I(300 K)/I(12 K)) of the QD emission peak between RT and LT was found to be roughly 10%. This is rather high compared to the case of QWs on high-dislocation-density (Al, Ga)N materials, where it is below 10 −2 [21]. In addition, in comparison with a reference Al y Ga 1-y N QDs/Al x Ga 1-x N structure grown directly on AlN/sapphire, the PL integrated intensity was found to be between 15 and 45 times lower for Al y Ga 1-y N QDs grown on h-BN (see Supplementary Material). This result indicates that there is still room for improvement in the structural design and growth condition optimization of both the h-BN and (Al, Ga)N materials for UV-C LEDs.
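The spectrally integrated intensity ratio is simply the ratio of the areas under the two PL spectra. A sketch of the calculation on synthetic Gaussian spectra (the peak position and FWHM match sample B's reported values; the RT amplitude is chosen so the ratio lands near the ~10% reported, and is not a measured value):

```python
import numpy as np

wl = np.linspace(250.0, 320.0, 1401)  # wavelength grid, nm (uniform spacing)

def integrated_intensity(wavelength_nm, counts):
    """Spectrally integrated PL intensity: area under the spectrum
    (rectangle-rule sum on a uniform wavelength grid)."""
    dx = wavelength_nm[1] - wavelength_nm[0]
    return float(np.sum(counts) * dx)

def gaussian_peak(center_nm, fwhm_nm, amplitude):
    sigma = fwhm_nm / 2.3548
    return amplitude * np.exp(-((wl - center_nm) ** 2) / (2 * sigma ** 2))

# Synthetic spectra: same peak (280 nm, FWHM 16 nm) at both temperatures.
spec_12k = gaussian_peak(280.0, 16.0, 1.0)
spec_300k = gaussian_peak(280.0, 16.0, 0.10)
ratio = integrated_intensity(wl, spec_300k) / integrated_intensity(wl, spec_12k)
print(f"I(300 K)/I(12 K) ~ {ratio:.2f}")
```

Integrating over the full spectrum, rather than comparing peak heights, makes the ratio insensitive to temperature-dependent broadening of the emission line.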
Discussion
The proposed study, which focused on both 3 nm and 6 nm h-BN thicknesses, aimed to explore the impact of such h-BN templates on the nucleation and growth of (Al, Ga)N-based heterostructures and quantum dot (QD) active regions targeting UV-C emitters. The choice of these specific thicknesses ensured complete coverage of h-BN on the sapphire substrate, enabling efficient nucleation of the AlN layer while opening the perspective of a possible subsequent exfoliation of the layer, as already reported in a previous study by Prof. Ougazzaden's group [22]. By employing ammonia-based MBE, the growth of AlN on h-BN/sapphire templates resulted in Al-polar layers with predominantly 2D surfaces, although 3D islands disrupted the otherwise flat morphology. As the AlN thickness was increased from 50 nm to 200 nm in total, the surface RMS roughness estimated by AFM over a large (10 × 10) µm 2 scan range increased for both samples, while it decreased over a small (2 × 2) µm 2 scan range. This feature originates from large 3D AlN islands resulting from an initial 3D growth mode (see Supplementary Material), which have also been observed for AlN growth on sapphire [23] and which lead to a substantial increase in the large-scale surface RMS roughness. As the AlN layer thickness increased and the 3D island density decreased, the surface RMS roughness at a smaller scale was reduced, consequently leading to low RMS values for (2 × 2) µm 2 scans in-between the 3D islands. After the growth of the Al 0.7 Ga 0.3 N layers and Al 0.3 Ga 0.7 N QD active region, AFM measurements showed a homogeneous distribution of QDs, with the majority of QDs having a height of 7 MLs and a density around 4 × 10 11 cm −2 for both samples. In addition, the XRD symmetric 2θ-ω scans of samples A and B showed that all the layers in both samples were grown along the growth axis (c-axis). On the other hand, the XRD 360° Phi scan for sample A showed twelve peaks instead of the usual six observed for Al x Ga 1-x N structures grown on sapphire. After performing a Phi scan on the sapphire template, it was found that the six additional peaks came from another domain rotated in the growth plane by 30° with respect to the domain originating from the wurtzite (Al, Ga)N layers grown on sapphire. This is due to the h-BN template, which created two different orientations in the growth plane. This result confirms the orientation guidance from h-BN observed in a previous study by S. Sundaram et al., where III-nitrides were grown on h-BN on c-plane sapphire [24]. They observed that the degree of misorientation in the III-nitrides depends on the crystalline quality of the h-BN; beyond a certain crystallinity limit of h-BN, the III-nitrides could even be amorphous. This observation shows the importance of growing h-BN layers of good crystalline quality.
The QDs' PL emission showed a maximum intensity in the UV-C range, between 275 and 280 nm, at room temperature, with an FWHM of around 16-18 nm, which is attributed to fluctuations in both QD size and composition (Al concentration) [25].
In our study, the structural properties of (Al, Ga)N layers grown on h-BN were compared to those obtained in a previous study by our group, in which we investigated the growth of Al x Ga 1-x N/Al y Ga 1-y N QDs on sapphire by MBE. Based on X-ray measurements, the FWHM of the Al x Ga 1-x N layers grown on sapphire [20] was significantly narrower than that of the Al x Ga 1-x N layers grown on h-BN. However, and importantly, the temperature dependence of the PL integrated intensity of Al 0.3 Ga 0.7 N QDs/Al 0.7 Ga 0.3 N on sapphire was similar to that on h-BN, with a PL integrated intensity ratio between LT and RT of the same order of magnitude [20]. Put together, these results suggest that while there may be significant differences in the structural properties of heterostructures grown on sapphire versus h-BN templates, the temperature-dependent behavior and overall PL characteristics of UV-C-emitting Al 0.3 Ga 0.7 N QDs/Al 0.7 Ga 0.3 N active regions remain comparable. We propose that the temperature dependence of the PL integrated intensity of Al y Ga 1-y N QDs is not strongly influenced by the crystalline quality of the material, which can be attributed to the robustness of the QDs resulting from their 3D confinement of excitons at the nanoscale. It is worth noting that variations in PL characteristics can still be observed among different samples, including those grown directly on sapphire, but with a magnitude limited to a factor of 10 between the most and least radiatively efficient samples (see Supplementary Material).
Conclusions
Al 0.3 Ga 0.7 N QD-based structures emitting in the UV-C range were grown by MBE on h-BN/sapphire templates with two different h-BN thicknesses: 3 nm and 6 nm. The structural properties of the Al 0.7 Ga 0.3 N layers showed an impact of the h-BN layers on the subsequent structures in the form of orientation guidance, creating two different orientations in the growth plane and affirming the crucial importance of growing high-crystalline-quality h-BN layers. Optical characterization showed that the QDs of samples A and B emit in the UV-C, with maximum intensity between 275 nm (4.5 eV) and 280 nm (4.43 eV). In addition, the spectrally integrated PL intensity ratio (I(300 K)/I(12 K)) of the QD emission peak between RT and LT was determined to be around 10% for both samples. This study demonstrated the successful growth of UV-C-emitting Al 0.3 Ga 0.7 N QD structures by MBE on h-BN/sapphire templates. This result will enable us to move to the next step, the fabrication of QD-based UV-C LEDs and their exfoliation, hence demonstrating the possible fabrication of flexible UV-C LEDs.
Supplementary Materials:
The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/nano13172404/s1, Figure S1: Reflection high-energy electron diffraction (RHEED) images, along the <1 − 100> direction, of the AlN growth on h-BN/sapphire templates; Figure S2: RMS roughness values evaluated as a function of the AlN thickness for samples A and B; Figure S3: Photoluminescence measurements of Al y Ga 1-y N QD plane at room temperature (300 K) for different samples; Table S1: Wavelength emission and PL integrated intensity ratio between LT and RT for different Al y Ga 1-y N QDs/Al x Ga 1-x N structures grown on sapphire and on h-BN; Table S2: X-ray rocking curve diffraction measurements of (0 0 0 2) and (1 0 −1 3) symmetric and asymmetric reflections for Al 0.7 Ga 0.3 N structures grown on sapphire and Al 0.7 Ga 0.
Figure 1 .
Figure 1.Schematics of the two Al y Ga 1-y N quantum dot (QDs) structures grown on h-BN/sapphire templates.(a) Al y Ga 1-y N QDs structure grown on 3 nm h-BN on sapphire (sample A).(b) Al y Ga 1-y N QDs structure grown on 6 nm h-BN on sapphire (sample B).
Figure 2 .
Figure 2. Atomic force microscopy images of (a) 3 nm thick h-BN layer grown on sapphire c-plane substrates, (b) 50 nm AlN grown on 3 nm h-BN, (c) 200 nm AlN grown on 3 nm h-BN, (d) Al 0.3 Ga 0.7 N QDs morphology on the surface of sample A, (i) 6 nm thick h-BN layer grown on sapphire c-plane substrates, (ii) 50 nm AlN grown on 6 nm h-BN, (iii) 200 nm AlN grown on 6 nm h-BN, and (iv) Al 0.3 Ga 0.7 N QD morphology on the surface of sample B. (a-c,i-iii) are (10 × 10) µm 2 images and the inset shows (2 × 2) µm 2 scan images, while (d,iv) are (500 × 500) nm 2 scan images with an inset of (200 × 200) nm 2 scan images.The term Z represents the vertical scale and the variation in height in the AFM images, and its value is reported at the bottom right-hand side of each image.
Figure 3 .
Figure 3. Histograms showing the QDs' height distributions as a function of monolayer (ML) units (1 ML = 0.257 nm) for (a) sample A and (b) sample B.
Nanomaterials 2023
Figure 7 .
Figure 7. Photoluminescence (PL) spectrum of Al 0.3 Ga 0.7 N QDs with an Al 0.3 Ga 0.7 N deposited amount of 7 MLs at 12 K and 300 K for samples A and B. (a) The PL intensity has been multiplied by 7 for the spectrum obtained at 300 K. (b) The PL intensity has been multiplied by 13 for the spectrum obtained at 300 K.
Identification of Genes Involved in Polysaccharide-Independent Staphylococcus aureus Biofilm Formation
Staphylococcus aureus is a potent biofilm former on host tissue and medical implants, and biofilm growth is a critical virulence determinant for chronic infections. Recent studies suggest that many clinical isolates form polysaccharide-independent biofilms. However, a systematic screen for defective mutants has not been performed to identify factors important for biofilm formation in these strains. We created a library of 14,880 mariner transposon mutants in a S. aureus strain that generates a proteinaceous and extracellular DNA based biofilm matrix. The library was screened for biofilm defects and 31 transposon mutants conferred a reproducible phenotype. In the pool, 16 mutants overproduced extracellular proteases and the protease inhibitor α2-macroglobulin restored biofilm capacity to 13 of these mutants. The other 15 mutants in the pool displayed normal protease levels and had defects in genes involved in autolysis, osmoregulation, or uncharacterized membrane proteins. Two transposon mutants of interest in the GraRS two-component system and a putative inositol monophosphatase were confirmed in a flow cell biofilm model, genetically complemented, and further verified in a community-associated methicillin-resistant S. aureus (CA-MRSA) isolate. Collectively, our screen for biofilm defective mutants identified novel loci involved in S. aureus biofilm formation and underscored the importance of extracellular protease activity and autolysis in biofilm development.
Introduction
Staphylococcus aureus is a human commensal and the causative agent of diverse acute and chronic bacterial infections. The chronic infections persist and cause significant morbidity and mortality to the patient due to the development of a recalcitrant biofilm structure. Compared to the free-living (planktonic) state, S. aureus living in biofilms exhibit significant differences in gene expression and physiology [1], and the close proximity of organisms may also allow cooperative metabolic functions, promote horizontal gene transfer, and facilitate cell-to-cell communication [2,3]. The most notorious biofilm characteristic is their extraordinary resistance to antimicrobial killing [4]. In a recent comparison, we observed a six-log difference in cell viability in the presence of antibiotics of an S. aureus biofilm versus planktonic cells [5].
Despite the important role of S. aureus biofilms in disease, our understanding of the molecular mechanisms contributing to biofilm formation is incomplete. Recent studies of S. aureus biofilm development suggest that the extracellular matrix consists of proteins, DNA, and/or polysaccharide (also called the polysaccharide intercellular adhesin or PIA). In support of this proposal, compounds capable of dissolving matrix components (proteases, DNAse, or glycoside hydrolases) can disrupt established biofilms or prevent the formation of a biofilm [5,6,7,8]. Recently, it has become evident that emerging clinical S. aureus isolates are not reliant on PIA for biofilm formation [7,9,10]. Protein-mediated biofilm formation has emerged as a prominent alternative to PIA, and many surface adhesins, such as Bap [11], Spa [12], FnBPA and FnBPB [13], and SasG [14,15], have been implicated in this divergent biofilm mechanism. Biofilms produced by these PIAindependent strains are unaffected by polysaccharide-degrading enzymes, such as dispersin B [6], or mutations in the ica gene locus that generates PIA [1,5,7,13]. Therefore, we set out to uncover PIA-independent mechanisms of biofilm formation.
Here we report the generation of a mariner transposon mutant library of 14,880 mutants in a S. aureus strain that develops a biofilm by a PIA-independent mechanism. This library was screened for reduced biofilm formation in a microtiter assay and numerous novel loci were identified. This work expands our understanding of genetic factors controlling biofilm formation and may provide potential targets for therapeutic intervention.
Identification of mariner transposon mutants defective in biofilm formation
To identify genes involved in PIA-independent biofilm formation, we mutagenized S. aureus strain SH1000 using the bursa aurealis (mariner) transposon mutagenesis system [16]. This strain was chosen for mutagenesis and screening because it forms PIA-independent biofilms and is readily amendable to genetic manipulation [5]. Preliminary testing of the mariner transposon system in strain SH1000 was shown to be successful [10]. Altogether, a transposon mutant library of 14,880 mutants was created and banked for further analysis.
Initial screening for biofilm formation in microtiter plates yielded 91 mutants with a defect in attachment. Mutants with severe growth defects were eliminated from further analysis and the remaining potential biofilm mutants were reexamined for reproducibility in the biofilm assay. Thirty-one mutants displayed reproducible biofilm formation defects and arbitrary PCR and sequencing was used to map the insertion location (Table 1). Transposon insertions resulting in a biofilm phenotype were mapped to a variety of genetic loci. Some of the loci, such as altA, fmtA, graS, rsbU and rsbV have previously been shown to be important in S. aureus biofilm formation [10,17,18,19]. The remaining mutants had insertions in genes not previously identified to be involved in biofilm formation.
Extracellular protease activity of biofilm mutants
Recent studies have demonstrated that protease activity can have antibiofilm effects [5,7,20,21]. To determine whether any of the biofilm-defective mutants produced increased levels of extracellular protease activity, we assayed protease levels in culture supernatants of the biofilm mutants (Fig. 1). We found that 16 of the 31 transposon mutants displayed increased protease activity in cell-free supernatants. We hypothesized that this increase in extracellular protease production could be the cause of the biofilm defect in these mutants. To test this hypothesis, the general protease inhibitor α2-macroglobulin was added to mutant strains from the beginning of biofilm growth [15]. The addition of this protease inhibitor restored biofilm formation in 13 of the 16 protease-overproducing mutants (blue bars, Fig. 2b). Recovery of the rsbU and rsbV insertions with α2-macroglobulin supports our previous report, in which the biofilm defect of a sigB deletion was recovered with the same inhibitor [10]. The three protease-overproducing mutants that did not form a biofilm in the presence of α2-macroglobulin had insertions in a two-component system (graS, mutant 37 C9) or in a probable inositol monophosphate phosphatase (hereafter called "imp"; mutants 60 G7 and 64 G10). In mutants that displayed wild-type levels of extracellular protease activity, the addition of α2-macroglobulin had no effect. Collectively, these results indicate that extracellular protease activity is an important contributor to the biofilm defect in 13 of the biofilm mutants.
Extracellular nuclease activity of biofilm mutants
Evidence in many different bacterial species, including S. aureus and CA-MRSA, indicates that extracellular DNA (eDNA) is an important component of the biofilm matrix [6,8,9,22,23,24,25,26]. Consistent with the key role of eDNA, the expression and extracellular activity of the S. aureus thermostable nuclease has profound effects on biofilm maturation [6,8]. Based on these results, we hypothesized that some mutants may be unable to form biofilms due to increased nuclease expression. To address this hypothesis, a phenotypic plate assay was employed to assess the levels of extracellular nuclease activity in biofilm defective mutants. After testing all 31 mutants, four mutants in the pool, 3 C5 (rsbU), 37 G12 (rsbV), 60 G7 (imp) and 64 G10 (imp), had increased levels of extracellular nuclease activity (Fig. 3). These four mutants were also tested using a quantitative nuclease assay, and the results confirmed the plate assay phenotypes (Fig. 3). The increased nuclease activity of the rsbU and rsbV mutants is consistent with microarray profiling of Sigma B defective strains [27].
Autolysis levels of biofilm mutants
The S. aureus biofilm matrix is composed in part of eDNA that has been released by autolysis [8,23]. Therefore, if lysis is altered, the change could reduce the ability of S. aureus to effectively form a biofilm. To test for this phenotype, we utilized an assay that measures autolysis as a function of β-galactosidase release into culture supernatants [23]. We focused on the 18 transposon mutants in the pool of 31 unable to form a biofilm in the presence of α2-macroglobulin (black and red bars, Fig. 2). Because some genes carried multiple insertions (Table 1), this pool was reduced to 12 insertions in unique loci, and each of these mutants was transformed with plasmid pAJ22, which constitutively expresses the lacZ gene. Cell-free culture supernatants were assayed for β-galactosidase activity at the indicated time points. During a time course, the autolysis profile of five mutants mirrored that of the wild type (data not shown), while seven of the 12 mutants showed significant changes in β-galactosidase activity (Fig. 4). Four of these seven mutants, 42 F6 (purA), 54 C2 (opuD), 123 D2 (hypothetical membrane protein), and 132 G7 (atlA), all demonstrated reduced autolysis compared to their isogenic parent. Conversely, three mutants, 51 G5 (fmtA), 37 C9 (graS), and 60 G7 (imp), displayed an increase in autolysis. Of these, the fmtA mutant showed high levels of autolysis early in the time course (12-36 hrs, Fig. 4), while lysis in the graS and imp mutants was more pronounced later in the time course (36-72 hrs, Fig. 4). The altered ability of these mutants to precisely control autolysis (and thus the release of eDNA) may contribute to their inability to effectively form biofilms.
graS and imp mutants are biofilm defective in a CA-MRSA strain
Mutations in the graS and imp genes were chosen for further study because of their unique combination of autolysis, nuclease, and protease phenotypes (Figs. 2, 3, 4). To assess the biofilm maturation phenotypes in a flowing environment, the graS and imp transposon mutants were grown in a flow cell biofilm model. As shown in Figure 5A, both the graS mutant (37 C9) and the imp mutant (60 G7) displayed profound defects in biofilm maturation in the flow cell. Importantly, the biofilm phenotype could be complemented by expressing the wild-type gene from a plasmid.
To address potential issues of strain-to-strain variation, the graS and imp transposon insertions were transduced into a CA-MRSA isolate called LAC* (erythromycin sensitive variant of LAC, see Materials and Methods). The LAC* strain is a member of the USA300 lineage [28], and previous studies have demonstrated that the strain is a robust biofilm former on diverse surfaces [9,10]. The graS and imp insertions were transduced into the LAC* strain and growth in a flow cell biofilm model was compared to wild type (Fig. 5B). Similar to the SH1000 genetic background, the LAC* graS and imp mutants were defective in biofilm maturation, and again the biofilm phenotype could be complemented through plasmid expression. These results demonstrate that the graS and imp mutant biofilm phenotypes are consistent across multiple in vitro models of biofilm maturation and the phenotype is conserved in a clinical isolate.
Discussion
The ability of S. aureus to form biofilms is an important virulence factor in many persistent infections [3,29]. Recently, it has been shown that some S. aureus clinical isolates do not require polysaccharide production to form a biofilm [1,7,9,10]. To the best of our knowledge, this type of polysaccharide-independent biofilm (termed PIA-independent) has not been systematically investigated through transposon mutagenesis. We created and screened a mariner transposon mutant library in a PIA-independent strain and identified 31 insertions that displayed reproducible biofilm defects. Our screen was large but not saturating, as some loci known to be biofilm deficient in the SH1000 background, such as sarA and dltA, were not identified (data not shown).
To understand the 31 identified biofilm mutants in more detail, various aspects of biofilm matrix production and breakdown were investigated. Given the known importance of proteinaceous material in the matrix [5,6,7], we examined the contribution of secreted proteases to the biofilm phenotype. Almost half of the mutant pool displayed an increased level of protease activity in culture supernatants (Fig. 1), and the biofilm phenotype of this subset could mostly be restored with the general inhibitor α2-macroglobulin (Fig. 2). This finding suggests a critical and perhaps underappreciated role of the extracellular proteases in biofilm maturation. In recent reports, high levels of protease activity have been linked to biofilm phenotypes in sigB and sarA mutants [10,20,21], and this activity has an additional role in biofilm dispersal [5]. Numerous potential targets for the proteases have been investigated, including the surface adhesins BAP [21,30], Spa [12], SasG [14,15], FnBPA, and FnBPB [13], and the murein hydrolases have also been suggested as targets [10].
Along with proteins in the S. aureus biofilm matrix, growing evidence indicates that eDNA is an important matrix material [8,9,23]. Microarray data have indicated that the nuc gene is down-regulated during biofilm formation as compared to planktonic cultures [1], suggesting that dysregulation of nuclease could lead to defects in biofilm formation. Transposon insertions in the rsbU, rsbV, and imp genes resulted in increased nuclease activity in culture supernatants (Fig. 3). Insertions in rsbU and rsbV eliminate SigB activity [10], and sigB mutants are known to have increased nuc gene expression [27]. However, the significance of this phenotype is not clear, as higher levels of protease activity have been linked to the biofilm phenotype [10], which is supported by the recovery of the rsbU and rsbV mutants with α2-macroglobulin (Fig. 2). Increased nuclease activity may be a factor in the imp mutant biofilm phenotype, but further analysis will be necessary to demonstrate a connection.
In order to release eDNA into the environment, S. aureus cells lyse, and the released eDNA is incorporated into the biofilm matrix [8,23]. This autolysis event is a complex and tightly controlled process that involves regulation of holins, antiholins, and murein hydrolase activity [31]. In the pool of biofilm-defective mutants, seven displayed altered autolysis levels compared to the isogenic parent. Of this subset, four displayed a reduction in autolysis activity, suggesting they failed to release adequate amounts of eDNA as biofilm matrix material. As anticipated, the mutant in the major murein hydrolase, atlA, possessed low levels of autolytic activity. Mutants in atlA in S. aureus and in Staphylococcus epidermidis (where the gene is called atlE) have previously been shown to have defects in biofilm formation [17,32], and the expression of atlA is known to increase during biofilm growth [33]. In S. epidermidis, the reduction in biofilm formation by atlE mutants has been attributed to the ability of this protein to bind vitronectin, an abundant glycoprotein found in serum [32]. Considering that no serum was present in our biofilm growth conditions, we speculate that the biofilm defect in S. aureus atlA mutants is due to decreased autolysis and DNA release.
Three other transposon insertions in an osmoprotectant transporter (OpuD homolog), purA, and a hypothetical membrane protein (mutant 123 D2) also displayed decreased autolysis. The OpuD-like protein has sequence similarity to a glycine-betaine osmoprotectant transporter that is necessary for biofilm formation in Vibrio cholerae [34]. The purA gene encodes for a putative adenylosuccinate synthetase that is upregulated during biofilm growth [1], and the purA knockout displayed the lowest level of autolysis of tested mutants (Fig. 4). Studies in Bacillus cereus revealed that a purA mutant is unable to effectively form biofilms or release eDNA, and the expression of purA is upregulated in a biofilm versus planktonically grown cells [35].
In contrast, transposon insertions in the genes fmtA, graS, and imp displayed an overall increase in autolysis (Fig. 4). FmtA is an autolysis- and methicillin-resistance-related protein that was identified in screens for increased oxacillin sensitivity in the presence of detergent [36]. Biochemical analysis demonstrated that FmtA is a penicillin-binding protein that is resistant to β-lactam inactivation [37], and the protein is known to localize to the membrane and affect cell wall structure [38]. A previous transposon screen for biofilm-defective mutants in a PIA-dependent strain identified the fmtA gene three different times [18], suggesting that the FmtA protein is important across both PIA-dependent and PIA-independent mechanisms of biofilm formation. The observation that both mutants with decreased autolysis (atlA, opuD, purA, 123 D2) and mutants with increased autolysis (fmtA, graS, imp) have biofilm defects suggests that the timing of autolysis and the accumulation of eDNA in the biofilm matrix may be critical in biofilm development.
A transposon insertion in graS of the graRS (glycopeptide resistance-associated) two-component regulatory system also displayed an increase in autolysis (Fig. 4) and a pronounced biofilm defect in both laboratory and CA-MRSA strains (Fig. 5). Mutations in graRS are known to make S. aureus hyper-susceptible to vancomycin [39], cationic antimicrobial peptides [40,41], and cell lysis-inducing agents [19,39], findings which correlate with our observed increase in graS mutant autolysis. Genes encoding proteins for D-alanylation of teichoic acids (dlt operon) and atlA are induced by the GraRS system, and graRS mutants are reported to have a microtiter-based biofilm defect [19]. The increase in graS mutant autolysis over wild-type levels did not occur until late in the time course (>24 hrs, Fig. 4), with the most significant increase after 36 hrs. In the flow cell biofilms (Fig. 5), most S. aureus attachment and preliminary biofilm formation occurs in the first 24 hrs, suggesting that reduced dlt and atlA expression are the major contributors to the graS mutant biofilm phenotype.
The final mutant that displayed increased autolysis was a transposon insertion in the imp gene, which is predicted to encode an inositol monophosphatase, an enzyme capable of dephosphorylating inositol phosphate to inositol. Mutations in imp displayed nuclease overproduction (Fig. 3) and a pronounced biofilm defect (Fig. 5) that was conserved in a CA-MRSA strain. Inositol is an important signaling molecule in eukaryotes, but its role in bacteria is not well understood [42]. The altered nuclease and autolysis levels suggest a potential regulatory role for the imp gene in S. aureus. Interestingly, an inositol monophosphatase mutant (suhB) was found in a search for biofilm-defective mutants in Burkholderia cepacia [43], suggesting this gene may play an important role in biofilm formation in diverse bacterial species. Further analysis will be necessary to define the role of the imp gene in biofilm maturation and to determine the significance of the observed nuclease and autolysis phenotypes.
Five remaining mutants did not fall into the categories of displaying increased protease or nuclease activity or having an altered autolysis phenotype. The inactivated genes encode two potential membrane proteins (mutants 12 D3 and 46 H7), a glutamate transporter (gltT), a glutamate-1-semialdehyde-2,1-aminomutase, and a glucose group IIA phosphotransferase protein. Microarray studies have demonstrated an upregulation of gltT expression during PIA-dependent biofilm growth, potential evidence of a link between GltT and biofilms [44]. At this time, the roles of these five genes and their encoded proteins in biofilm formation are unknown.
Overall, the work presented here confirms published reports and provides new insight into the genetic loci required for PIA-independent biofilm formation. Over half of the biofilm-defective mutants expressed high levels of extracellular protease activity, and a significant portion of the mutant pool either overproduced nuclease or had altered cell lysis phenotypes, indicating a major biofilm role for proteins and eDNA in the absence of exopolysaccharide. Moving forward, it will be important to assess how these various identified factors collaborate during biofilm maturation.
Materials and Methods
Strains and growth conditions
The bacterial strains used in this study are described in Tables 1 and 2. Strains of Escherichia coli were grown in Luria-Bertani broth or on Luria agar plates, and the growth medium was supplemented with ampicillin (100 µg/ml) or chloramphenicol (10 µg/ml) as needed for maintenance of plasmids. Strains of S. aureus were grown in tryptic soy broth (TSB) or on tryptic soy agar (TSA). For selection of chromosomal markers or maintenance of plasmids, S. aureus antibiotic concentrations were (in µg/ml) the following: chloramphenicol (Cam), 10; erythromycin (Erm), 10; and tetracycline (Tet), 5. All reagents were purchased from Fisher Scientific (Pittsburgh, PA) and Sigma (St. Louis, MO) unless otherwise indicated.
Strain LAC was made Erm sensitive by serial passage in TSB in order to cure the strain of the native plasmid pUSA03 that confers Erm resistance [45]. A single colony was picked and saved as the Erm-sensitive strain AH1263, and hereafter this strain is referred to as LAC*.
Recombinant DNA and genetic techniques
Restriction and modification enzymes were purchased from New England Biolabs (Beverly, MA). All DNA manipulations were performed in E. coli strain BW25141 [46]. Oligonucleotides were synthesized at Integrated DNA Technologies (Coralville, IA). Nonradioactive sequencing was performed at the DNA sequencing facility at the University of Iowa. Plasmids were transformed into S. aureus RN4220 by electroporation and moved to other strains using transduction by bacteriophage 80α as described previously [47,48]. Chromosomal markers were moved between S. aureus strains using bacteriophage 80α or Φ11 transduction.
Generation and mapping of transposon mutants
SH1000 was transformed with plasmids pFA545 and pBursa, and mutagenesis was performed as described previously [10]. Transposon mutants with secondary agr defects were removed from the pool by testing for AIP-I production as described previously [49]. Mutants were banked in deep-well microtiter plates in TSB with 10% glycerol and stored at −80°C. Transposon insertion sites were mapped using arbitrary PCR [10].
Biofilm assays
Microtiter plate biofilms and flow cell biofilms were grown as described previously [5]. For culture media, microtiter biofilms were grown in 66% TSB supplemented with 0.2% glucose, and flow cell biofilms were grown in 2% TSB supplemented with 0.2% glucose. For protease inhibition in microtiter biofilms, cells were added to the plate with α2-macroglobulin (final concentration, 0.25 units/ml; Roche). Confocal scanning laser microscopy (CSLM) and image analysis were performed as described previously [5]. Biofilms were treated with 330 nM Syto9 (LIVE/DEAD BacLight Bacterial Viability Kit; Invitrogen, Carlsbad, CA) 15 min prior to visualization.
Protease and nuclease assays
Quantitative protease activity measurements were determined using Azocoll reagent (Calbiochem, San Diego, CA) as described previously [50]. Difco DNase test agar with methyl green (BD, Franklin Lakes, NJ) was used to screen biofilm mutants for altered nuclease production. Nuclease activity in culture supernatants was measured as described by Heins et al. [51].
Autolysis assay
To determine autolysis activity, overnight cultures of each strain harboring plasmid pAJ22 [52] were diluted to an OD at 600 nm of 0.1 in TSB (no antibiotic) and grown at 37°C with shaking at 200 rpm. At various time points, supernatants from each culture were harvested by centrifugation. β-galactosidase activity in the supernatants was determined as described [53] using o-nitrophenyl-β-D-galactopyranoside as the substrate and reported in Miller units [54].
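For reference, the Miller-unit convention used to report β-galactosidase activity follows the classic calculation from Miller's assay. The sketch below is illustrative only: the absorbance values are hypothetical, and in the autolysis assay described here the activity is measured in cell-free supernatants rather than in whole cells, so the normalization term reflects the culture sampled.

```python
def miller_units(a420, a550, od600, time_min, volume_ml):
    """Classic Miller-unit calculation for a beta-galactosidase assay.

    a420      -- absorbance of the reaction at 420 nm (ONPG product)
    a550      -- absorbance at 550 nm (corrects for light scattering)
    od600     -- optical density of the assayed culture
    time_min  -- reaction time in minutes
    volume_ml -- volume of culture used in the assay (ml)
    """
    return 1000.0 * (a420 - 1.75 * a550) / (time_min * volume_ml * od600)


# Hypothetical example: a 10-minute reaction on 0.1 ml of culture at OD600 = 0.4
activity = miller_units(a420=0.500, a550=0.100, od600=0.4,
                        time_min=10, volume_ml=0.1)  # 812.5 Miller units
```

The 1.75 factor in the numerator is the standard correction subtracting cell-debris scattering (estimated at 550 nm) from the 420 nm reading.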
Plasmid construction
pCM1. Plasmid pCM1 was created to serve as a new chloramphenicol-resistant S. aureus expression vector. This plasmid was constructed by moving the chloramphenicol acetyltransferase gene into plasmid pAH15 [55] and eliminating erythromycin resistance. The chloramphenicol acetyltransferase gene was amplified using oligonucleotides CLM-329 (GTTGTTGCTCAGGTAAAGGAGGCATATCAAATGAAC) and CLM-330 (GTTGTTTGATCATTATAAAAGCCAGTCATTAGGCCTATC) with plasmid pAH5 [55] as template. The PCR product was digested with BclI and Bpu10I and ligated to pAH15 digested with the same enzymes to create pCM1.
pCM1-imp. Plasmid pCM1-imp was created by cloning a 1081 base pair PCR product containing the imp gene and promoter region into pCM1 that had been restriction digested with HindIII and EcoRI. The oligonucleotides used to generate the imp PCR product were inositolREVecoRI (5′-GAGGAATTCACTGGTTTTATATTGGCGCGTG-3′) and inositiolFORhindIII (5′-GAGAAGCTTTAGfAGTACCTCCTGTATAGTGT-3′). The resulting pCM1-imp plasmid has the imp gene with its native promoter controlling expression.
pCM1-graRS. Plasmid pCM1-graRS was created by cloning a 1751 base pair PCR product containing the graRS genes without their native promoter into pCM1 that had been restriction digested with KpnI and EcoRI. The oligonucleotides used to generate the graRS PCR product were graSkpn (5′-AAAAAAGGTACCGTTTAAAATGACAAATTTGTC-3′) and graRecoR1 (5′-AAAGAATTCTGATATTGGGTGATATGGATGC-3′). The resulting pCM1-graRS plasmid has the graRS genes with their expression driven by the sarA promoter located on pCM1.
"year": 2010,
"sha1": "9f435e404e7d61a57c931c49f3c1b0a958f5810e",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0010146&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9f435e404e7d61a57c931c49f3c1b0a958f5810e",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
A study on the correlations between acoustic speech variables and bradykinesia in advanced Parkinson's disease
Background
Very few studies have assessed the presence of a possible correlation between speech variables and limb bradykinesia in patients with Parkinson's disease (PD). The objective of this study was to find correlations between different speech variables and upper extremity bradykinesia under different medication conditions in advanced PD patients.
Methods
Retrospective data were collected from a cohort of advanced PD patients before and after an acute levodopa challenge. Each patient was assessed with a perceptual-acoustic analysis of speech, which included several quantitative parameters [i.e., maximum phonation time (MPT) and intensity (dB)]; the Unified Parkinson's Disease Rating Scale (UPDRS) (total scores, subscores, and items); and a timed test (a tapping test for 20 s) to quantify upper extremity bradykinesia. Pearson's correlation coefficient was applied to find correlations between the different speech variables and the tapping rate.
Results
A total of 53 PD patients [men: 34; disease duration: 10.66 (SD 4.37) years; age at PD onset: 49.81 (SD 6.12) years] were included. Levodopa intake increased the MPT of sustained phonation (p < 0.01), but it reduced the speech rate (p = 0.05). In the defined-OFF condition, MPT of sustained phonation positively correlated with both bilateral mean (p = 0.044, r-value: 0.299) and left (p = 0.033, r-value: 0.314) tapping. In the defined-ON condition, the MPT correlated positively with bilateral mean tapping (p = 0.003), left tapping (p = 0.003), and right tapping (p = 0.008).
Conclusion
This study confirms the presence of correlations between speech acoustic variables and upper extremity bradykinesia in advanced PD patients. These findings suggest common pathophysiological mechanisms.
Introduction
Speech alterations are very common in Parkinson's disease (PD) and are reported in 70-90% of patients (1,2). Hypokinetic dysarthria is the most frequent manifestation and can emerge at any stage of the disease, but it worsens particularly in the later stages, causing a progressive loss of communication and leading to social isolation (3). Based on perceptual analysis, hypokinetic dysarthria is characterized by a harsh, breathy voice quality, reduced variability of pitch and loudness, reduced stress, imprecise consonant articulation, and short rushes of speech interrupted by inappropriate periods of silence (4).
In recent years, the acoustic analysis of speech has become an important tool in the study of PD and other movement disorders, allowing for the quantification of the alterations in the different dimensions of speech production (5)(6)(7)(8)(9).
Mixed results have been reported regarding the effects of dopaminergic treatment on speech acoustic variables in PD patients (6,10,11). This inconsistency in results could be secondary to the complex pathophysiology of speech alterations in PD that involve both dopaminergic and non-dopaminergic (i.e., cholinergic) pathways (10).
Besides speech alterations, PD is mainly characterized by well-known cardinal motor features, including bradykinesia, rigidity, tremor, and postural instability (12). According to the MDS Clinical Diagnostic Criteria for PD, bradykinesia is defined as "slowness of movement and decrement in amplitude or speed (or progressive hesitations/halts) as movements are continued" (13). Bradykinesia may impair fine motor movements, which is usually demonstrated in PD patients during rapid alternating movements of the fingers, hands, or feet as a progressive reduction of speed and motion amplitude (14). Upper extremity bradykinesia can be clinically evaluated by using finger tapping, hand movements, and pronation-supination movements (13). It has been proposed that bradykinesia may result from a failure of basal ganglia output to reinforce the cortical mechanisms that prepare and execute the commands to move (15). This leads to particular difficulties with self-paced movements, prolonged reaction times, and abnormal pre-movement EEG activity (15). In PD patients, movement amplitude is disproportionally more affected than movement speed in the OFF-medication condition. Levodopa normalizes movement speed to a greater extent than movement amplitude, suggesting that movement speed and amplitude may be associated with partially separate mechanisms (16,17). To date, prevailing evidence has indicated that hypokinetic dysarthria is related to axial motor symptoms, while only a few studies have documented an association between a speech disorder and limb bradykinesia in PD (4, 18-22). In this setting, very few studies have quantitatively assessed the possible correlations between speech acoustic variables and upper limb bradykinesia, including the presence of possible similarities in terms of response to levodopa (18,19,23).
In addition, as previously reported, some features of hypokinetic dysarthria may respond to dopaminergic treatment (6), suggesting that hypokinetic dysarthria in PD should not be considered an axial symptom tout court, but should instead be deconstructed into different aspects that may or may not be linked to axial and appendicular PD symptoms.
Based on these premises, the objective of this study was to verify whether there are correlations between different speech variables and upper extremity bradykinesia under different medication conditions in advanced PD patients.
Participants
This study included retrospective data from a cohort of consecutive advanced PD patients admitted to the Neurology Unit of the OCB Hospital, Italy, for a preoperative evaluation before subthalamic nucleus deep brain stimulation (STN-DBS) surgery from 2012 to 2017.
The inclusion criteria were: PD diagnosis according to the MDS criteria (13); the presence of disabling motor complications (i.e., motor fluctuations or L-dopa-induced dyskinesia) not optimized with anti-PD medication; and age younger than 75 years (24,25).
Patients with severe cognitive impairment or non-native Italian speakers were excluded from the analysis. This study was approved by the local ethics committee (Protocol number: 0031287/18), and written informed consent was obtained from participants according to the Declaration of Helsinki (26).
Clinical assessment
Clinical evaluations were performed following the Core Assessment Program for Surgical Interventional Therapies in Parkinson's Disease (CAPSIT-PD) protocol (17).
Each patient was evaluated in the defined-OFF medication condition (after a 12-h withdrawal of antiparkinsonian medication) and in the defined-ON medication condition (60 min after the administration of a 30% higher dose of the usual levodopa morning intake). Disease severity was assessed through the four parts of the Unified Parkinson's Disease Rating Scale (UPDRS) (27) and the Hoehn and Yahr (H&Y) staging system. Furthermore, upper extremity bradykinesia was quantitatively assessed through a timed tapping test in accordance with the CAPSIT-PD protocol (17, 28) in the defined-OFF and -ON medication conditions. Each patient tapped alternatingly on two buttons (at a 20 cm distance) with the index finger by using the whole upper extremity for a defined fixed time (20 s). Each hand was tested twice, and the mean value of the tapping rate was reported. All tests were videotaped, and through the retrospective analysis of each video, it was possible to determine with certainty the correct number of taps for each task. Free video-editing software (Wondershare Filmora 9) was used to analyze the video in slow motion. The retrospective analysis of the video was performed by a rater (GDR) blinded to both the defined-ON and -OFF conditions.
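The scoring described above (two 20-s trials per hand, averaged, plus a bilateral mean used later in the correlation analysis) can be sketched as a small helper; the function name and input layout are illustrative, not part of the CAPSIT-PD protocol:

```python
def tapping_summary(left_trials, right_trials):
    """Summarize a CAPSIT-PD-style timed tapping test.

    left_trials / right_trials -- tap counts from the two 20-s trials
    performed with each hand.  Returns the per-hand mean tapping rates
    and the bilateral mean ((left + right) / 2).
    """
    left = sum(left_trials) / len(left_trials)
    right = sum(right_trials) / len(right_trials)
    return {"left": left, "right": right,
            "bilateral_mean": (left + right) / 2.0}


# Example with made-up tap counts:
summary = tapping_summary([40, 44], [50, 54])
# summary == {"left": 42.0, "right": 52.0, "bilateral_mean": 47.0}
```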
The total amount of the dopaminergic treatment was determined using the L-dopa equivalent daily dose (LEDD) (29).
Speech evaluation
Patients' speech was evaluated in both the defined-OFF and defined-ON medication conditions by two speech and language therapists (CB and AG) with expertise in phonetics and movement disorders related to speech disturbances. Each session of speech evaluation took place immediately at the end of each neurological examination. Evaluations were made in a quiet room. The speech was recorded using a digital voice recorder maintained at 20 cm from the patients' lips. The acoustic speech analysis was performed using the Praat software (30). The perceptual-acoustic analysis was retrospectively performed, with the speech and language therapists blinded to the patient's condition. The speech assessment protocol (6, 7) consisted of various tasks, including sustained production of the phoneme /a/ for as long as possible and performed three times, counting from 1 to 20, and an oral diadochokinesis task in which the participants produced the syllables /pa/, /ta/, /ka/ and the pseudoword /pataka/ as fast as they could with habitual pitch and loudness.
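The maximum phonation time (MPT) of the sustained /a/ was measured with Praat. As a rough illustration of how such a duration can be derived from a recording, the sketch below estimates MPT as the longest contiguous run of high-energy frames; it is a simplified stand-in for Praat's analysis, and the frame length and threshold are arbitrary choices, not the settings used in the study:

```python
import numpy as np

def estimate_mpt(samples, sample_rate, frame_ms=25, rel_threshold=0.1):
    """Estimate maximum phonation time (seconds) from a mono signal.

    The signal is cut into short frames; a frame counts as phonation if
    its RMS energy exceeds rel_threshold times the loudest frame's RMS.
    MPT is the duration of the longest uninterrupted run of such frames.
    """
    samples = np.asarray(samples, dtype=float)
    frame = max(1, int(sample_rate * frame_ms / 1000))
    n_frames = len(samples) // frame
    if n_frames == 0:
        return 0.0
    frames = samples[: n_frames * frame].reshape(n_frames, frame)
    rms = np.sqrt((frames ** 2).mean(axis=1))
    active = rms > rel_threshold * rms.max()
    best = run = 0
    for is_active in active:
        run = run + 1 if is_active else 0
        best = max(best, run)
    return best * frame / sample_rate
```

In practice the study's acoustic measures came from Praat, which uses more refined voicing detection; this sketch only makes the notion of "duration of sustained phonation" concrete.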
Statistical analysis
Descriptive statistics were performed for clinical and acoustic variables; continuous variables were expressed as mean [standard deviation (SD)] and median (range), while frequencies and percentages were calculated for categorical variables. The variables were tested for normal distribution using the Kolmogorov-Smirnov test of normality. A p-value of <0.05 was considered significant.
The primary outcome of the study was the possible correlation between the different speech variables and upper extremity bradykinesia quantified through the tapping test in the different medication conditions. Concerning the tapping test, the following variables were selected for both the defined-OFF and defined-ON medication conditions: the mean value of the left hand; the mean value of the right hand; and the mean value of both hands ([mean value of the left hand + mean value of the right hand]/2). The secondary outcome was the possible correlation between the levodopa-induced variation of speech variables and that of upper extremity bradykinesia, both calculated as follows: [(defined-ON value − defined-OFF value)/defined-ON value] × 100. Positive scores denote an increase in speech variables or tapping rate. The correlation between speech variables and UPDRS motor scores and subscores was not included in the analysis because it was already performed in a previous study by our group (6). Considering that most variables were normally distributed, the Pearson correlation coefficient was applied. The correlation analyses, including speech and motor variables, were performed for each of the single conditions tested (defined-OFF, defined-ON).
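The computations in this analysis plan (normality screening, Pearson's r within each medication condition, and the defined-ON/defined-OFF percentage change) can be sketched as follows. Variable names are illustrative; the Kolmogorov-Smirnov test is shown with the sample mean and SD plugged in, which is a common, if approximate, way to apply it:

```python
import numpy as np
from scipy import stats

def levodopa_pct_change(on, off):
    """[(defined-ON value - defined-OFF value) / defined-ON value] * 100.

    Positive scores denote an increase in the variable under levodopa.
    """
    on = np.asarray(on, dtype=float)
    off = np.asarray(off, dtype=float)
    return (on - off) / on * 100.0

def normal_enough(x, alpha=0.05):
    """Kolmogorov-Smirnov check against a normal distribution with the
    sample's mean and SD (approximate, since the parameters are estimated)."""
    x = np.asarray(x, dtype=float)
    stat, p = stats.kstest(x, "norm", args=(x.mean(), x.std(ddof=1)))
    return p > alpha

def correlate(x, y):
    """Pearson correlation, as used in the study.  Note that r quantifies
    the strength of association, not causation."""
    r, p = stats.pearsonr(x, y)
    return r, p
```

A variable failing the normality check would typically be handled with a non-parametric alternative such as Spearman's rho (`stats.spearmanr`).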
Demographic and clinical results
From a total of 66 consecutive advanced PD patients, 13 patients were excluded from the analyses for the following reasons: non-native Italian speakers (five patients), missing data (four patients), and severe cognitive impairment (four patients). The clinical and demographic characteristics of the remaining 53 patients are reported in Table 1.
In the defined-ON medication condition, all motor scores, subscores, and tapping rates significantly improved, whereas only speech rate (p = 0.005) and MPT (p = 0.001) were influenced ( Table 2).
Correlation between speech variables and upper extremity bradykinesia
The correlations between speech variables and upper extremity bradykinesia are reported in Table 3.
In the defined-OFF medication condition, the MPT of sustained phonation correlated positively with bilateral mean tapping (p = 0.044, r-value: 0.299). Analyzing the test for the single limb, the correlation remained significant for the left (p = 0.033, r-value: 0.314), which presented a worse performance, while only a trend was detected on the right (p = 0.067, r-value: 0.272).
In the defined-ON medication condition, the MPT correlated positively with bilateral mean tapping (p = 0.003, r-value: 0.429); in this case, a significant correlation was maintained for both left (p = 0.003, r-value: 0.438) and right tapping (p = 0.008, r-value: 0.391).
In both the defined-OFF and -ON medication conditions, speech rate did not show a correlation with bradykinesia, with the exception of a weak significance (p = 0.038, r-value: 0.307) for the right tapping in the defined-OFF condition. Nevertheless, these data were insufficient to support an unequivocal relationship between the two findings.
Correlation between levodopa-induced changes in speech variables and upper extremity bradykinesia
Although both tapping tests, speech rate, and MPT significantly changed after levodopa intake, no significant correlations were found, which means that the effects of levodopa on these variables were not uniform (Table 4).
Discussion
The main objective of this study was to determine a relationship between speech and upper extremity bradykinesia in advanced PD patients. We found different correlations between speech acoustic variables and tapping rate in both the defined-OFF and defined-ON medication conditions. However, it is important to keep in mind that the correlation tests, either Pearson or Spearman, do not prove causality but only strength of association. In particular, in our study, neither the methodology employed nor the statistical analysis was designed to infer causation. In addition, curiously, we found a positive correlation between speech impairment and left bradykinesia during the defined-OFF condition and with both left and right bradykinesia during the defined-ON condition, which could be surprising. However, in the defined-OFF condition, the correlation between speech acoustic variables and right tapping showed a trend toward significance. Thus, we may assume that, with a larger cohort, the correlation might become significant even with the right tapping.
In our cohort, the MPT correlated with upper limb bradykinesia in both pharmacological conditions, meaning that patients with a longer MPT performed better on the tapping test. Phonatory alterations are quite common in PD, including insufficient breath support, a reduction in phonation time, increased acoustic noise, instability of the articulatory organs, microperturbations of frequency/amplitude, and a harsh, breathy voice quality (31). The physiological and anatomical correlates of these alterations have been investigated through laryngoscopy, stroboscopy, photoglottography, laryngeal electromyography, computed tomography, pulmonary function testing, and aerodynamic assessments (32). These investigations have revealed numerous abnormalities, including incomplete glottic closure and vocal fold hypoadduction/bowing, that account for these voice changes. Many of these phenomena are likely related to rigidity or bradykinesia of the laryngeal muscles (32). The clinical features of hypokinetic dysarthria reflect the effects on speech of aberrations in the control of proper background tone and the supportive neuromuscular activity on which the quick, discrete, phasic movements of speech are superimposed. Hypokinetic dysarthria prominently affects aspects of speech motor control such as the preparation, maintenance, and switching of motor programs, with movements that are attenuated in range and amplitude and restricted in their flexibility and speed, allowing inferences about the role of the basal ganglia control circuit in speech motor control (33).
The MPT, a marker of reduced phonation time, depends on many factors, including phonation volume (which varies with age, sex, and stature), mean airflow rate, comprehension of the task, and maximal effort (34). Reduced MPT has been well documented mainly in PD hypokinetic dysarthria (9, 35-39), probably as a consequence of laryngeal dysfunction or decreased respiratory volume, leading to the development of short phrases and short rushes of speech (6). This hyporespiratory pattern may result from rigidity and bradykinesia in the respiratory muscles, particularly the intercostal ones (38). In PD patients, reduced respiratory excursions, reduced vital capacity, paradoxical respiratory movements, rapid breathing cycles, and difficulty altering vegetative breathing for speech breathing seem consistent with rigidity, hypokinesia, and difficulty in initiating movements (33). These factors could significantly contribute to reduced physiologic support for speech and to some of the phonatory disorders and prosodic abnormalities, including short phrases and short rushes of speech (33).
The short-term improvement of MPT with levodopa was demonstrated previously, supporting the hypothesis that levodopa might improve thoracic mobility in PD patients (38). This finding was also confirmed by a recent study from our group, which showed that MPT was the only speech acoustic variable responsive to levodopa in the acute setting (6). The correlation between phonatory alterations and limb bradykinesia has also been confirmed (based on clinical scales) in de novo PD patients (22), suggesting that dopaminergic deficiency may be involved in voice dysfunction in PD, presumably through bradykinesia and/or rigidity of the laryngeal musculature (22).
Based on these premises, we suppose that the correlation found between MPT and upper extremity tapping rate in our cohort might reflect a common pathophysiological basis (i.e., bradykinesia of appendicular, laryngeal, and respiratory muscles) with the involvement of prevalent dopaminergic pathways responsive to levodopa. In fact, the degree of response to levodopa is different between these two parameters, as confirmed in our cohort. Indeed, no correlations were found between levodopa-induced percentage changes in MPT and upper extremity bradykinesia, which means that, even if levodopa improves both parameters, this improvement is not uniform. Moreover, it must be considered that the tapping test and MPT are only two specific findings of two more complex functions such as movement and speech.
We found a weak correlation between speech rate and right tapping in the defined-OFF medication condition. Nonetheless, the absence of correlations in the other defined-OFF and -ON medication conditions would exclude an unequivocal relationship between the two findings.
Speech rate is generally expressed as the number of syllables pronounced during a defined time period. It is affected by different factors such as segment duration, variability between the duration of utterances, and the pause time between the different utterances (40). In PD patients, speech rate alterations have been found in both directions (i.e., slower and faster), and the mean rate differences between PD and control speakers were not found to be significant due to a highly heterogeneous overall group performance (3). Previous studies have also shown little or no effect of levodopa administration or bilateral STN-DBS on speech rate in PD patients (41-43). This was not confirmed in our cohort; indeed, speech rate was significantly reduced after levodopa intake. These contradictory results about the short-term effects of levodopa on speech rate and rhythm indicate a considerable impact of nondopaminergic mechanisms, which are implicated in the impairment of time perception, motor planning, and dysfunctional feedback mechanisms (40). Our study has several limitations, including the retrospective origin of the data and the lack of a control group of healthy subjects to compare with the PD cohort. In addition, we assessed only advanced PD patients, so further studies are needed to test the relationship between upper limb bradykinesia and speech variables in early PD patients.
Furthermore, we did not collect several demographic and anthropometric variables of the participants, including weight, height, and BMI. In addition, voice features relevant to the acoustic analysis of hypokinetic dysarthria in PD were not included in the study. It is known that f0, jitter, and shimmer, among other voice features, can be severely impaired in hypokinetic dysarthria, especially in advanced-stage PD. These features are also relevant for discriminating early- and advanced-stage PD, so they should always be considered in speech analysis in movement disorders, as reported in Rusz et al. (8) and Suppa et al. (44).
To the best of our knowledge, only one study has compared speech acoustic variables and upper limb motor dysfunction with a quantitative approach, supporting the hypothesis that the pathophysiological processes leading to limb motor dysfunction in PD may also play a role, at least partially, in a more complex function such as speech (18). In particular, significant relationships were observed between the quality of voice assessed by jitter and the amplitude decrement of finger tapping, between consonant articulation evaluated using voice onset time and expert rating of finger tapping, and between the number of pauses and the Purdue Pegboard Test score (18). Based on their results, Rusz et al. assumed that vocal fold vibration irregularities appeared to be influenced by mechanisms similar to amplitude decrement during repetitive limb movements, while consonant articulation deficits were associated with decreased manual dexterity and movement speed, likely reflecting fine motor control involvement in PD (18). Furthermore, MPT was not included among the different speech variables, and no correlation was found between diadochokinetic rate and markers of upper limb motor dysfunction, unlike in our cohort (18). This could be explained both by the different tasks used to quantify speech rate (oral diadochokinesis vs. counting from 1 to 20) and by the different pharmacological conditions tested in the two cohorts. In fact, our patients were evaluated both in the defined-OFF and defined-ON medication conditions, while in the study by Rusz et al. the patients were evaluated only on chronic pharmacological treatment (18).
Conclusion
Our study confirms the presence of some correlations between speech acoustic variables and upper extremity bradykinesia in advanced PD patients. This may be due to common pathophysiological mechanisms and the possible involvement of dopaminergic pathways, as assumed for MPT. This confirms the need to consider each altered speech parameter in PD independently. As a consequence, speech alterations should no longer be regarded as a solely axial manifestation of the disease, unresponsive to dopaminergic treatment and unrelated to the cardinal motor symptoms of PD. Future studies will be needed to confirm these data in larger samples and in early-stage PD patients.
Data availability statement
The datasets for this article are not publicly available due to concerns regarding participant/patient anonymity. Requests to access the datasets should be directed to the corresponding author.
Ethics statement
The studies involving human participants were reviewed and approved by Comitato Etico dell'Area Vasta Emilia Nord (Protocol Number: 0031287/18), and written informed consent was obtained from participants according to the Declaration of Helsinki.
Author contributions
FC, GDR, AG, CB, VF, SC, EMe, FA, and FV were responsible for writing the manuscript, data collection, and analysis. FC, GDR, CB, AG, SC, VF, EMe, SP, EMo, FA, and FV were responsible for manuscript drafting. CB, SP, EMe, FA, and FV were responsible for manuscript revision. All authors have read and approved the final manuscript.
Funding
This study was partially supported by Italian Ministry of Health -Ricerca Corrente Annual Program 2023.
Conflict of interest
EMo has received honoraria from Medtronic for consulting and lecturing and has received research grants from Ipsen and Abbott.
The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Visualization of Magnetic Flux Structure in Phosphorus-Doped EuFe$_2$As$_2$ Single Crystals
Magnetic flux structure on the surface of EuFe$_2$(As$\rm_{1-x}$P$\rm_x$)$_2$ single crystals with nearly optimal phosphorus doping levels $x=0.20$, and $x=0.21$ is studied by low-temperature magnetic force microscopy and decoration with ferromagnetic nanoparticles. The studies are performed in a broad temperature range. It is shown that the single crystal with $x=0.21$ in the temperature range between the critical temperatures $T_{\rm SC}=22$ K and $T_{\rm C}=17.7$ K of the superconducting and ferromagnetic phase transitions, respectively, has the vortex structure of a frozen magnetic flux, typical for type-II superconductors. The magnetic domain structure is observed in the superconducting state below $T_{\rm C}$. The nature of this structure is discussed.
I. INTRODUCTION
The coexistence of superconductivity and magnetic ordering has been a subject of strong interest. 1 Currently, the electric transport and magnetic properties are well studied for a number of single-crystalline compounds of the so-called magnetic superconductors: borocarbides, 2-7 uranium compounds, 8 high-temperature cuprate superconductors, 9 and iron-based superconductors. 10 An important issue of the coexistence of superconductivity and magnetism from both theoretical [11][12][13][14] and experimental perspectives [15][16][17][18] relates to the microstructure of the magnetic flux, as well as to its dynamics upon variation of the temperature and external magnetic field. Until recently, low temperatures of superconducting and magnetic phase transitions of known single crystals, as well as the requirement of a high spatial resolution, have limited experimental capabilities for visualization of the magnetic flux structure employing, e.g., magnetic force microscopy (MFM), 19 scanning Hall probe imaging, 20 and decoration with magnetic nanoparticles. [21][22][23] Recently, new iron-based compounds AFe 2 (As 1−x P x ) 2 (where A = Ba, Sr, Ca, Eu) have been synthesized. Superconductivity in these compounds can be induced by doping with phosphorus. 24 Superconductivity in EuFe 2 (As 1−x P x ) 2 single crystals occurs in a rather narrow doping range x = 0.14 − 0.25 (or in the phosphorus content range 7.0 − 12.5 at%) with the maximum superconducting transition temperature T max SC = 27 K. 25,26 The magnetic transition in the Eu 2+ subsystem is observed at temperatures T C ∼ 17 − 20 K and depends moderately on the phosphorus content (doping level) in the specified range. 25,[27][28][29][30][31][32] Previously, the magnetic flux structure was visualized with MFM on artificial thin-film superconductor/ferromagnet (Nb/FeNi) hybrid structures, 15 where the domain structure and Abrikosov vortices frozen in the superconductor were observed simultaneously.
However, in Ref. 15 the Curie temperature T C of the ferromagnetic layers was much higher than the critical temperature of the superconducting transition T SC in the niobium films. Also, vortex structures were observed in spatially homogeneous ErNi 2 B 2 C bulk superconducting single crystals (T SC = 10.5 K) in Ref. 33 using the decoration method, and were interpreted as evidence of the presence of domain boundaries in a weakly ferromagnetic phase with T C = 2.3 K.
In this work, the structure of the magnetic flux in EuFe 2 (As 1−x P x ) 2 single crystals with x = 0.20 and x = 0.21 is studied with MFM and the Bitter decoration technique in a broad temperature range. Stripe and maze domain structures, typical for ferromagnets with perpendicular magnetic anisotropy, are observed in the superconducting state below T C . In contrast to artificial hybrid systems, in EuFe 2 (As 1−x P x ) 2 an interface is absent and superconductivity and ferromagnetism coexist on the atomic scale.
II. EXPERIMENTAL DETAILS
EuFe 2 (As 1−x P x ) 2 single crystals were synthesized using the self-flux method. 34 The actual composition of the synthesized single crystals was determined by energy dispersive X-ray (EDX) microanalysis employing a Carl Zeiss Supra 50 VP SEM microscope. For MFM and decoration studies, single crystals of EuFe 2 (As 0.8 P 0.2 ) 2 and EuFe 2 (As 0.79 P 0.21 ) 2 of 1 × 1 × 0.012 mm 3 size with an atomically smooth surface were obtained by mechanical cleavage. Temperature and field dependences of the magnetization were measured on a Quantum Design MPMS-XL5 SQUID magnetometer at fields up to 5 T. The surface structure and the distribution of magnetic flux were studied using an AttoCube AttoDry 1000 atomic force microscope (AFM) with a closed-cycle cryogenic system and a base temperature of 4 K. For AFM and MFM studies, silicon cantilevers coated with a magnetic CoCr layer (MESP, Bruker) were used, with the following characteristics at 4.2 K: resonance frequency of the cantilever 87 kHz, stiffness constant 2.8 N/m, and coercive field ≈ 1400 Oe. AFM/MFM imaging was performed in an atmosphere of exchange gas (helium) at pressure P ∼ 0.5 mbar in the temperature range from 4 to 30 K, controlled with a precision of 1 mK. Prior to MFM imaging, probes were magnetized at H = 2 kOe above the superconducting transition temperature T SC = 22 K of the EuFe 2 (As 0.79 P 0.21 ) 2 sample. The topography of the surface was studied in the tapping mode and the magnetic flux structure was imaged in the MFM lift mode at 110 nm above the sample surface, with the feedback switched off and the fast scanning direction along the Y axis. The MFM contrast was provided by the phase shift in the cantilever oscillation. The decoration of the surface of the EuFe 2 (As 0.8 P 0.2 ) 2 single crystal was performed with magnetic iron particles (∼ 10 nm) in the field cooling (FC) regime at liquid helium temperatures. 21

III. RESULTS

Fig. 1 shows typical magnetic properties of the EuFe 2 (As 0.79 P 0.21 ) 2 single crystal. Fig. 1(a) demonstrates temperature dependences of the magnetization measured in the FC and zero-field cooling (ZFC) regimes. The superconducting transition temperature T SC = 22 K is indicated by the right arrow.
Step features on the ZFC and FC temperature dependences of the magnetization are attributed to a ferromagnetic phase transition. It is noteworthy that a transition to the superconducting state is also accompanied by the appearance of residual magnetization upon cooling in an external field of 10 Oe. Fig. 1(b) shows the dependence of the magnetization on the applied magnetic field parallel to the c-axis of the crystal. For the sample with x = 0.20 the temperature dependence of the magnetization and the magnetization curve at 4 K are similar, but with a higher superconducting transition temperature and a wider hysteresis loop. Fig. 2(a) demonstrates the AFM topography of the 8 × 8 µm 2 surface area of the EuFe 2 (As 0.79 P 0.21 ) 2 single crystal with a step of ∼ 100 nm height. Fig. 2(b) shows the distribution of the magnetic flux over the surface shown in Fig. 2(a) at T = 17.27 K. This structure is typical for the entire temperature range below the Curie temperature and disappears after heating above T C . Thus, the observed sign-alternating contrast can be attributed to the magnetic domain structure. Importantly, the domain structure is observed not only at zero external magnetic field, but also upon cooling in weak fields H < 100 Oe. Fig. 2(c) shows the distribution of the magnetic flux in the superconducting state in a narrow temperature range above T C . The observed contrast (light spots) corresponds to Abrikosov vortices with the magnetic flux density Φ 0 /a 2 ∼ 6 G, where Φ 0 is the magnetic flux quantum and a is the average distance between vortices. Fig. 3 shows the typical magnetic flux structure observed by the decoration method on the (001) surface of the EuFe 2 (As 0.80 P 0.20 ) 2 single crystal with the superconducting transition temperature T SC = 24 K. With MFM only a small ∼ 8 × 8 µm 2 surface area of the sample was studied, whereas the decoration method reveals the magnetic structure over almost the entire surface.
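The average vortex spacing implied by the quoted flux density follows directly from the value of the flux quantum. The short calculation below is a back-of-the-envelope check, not a computation taken from the paper:

```python
PHI_0 = 2.067833848e-15  # magnetic flux quantum, Wb (SI)

def vortex_spacing(flux_density_tesla):
    """Average inter-vortex distance a from the estimate B ~ Phi_0 / a^2."""
    return (PHI_0 / flux_density_tesla) ** 0.5

B = 6e-4                  # 6 G converted to tesla
a = vortex_spacing(B)     # roughly 1.9 micrometres
```

A spacing of about 1.9 µm is comfortably within MFM resolution, consistent with the individual vortices being visible as light spots in Fig. 2(c).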
According to the principle of image contrast formation in the decoration method, 35,36 a region of higher density of magnetic particles (light) is treated as a domain with the magnetization along the applied field direction, whereas a region with lower density or without magnetic particles (dark) is interpreted as a domain with the opposite sign of the magnetization. As can be seen, the decorated domain structure (Fig. 3(b)) agrees with the MFM imaged one (Fig. 2(b)) at corresponding scales. The period of the domain structure is about 0.9 µm. At the same time, finer details of the decorated domain structure can be resolved (Fig. 3(b)). The magnetization measurements and both the MFM imaged and decorated magnetic structures explicitly define the EuFe 2 (As 0.79 P 0.21 ) 2 and EuFe 2 (As 0.80 P 0.20 ) 2 single crystals as superconductors with ferromagnetic ordering and a superconducting transition temperature T SC above the Curie temperature T C .
IV. DISCUSSION
The experimental results can be interpreted as follows. According to the dependences shown in Fig. 1(a), the ZFC magnetization is negative below the superconducting transition temperature T SC = 22 K. In the temperature range below T C , the diamagnetic response is weakened by the ferromagnetic transition in the Eu 2+ subsystem. The exact determination of the Curie temperature using the observed features on the ZFC and FC temperature dependences of the magnetization is complicated due to competing mechanisms of superconducting and ferromagnetic orderings. In particular, maxima on the FC and ZFC temperature dependences of the magnetization are observed in Ref. 25 at T ∼ 17.7 K, whereas according to measurements of the specific heat, the Curie temperature is T C = 19 K. In this work, the transition temperature to the ferromagnetic state T C is defined as the temperature at which the domain structure is first observed, i.e., T C = 17.7 K. The dependence of the magnetization on the applied magnetic field (Fig. 1(b)) is the superposition of a typical hysteresis loop of a type-II superconductor (within the Bean model the critical current density J c is proportional to the width of the hysteresis loop) and the magnetization curve of the Eu 2+ ferromagnetic subsystem. 25 The magnetic origin of the domain structure contrast (Figs. 2(b) and 3(a)) is confirmed by the insensitivity of the MFM probe to small details of the surface topography, e.g. to the 100 nm step. Sign-alternating (phase) contrast on domains indicates perpendicular magnetic anisotropy and corresponds to the antiparallel direction of the magnetization in neighboring domains.
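The Bean-model statement above (J c proportional to the hysteresis loop width) is commonly applied in the following form for a sample of rectangular cross-section. Both the formula variant and the sample numbers below are a standard textbook estimate, not values taken from this work:

```python
def bean_jc(delta_M, a_cm, b_cm):
    """Bean critical-state estimate for a rectangular cross-section (cgs units):
    Jc [A/cm^2] = 20 * dM / (a * (1 - a / (3 b))),
    where delta_M is the hysteresis loop width (emu/cm^3) and
    a <= b are the in-plane sample dimensions (cm)."""
    assert a_cm <= b_cm
    return 20.0 * delta_M / (a_cm * (1.0 - a_cm / (3.0 * b_cm)))

# Illustrative numbers: a 1 x 1 mm^2 platelet with a loop width of 100 emu/cm^3
jc = bean_jc(100.0, 0.1, 0.1)   # about 3e4 A/cm^2
```

This is how the loop width in Fig. 1(b) would typically be converted into a critical current density, once the sample geometry is known.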
Individual vortices could not be resolved with decoration since the expected magnetic flux density within domains is about 0.9 T (M s = 714 cgs units/cm 3 ) at liquid helium temperatures, 25 whereas the resolution of the decoration method is limited to 0.2 T. 37 The spatial resolution of MFM also does not allow identification of individual vortices if the local magnetic flux density in domains is much higher than 10 mT. 38,39 At the same time, the fine structure of domains, which is shown in Fig. 3(b), can be explained within the framework of domain branching in ferromagnets. 40 An alternative origin of the fine domain structure is the so-called intermediate-mixed state, 41 which appears if the thickness of the superconducting crystal is much larger than the width of domains and is characterized by a mixture of flux-free domains (Meissner phase) and domains with Abrikosov vortices. In contrast to the structure of the intermediate-mixed state, the fields of vortices in neighboring branching domains should be oppositely oriented. Such a possibility was theoretically considered in Ref. 13. According to this model, different types of domain configurations can be formed in a ferromagnetic superconductor depending on the parameters (magnetic and superconducting): the saturation magnetization (M s ), the London penetration depth (λ), the lower critical field (H c1 ), and the domain wall width w. Precise measurements of these parameters and studies of the fine structure of domains will provide further clarification of the mechanisms of the coexistence and mutual influence of superconductivity and ferromagnetism in EuFe 2 (As 1−x P x ) 2 single crystals.
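The ~0.9 T figure quoted for the flux density inside a fully magnetized domain follows directly from the saturation magnetization via B = 4πM s in cgs units:

```python
import math

M_s = 714.0                      # saturation magnetization, emu/cm^3 (Ref. 25)
B_gauss = 4.0 * math.pi * M_s    # field inside a fully magnetized domain, gauss
B_tesla = B_gauss * 1e-4         # ~0.9 T, above both the decoration (~0.2 T)
                                 # and the MFM single-vortex imaging limits
```

Since this internal field exceeds both experimental limits quoted above, neither technique can resolve individual vortices inside the domains, as stated in the text.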
V. CONCLUDING REMARKS
The main result of this work is the observation of the magnetic domain structure in EuFe 2 (As 1−x P x ) 2 superconducting single crystals with x = 0.20, and x = 0.21. This domain structure disappears in EuFe 2 (As 0.80 P 0.20 ) 2 single crystal after heating above the Curie temperature T C = 17.7 K. Thus, the magnetic domain structure has been observed for the first time in spatially homogeneous single crystals with the superconducting transition temperature T SC exceeding the Curie temperature T C , T SC > T C , which unambiguously indicates the coexistence of ferromagnetism and superconductivity in this material. The observations of the magnetic flux structure using low-temperature MFM and decoration methods in real space (in contrast to X-ray and neutron diffraction studies) provide important information on the topology, real sizes, and shape of domains. At the same time, only further combined studies employing, e.g. diffraction methods and scanning probe microscopy, in particular high-resolution scanning tunneling microscopy, as well as decoration with ferromagnetic nanoparticles in a broad range of temperatures and magnetic fields can clarify the mechanism of the coexistence of superconductivity and ferromagnetism in these ferromagnetic superconductors.
Development of a Two-Level Segmentation System for Iris Recognition Using Circular and Linear Hough Transform Algorithm
Recently, the use of biometric systems has become increasingly significant in both the public and private sectors, and they have gradually replaced traditional security systems. Iris recognition biometric systems have proved to be efficient at personal recognition with the highest recognition accuracy. In this work, a hybrid segmentation technique for iris recognition using circular and linear Hough transform algorithms was developed. The hybrid segmentation technique was evaluated in terms of segmentation and recognition time. False Acceptance Rate (FAR) and False Rejection Rate (FRR) were the parameters used for comparison in the iris recognition. The system was implemented in MATLAB. Evaluation of the hybrid segmentation technique showed that the system was fast at both stages: the average segmentation time was 0.21089 seconds, while the average recognition time was 0.04113 seconds. The results show the approach is efficient for the purpose of authentication.
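The FAR and FRR metrics named above are computed from genuine (same-iris) and impostor (different-iris) match scores at a chosen decision threshold. The sketch below is a generic Python illustration (the authors' system is in MATLAB), and the score values are invented for demonstration:

```python
def far_frr(genuine_scores, impostor_scores, threshold):
    """FAR/FRR at a distance threshold (lower score = better match).

    FAR: fraction of impostor comparisons wrongly accepted (score <= threshold).
    FRR: fraction of genuine comparisons wrongly rejected (score > threshold).
    """
    far = sum(s <= threshold for s in impostor_scores) / len(impostor_scores)
    frr = sum(s > threshold for s in genuine_scores) / len(genuine_scores)
    return far, frr

# Illustrative Hamming-distance scores, not from the paper's experiments
genuine = [0.12, 0.18, 0.25, 0.30, 0.22]
impostor = [0.41, 0.47, 0.38, 0.52, 0.45]
far, frr = far_frr(genuine, impostor, threshold=0.35)
```

Sweeping the threshold trades FAR against FRR, which is why both rates must be reported together when comparing recognition systems.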
Introduction
With increasing emphasis on security nowadays, biometric technologies are becoming much more important than ever (Peihua and Xiaomin, 2008). In particular, iris recognition has received growing interest in recent years (Adegoke et al.). Iris recognition is a method of biometric authentication that uses pattern recognition techniques based on high resolution images of the irises of an individual's eyes. It uses camera technology, with subtle infrared illumination, to create images of the detail-rich, intricate structures of the iris, converted into digital templates; these images provide mathematical representations of the iris that yield unambiguous positive identification of an individual. Iris recognition efficacy is rarely impeded by glasses or contact lenses. Because of its speed of comparison, iris recognition is a biometric technology well-suited for one-to-many identification. A key advantage of iris recognition is its stability, or template longevity: barring trauma, a single enrollment can last a lifetime. Although small (11 mm) and sometimes problematic to image, the iris has the great mathematical advantage that its pattern variability among different persons is enormous. In addition, as an internal (yet externally visible) organ of the eye, the iris is well protected from the environment and stable over time. As a planar object, its image is relatively insensitive to angle of illumination, and changes in viewing angle cause only affine transformations; even the non-affine pattern distortion caused by pupillary dilation is readily reversible.
The three main stages of an iris recognition system are image preprocessing, feature extraction and template matching. The iris image needs to be preprocessed to obtain the useful iris region. Image preprocessing is divided into three steps: iris segmentation, iris normalization, and image enhancement. Iris segmentation detects the inner and outer boundaries of the iris. Eyelids and eyelashes that may cover the iris region are detected and removed. Iris normalization converts the iris image from Cartesian coordinates to polar coordinates. Feature extraction uses texture analysis methods to extract features from the normalized iris image. The significant features of the iris are extracted for accurate identification purposes. Template matching compares the user template with templates from the database using a matching metric.
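The Cartesian-to-polar normalization step can be sketched as a Daugman-style "rubber sheet" unwrapping of the annulus between the pupil and iris boundaries onto a fixed polar grid. The Python sketch below is a minimal illustration (nearest-neighbour sampling, no eyelid masking), not the system's actual MATLAB implementation; the function name and resolutions are assumptions:

```python
import math

def normalize_iris(image, pupil, iris, radial_res=16, angular_res=64):
    """Unwrap the iris annulus onto a radial_res x angular_res polar grid.

    image: 2D list of grey values; pupil/iris: (cx, cy, radius) circles.
    Row i runs from the pupil edge (i=0) to the iris edge; column j is angle.
    """
    (pcx, pcy, pr), (icx, icy, ir) = pupil, iris
    h, w = len(image), len(image[0])
    polar = [[0] * angular_res for _ in range(radial_res)]
    for j in range(angular_res):
        theta = 2 * math.pi * j / angular_res
        # boundary points at this angle on the pupil and iris circles
        xp, yp = pcx + pr * math.cos(theta), pcy + pr * math.sin(theta)
        xi, yi = icx + ir * math.cos(theta), icy + ir * math.sin(theta)
        for i in range(radial_res):
            r = i / (radial_res - 1)          # 0 at pupil edge, 1 at iris edge
            x = int(round((1 - r) * xp + r * xi))
            y = int(round((1 - r) * yp + r * yi))
            if 0 <= x < w and 0 <= y < h:
                polar[i][j] = image[y][x]
    return polar
```

Working in this normalized grid makes subsequent feature extraction invariant to pupil dilation and to the iris's position in the original image.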
Iris segmentation locates the valid part of the iris for iris biometrics, including finding the pupillary and limbic boundaries of the iris, localizing its upper and lower eyelids if they occlude, and detecting and excluding any superimposed occlusions of eyelashes, shadows or reflections (Lim et al., 2001). The uniqueness and significance of segmentation to the effectiveness of any iris recognition system cannot be overemphasized. It determines the effectiveness of the system (Nakissa and Mohammad, 2008). According to Daugman (1993), the two most recognised iris segmentation approaches are attributed to Daugman and Wildes. Daugman developed the integro-differential operator to find circular pupil and limbus boundaries. It can be interpreted as a circular edge detector, which searches, in an image smoothed by a Gaussian filter, for the parameters of a circular boundary along which the integral derivative is maximal. Wildes proposed a two-stage iris segmentation method: first a gradient-based edge map is computed, and then the inner and outer boundaries are detected using the Hough transform (Wildes, 1997). It is reported that most failures to match in iris recognition systems result from inaccurate segmentation (Ma et al., 2004). www.ijsr.net
Licensed Under Creative Commons Attribution CC BY
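Daugman's integro-differential search described above can be illustrated with a toy version: hold the candidate centre fixed, compute the mean intensity along circles of increasing radius, and pick the radius where that mean changes most sharply. The Python sketch below (function names and sampling density are assumptions, and the Gaussian smoothing and centre search of the real operator are omitted) is for illustration only:

```python
import math

def circular_mean(image, cx, cy, r, n=64):
    """Mean intensity along a circle of radius r (nearest-neighbour sampling)."""
    h, w = len(image), len(image[0])
    vals = []
    for k in range(n):
        t = 2 * math.pi * k / n
        x = int(round(cx + r * math.cos(t)))
        y = int(round(cy + r * math.sin(t)))
        if 0 <= x < w and 0 <= y < h:
            vals.append(image[y][x])
    return sum(vals) / len(vals) if vals else 0.0

def find_boundary(image, cx, cy, r_min, r_max):
    """Toy integro-differential search: the radius with the largest jump in
    the circular mean intensity, for a fixed candidate centre (cx, cy)."""
    means = [circular_mean(image, cx, cy, r) for r in range(r_min, r_max + 1)]
    diffs = [abs(means[i + 1] - means[i]) for i in range(len(means) - 1)]
    best = max(range(len(diffs)), key=lambda i: diffs[i])
    return r_min + best  # radius just inside the sharpest edge
```

On a dark pupil against a brighter iris, the sharpest jump in the circular mean marks the pupillary boundary, which is exactly what the full operator maximizes after smoothing.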
The objective of this research work is to develop an object-oriented system that performs a hybrid segmentation technique for iris recognition using the circular Hough transform algorithm. The system will then be implemented using the algorithms to measure the accuracy of the hybrid segmentation technique. Finally, the performance evaluation of the system will be carried out.
Literature Review
A biometric system is essentially a pattern-recognition system that recognizes a person based on a feature vector derived from specific physiological or behavioral characteristics that the person possesses. That feature vector is usually stored in a database after being extracted. A biometric system based on physiological characteristics is more reliable than one based on behavioral characteristics (Prabhakar et al., 2003).
The history of iris recognition goes back to the mid-19th century, when the French physician Alphonse Bertillon studied the use of eye color as an identifier (Boles and Boashash, 1998). However, it is believed that the main idea of using iris patterns for identification, the way we know it today, was first introduced by an eye surgeon, Frank Burch, in 1936 (Daugman, 2003). In 1987, two ophthalmologists, Flom and Safir, patented this idea and proposed it to Daugman, a professor at Harvard University, to study the possibilities of developing an iris algorithm. After a few years of scientific experiments, Daugman proposed and developed a high confidence iris recognition system and published the results in 1993 (Daugman, 2003). The proposed system then evolved and achieved better performance over time through testing and optimization with respect to large databases.
Reports have shown that Daugman's system has zero false matching rates based on tests done by organizations such as British Telecom, US Sandia Labs, the UK National Physical Laboratory, and the National Biometric Test Center at SJSU (Daugman, 2003). The information extracted from an iris is in binary format and stored in only 256 bytes to allow the creation of nationwide IrisCode databases. The search engine is based on the Boolean exclusive-OR operator (XOR) to allow extremely fast comparisons in the matching process. The overall characteristics of the proposed algorithm offer real-time and high confidence identification in applications such as passenger control in airports, border control and access control in high-security areas. A few years after the publication of the first algorithm by Daugman, other researchers developed new iris recognition algorithms.
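The XOR-based comparison of 256-byte templates mentioned above amounts to a fractional Hamming distance: XOR the two codes and count the differing bits. A minimal Python sketch (the bit patterns are illustrative, and the masking bits used by the real IrisCode are omitted):

```python
def hamming_distance(code_a, code_b):
    """Fractional Hamming distance between two iris codes given as bytes
    (e.g. 256-byte templates): XOR the codes and count differing bits."""
    assert len(code_a) == len(code_b)
    differing = sum(bin(x ^ y).count("1") for x, y in zip(code_a, code_b))
    return differing / (len(code_a) * 8)

a = bytes([0b10110010] * 256)
b = bytes([0b10110011] * 256)   # one bit differs per byte -> distance 1/8
```

Because XOR and bit counting are cheap hardware operations, millions of such comparisons per second are feasible, which is what makes one-to-many identification practical.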
The Human Iris
The iris is a thin circular diaphragm, which lies between the cornea and the lens of the human eye. A front-on view of the iris is shown in Figure 2.1. The iris is perforated close to its centre by a circular aperture known as the pupil. The function of the iris is to control the amount of light entering through the pupil, and this is done by the sphincter and the dilator muscles, which adjust the size of the pupil. The average diameter of the iris is 12 mm, and the pupil size can vary from 10% to 80% of the iris diameter (Daugman, 2002). The iris consists of a number of layers; the lowest is the epithelium layer, which contains dense pigmentation cells. The stromal layer lies above the epithelium layer, and contains blood vessels, pigment cells and the two iris muscles. The density of stromal pigmentation determines the colour of the iris. The externally visible surface of the multi-layered iris contains two zones, which often differ in colour (Wolff, 1976): an outer ciliary zone and an inner pupillary zone, divided by the collarette, which appears as a zigzag pattern.
Formation of the iris begins during the third month of embryonic life. The unique pattern on the surface of the iris is formed during the first year of life, and pigmentation of the stroma takes place for the first few years. The only characteristic that is dependent on genetics is the pigmentation of the iris, which determines its colour. Due to the epigenetic nature of the iris pattern, the two eyes of an individual contain completely independent iris patterns, and identical twins possess uncorrelated iris patterns (Wolff, 1976).
Iris Recognition
The iris is an externally visible yet protected organ whose unique epigenetic pattern remains stable throughout adult life. These characteristics make it very attractive for use as a biometric for identifying individuals. Image processing techniques can be employed to extract the unique iris pattern from a digitized image of the eye and encode it into a biometric template, which can be stored in a database. This biometric template contains an objective mathematical representation of the unique information stored in the iris, and allows comparisons to be made between templates. When a subject wishes to be identified by an iris recognition system, their eye is first photographed, and a template is created for their iris region. This template is then compared with the other templates stored in the database until either a matching template is found and the subject is identified, or no match is found and the subject remains unidentified. In line with the basic steps of biometric system operation, iris recognition involves the following processes: image capturing, iris segmentation, iris normalization, feature encoding, and matching.
Image capturing
The first step is capturing the image of the eye to be used. To capture the actual iris pattern, an iris camera is used, with a resolution of at least 70 pixels across the iris (Daugman, 2003). Special cameras with near-infrared illumination, at wavelengths of about 700-900 nm, can also be used. Other techniques provide visual feedback through a mirror or video image to enhance user cooperation, helping users position their eyes within the cameras' field of view.
Iris Segmentation
The first stage of iris recognition is to isolate the actual iris region in a digital eye image. The eyelids and eyelashes normally occlude the upper and lower parts of the iris region. Also, specular reflections can occur within the iris region, corrupting the iris pattern. A technique is required to isolate and exclude these artefacts as well as to locate the circular iris region. The success of segmentation depends on the imaging quality of the eye images.
Iris Normalization
Once the iris region is successfully segmented from an eye image, the next stage is to transform the iris region so that it has fixed dimensions in order to allow comparisons. The dimensional inconsistencies between eye images are mainly due to the stretching of the iris caused by pupil dilation under varying levels of illumination. Other sources of inconsistency include varying imaging distance, rotation of the camera, head tilt, and rotation of the eye within the eye socket. The normalization process produces iris regions with the same constant dimensions, so that two photographs of the same iris taken under different conditions will have their characteristic features at the same spatial location.
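A minimal sketch of such a normalization (Daugman's rubber-sheet model, sampling the annulus between the pupil and iris boundaries onto a fixed grid) could look as follows. The function name, grid resolutions and circle parameterization are illustrative choices, not the paper's actual implementation:

```python
import numpy as np

def normalize_iris(image, pupil, iris, radial_res=20, angular_res=240):
    """Unwrap the annular iris region into a fixed-size rectangular block.

    pupil, iris: (xc, yc, r) circles for the iris/pupil and iris/sclera
    boundaries.  Every iris, whatever the pupil dilation, maps onto the
    same radial_res x angular_res grid, so templates can be compared
    point by point.
    """
    xp, yp, rp = pupil
    xi, yi, ri = iris
    h, w = image.shape
    out = np.zeros((radial_res, angular_res))
    thetas = np.linspace(0, 2 * np.pi, angular_res, endpoint=False)
    for j, theta in enumerate(thetas):
        # boundary points along this angular direction
        x0, y0 = xp + rp * np.cos(theta), yp + rp * np.sin(theta)
        x1, y1 = xi + ri * np.cos(theta), yi + ri * np.sin(theta)
        for i, frac in enumerate(np.linspace(0.0, 1.0, radial_res)):
            # nearest-neighbour sampling between the two boundaries
            x = int(np.clip(round(x0 + frac * (x1 - x0)), 0, w - 1))
            y = int(np.clip(round(y0 + frac * (y1 - y0)), 0, h - 1))
            out[i, j] = image[y, x]
    return out
```

Because pupil dilation only changes the inner boundary, the same physical iris feature lands at the same (i, j) cell across captures, which is exactly the property the comparison step relies on.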
Feature Encoding
In order to provide accurate recognition of individuals, the most discriminating information present in an iris pattern must be extracted. Only the significant features of the iris must be encoded so that comparisons between templates can be made. Most iris recognition systems make use of a band-pass decomposition of the iris image to create a biometric template.
Matching
The template generated in the feature encoding process also needs a corresponding matching metric, which gives a measure of similarity between two iris templates. This metric should give one range of values when comparing templates generated from the same eye, known as intra-class comparisons, and another range of values when comparing templates created from different irises, known as inter-class comparisons. These two cases should give distinct and separate values, so that a decision can be made with high confidence as to whether two templates are from the same iris or from two different irises.
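Assuming the usual binary IrisCode representation, such a metric can be sketched as a normalized Hamming distance, with noise masks excluding occluded bits and circular shifts compensating eye rotation between captures. All names and the shift range below are illustrative, not taken from the paper:

```python
import numpy as np

def hamming_distance(code_a, code_b, mask_a, mask_b):
    """Fraction of disagreeing bits over the jointly usable bits."""
    usable = mask_a & mask_b
    n = usable.sum()
    if n == 0:
        return 1.0
    return float(((code_a ^ code_b) & usable).sum()) / n

def match(code_a, code_b, mask_a, mask_b, max_shift=8):
    """Best (lowest) Hamming distance over circular shifts of the
    angular axis, compensating head/eye rotation between captures."""
    best = 1.0
    for s in range(-max_shift, max_shift + 1):
        hd = hamming_distance(np.roll(code_a, s, axis=1), code_b,
                              np.roll(mask_a, s, axis=1), mask_b)
        best = min(best, hd)
    return best
```

Intra-class pairs then cluster near a distance of 0, while independent irises cluster near 0.5, which is what makes a single decision threshold workable.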
Iris Segmentation
The iris region, shown in Figure 2.1, can be approximated by two circles: one for the iris/sclera boundary and another, interior to the first, for the iris/pupil boundary. The eyelids and eyelashes normally occlude the upper and lower parts of the iris region. Also, specular reflections can occur within the iris region, corrupting the iris pattern. A technique is required to isolate and exclude these artefacts as well as to locate the circular iris region.
Adaptive Thresholding
Thresholding is a technique generally used for detecting eyelashes, since they are dark compared to the surrounding regions: if a pixel's intensity value is less than the threshold value, the point is taken to belong to an eyelash. Eyelashes are of two types: separable eyelashes, and multiple eyelashes that are grouped together. Separable eyelashes are detected using Gabor filters; if the filter response at a point is less than the threshold value, the point belongs to an eyelash. Multiple eyelashes are detected by calculating the variance of intensity in a small window; if the variance is less than the threshold, the centre of the window is taken as a point in an eyelash.
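The two detection rules above can be sketched as follows. The thresholds, window size, and the plain-intensity stand-in for the Gabor filtering step are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def eyelash_mask(gray, sep_thresh=40, var_thresh=25, win=5):
    """Mark probable eyelash pixels in a grayscale eye image.

    Separable (isolated) lashes: dark pixels below an intensity
    threshold (the text applies a Gabor filter first; raw intensity is
    used here for brevity).  Multiple (bunched) lashes: windows whose
    intensity variance falls below var_thresh, restricted to darkish
    pixels so flat bright regions are not flagged.
    """
    mask = gray < sep_thresh                     # separable eyelashes
    h, w = gray.shape
    half = win // 2
    for y in range(half, h - half):
        for x in range(half, w - half):
            patch = gray[y - half:y + half + 1, x - half:x + half + 1]
            if patch.var() < var_thresh and gray[y, x] < 2 * sep_thresh:
                mask[y, x] = True                # bunched eyelashes
    return mask
```

Pixels flagged in this mask would then be excluded from the noise mask used at the matching stage, so corrupted bits never contribute to the Hamming distance.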
Hough Transform
Hough transform is a standard computer vision algorithm that can be used to determine the parameters of simple geometric objects, such as lines and circles, present in an image. The circular Hough transform can be employed to deduce the radius and centre coordinates of the pupil and iris regions. Automatic segmentation algorithms based on the circular Hough transform were employed by Wildes et al. (1994), Kong and Zhang (2001), and Tisse et al. (2002). Firstly, an edge map is generated by calculating the first derivatives of intensity values in an eye image and then thresholding the result. From the edge map, votes are cast in Hough space for the parameters of circles passing through each edge point. A maximum point in the Hough space will correspond to the radius and centre coordinates of the circle best defined by the edge points.
Advantages of Iris over Other Biometric Features
The iris has been described as the ideal part of the human body for biometric recognition, and iris recognition has been one of the most effectively deployed biometrics for secure, efficient, and expedited airport operations in both landside and airside applications. Iris recognition's unique capabilities are proven to increase security, speed, and reliability of user identification for several reasons. It is an internal organ that is well protected against damage and wear by a highly transparent and sensitive membrane (the cornea); this distinguishes it from fingerprints, which can be difficult to recognize after years of certain types of manual labour. The iris is mostly flat, and its geometric configuration is controlled by only two complementary muscles (the sphincter pupillae and dilator pupillae), which control the diameter of the pupil; this makes the iris shape far more predictable than, for instance, that of the face. The iris has a fine texture that, like a fingerprint, is determined randomly during embryonic gestation: even genetically identical individuals have completely independent iris textures, whereas DNA (genetic "fingerprinting") is not unique for the roughly 1.5% of the human population who have a genetically identical monozygotic twin. While there are some medical and surgical procedures that can affect the colour and overall shape of the iris, the fine texture remains remarkably stable over many decades; identifications have succeeded over a period of about 30 years.
System Methodology and Design
The Circular Hough Transform is used to detect the presence of circular shapes in an image, for example counting circular discs, or coconuts, in an image (Ritter, 1999). It uses the parameterized equation of a circle, (x − a)² + (y − b)² = r², accumulating votes for candidate centres (a, b) and radii r. The maximum in the accumulator space is then found, and the circle with parameters (r, a, b) corresponding to that maximum is plotted. The circle obtained is the desired circle, with r as its radius and (a, b) as its centre.
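A compact vote-accumulation sketch of the procedure just described (illustrative Python, not the system's actual MATLAB code) is:

```python
import numpy as np

def circular_hough(edge_points, shape, radii):
    """Accumulate votes for circle centres (a, b) at each candidate
    radius r; the accumulator maximum gives the best-supported circle.

    edge_points: iterable of (x, y) edge-map coordinates.
    Returns (r, a, b) of the winning circle.
    """
    h, w = shape
    best_votes, best_circle = 0, None
    thetas = np.linspace(0, 2 * np.pi, 360, endpoint=False)
    for r in radii:
        acc = np.zeros((h, w), dtype=np.int32)
        for (x, y) in edge_points:
            # every centre at distance r from this edge point gets a vote
            a = np.round(x - r * np.cos(thetas)).astype(int)
            b = np.round(y - r * np.sin(thetas)).astype(int)
            ok = (a >= 0) & (a < w) & (b >= 0) & (b < h)
            np.add.at(acc, (b[ok], a[ok]), 1)
        if acc.max() > best_votes:
            best_votes = acc.max()
            b_, a_ = np.unravel_index(acc.argmax(), acc.shape)
            best_circle = (r, int(a_), int(b_))
    return best_circle
```

Only edge points vote, so the cost scales with the edge-map density times the number of candidate radii, which is why the gradient biasing and weak-edge elimination described later matter for speed.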
Image Acquisition
The first phase of the work was to collect a large database consisting of several iris images from various individuals. The CASIA Iris Image Database (CASIA-Iris), developed by the Chinese Academy of Sciences Institute of Automation research group, was used. Among the public and freely available iris image databases for biometric purposes, CASIA is the most widely used, with images captured under good positioning and illumination conditions. However, its images incorporate a few types of noise, almost exclusively related to eyelid and eyelash obstruction. CASIA images were deliberately taken for biometric work and standardized as such; in everyday biometric settings, where user non-cooperation is expected, far more heterogeneous images are obtained.
The User Interface
The user interface (front end) was designed using MATLAB. It combines all the separate tasks involved in the evaluation process in a single user-friendly interface, allowing a user to perform the various stages sequentially. The interface takes its input from a database of captured iris images, which serves as the set of registered or authorized individuals for the security system.
System Design
The design interface was implemented with MATLAB. The designs shown below give the stepwise method by which the system implements the segmentation and recognition processes; this helps to secure use of the system. Before any further step can be taken, the iris image has to be converted to grayscale: the original image is in RGB format, and the processing here operates on a 2-D image, so the iris image must be converted before further pre-processing. This leads to the iris normalization stage, which enhances the iris image for the subsequent stages of the recognition process. However, to keep the interface tidy and compact, the code for grayscale conversion is located inside the segmentation command.
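The grayscale conversion step can be illustrated with the standard luminance weighting that MATLAB's rgb2gray uses (ITU-R BT.601 coefficients); the Python rendering below is only a sketch of that single step:

```python
import numpy as np

def rgb2gray(rgb):
    """Luminance-weighted RGB-to-grayscale conversion, matching the
    BT.601 coefficients used by MATLAB's rgb2gray."""
    return rgb[..., 0] * 0.2989 + rgb[..., 1] * 0.5870 + rgb[..., 2] * 0.1140
```

The result is the 2-D array on which all subsequent edge detection and Hough-transform steps operate.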
The last stage is cross-referencing the template with the database iris images. At this stage of the recognition, the matching of the iris images is carried out, and the matched extracted feature of the iris image is displayed. The figure below shows the last stage of the recognition process after the image has been recognized. This stage marks the third evaluation area: the recognition time is examined at the end of this stage via the "Recognition Time" column of the pin interface.
The Segmentation Process
To detect the iris and pupil boundaries accurately, the circular Hough transform is applied first to the iris-sclera boundary and then to the iris-pupil boundary, since the pupil lies within the iris region. This involves Canny edge detection for generating an edge map. In performing the edge detection, gradients are biased in the vertical direction for the iris-sclera boundary, and in both horizontal and vertical directions for the iris-pupil boundary.
From the gradient image, weak edges are eliminated by thresholding. If an intensity value is greater than the high threshold, that pixel is considered an edge; if it is less than the low threshold, the pixel is considered a weak edge and eliminated. If the intensity value lies between the high and low thresholds, the average of its neighbouring pixels is taken, and if that value is greater than the low threshold, the pixel is considered an edge point. After eliminating weak edges, the radius and centre coordinates of the iris and pupil are calculated from the edge map by applying the circular Hough transform.
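This two-threshold edge selection is essentially the hysteresis step of the Canny detector. The sketch below uses the common connectivity-based variant (keep weak pixels only if connected to a strong edge) rather than the neighbour-averaging rule described in the text; names and thresholds are illustrative:

```python
import numpy as np
from collections import deque

def hysteresis(grad, low, high):
    """Keep strong edges (>= high) plus weak pixels (>= low) that are
    8-connected to a strong edge; everything below low is discarded."""
    strong = grad >= high
    weak = (grad >= low) & ~strong
    out = strong.copy()
    q = deque(zip(*np.nonzero(strong)))   # seed BFS from strong pixels
    h, w = grad.shape
    while q:
        y, x = q.popleft()
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and weak[ny, nx] and not out[ny, nx]:
                    out[ny, nx] = True    # weak pixel joins the edge
                    q.append((ny, nx))
    return out
```

Either variant serves the same purpose here: thinning the edge map so that fewer spurious points cast votes in the Hough accumulator.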
Eyelids were isolated by first fitting a line to the upper and lower eyelids using the linear Hough transform. A second horizontal line is then drawn intersecting the first line at the edge of the iris closest to the pupil, so as to isolate the maximum eyelid region. These lines lie exterior to the pupil and interior to the iris. This process is shown in Figure 3.6 and is done for both eyelids (top and bottom). Canny edge detection is used for generating the edge map, and only horizontal gradient information is taken when detecting the eyelids. If the maximum value in the Hough space is less than the threshold, no lines are drawn.
Results and Discussion
Iris segmentation is a key step in an iris recognition system, and its result determines the system's efficiency and accuracy. Segmentation is the most critical stage of the iris recognition system because data that is wrongly represented as iris pattern will corrupt the generated biometric template.
The system was implemented using MATLAB 7.0 in a Windows 7 operating system environment on an Intel Pentium processor at 2.13 GHz with 1 GB of RAM. In this approach, 120 grayscale iris images of 20 individuals from the Chinese Academy of Sciences Institute of Automation (CASIA) database were used. Evaluations were made at two different points in the course of the system execution. The first was at the segmentation point, where the segmentation time of the technique under examination was observed and recorded.
The second area of evaluation takes place at the point of template matching meant for observing and recording the recognition time.
The evaluation tests were made on the various iris images available in the database, ensuring that each of them was examined using the hybrid segmentation technique. For the CASIA database with 20 individuals, six (6) iris images were used per person, three (3) of the left eye and three (3) of the right, for a total of 120 iris images. The False Acceptance Rate (FAR) is the probability of identifying an impostor as an enrolled user; for every inter-class match considered with CASIA, a total of 119 × 6 comparisons were done on each eye sample, and a common threshold value obtained from analysis of each iris image was used to evaluate the performance of the algorithm. Matching time describes the average time taken by the MATLAB program to compare the digital codes of the iris images at the matching stage alone, while average recognition time measures the average total time taken to segment, normalize, extract and encode the features of a chosen iris and to match it against a chosen set of templates. For intra-class recognition, a chosen iris in a class is compared only with the other samples in its class, from the first subject to the last; since these images are from the same people, the likely error here is a False Rejection Rate (FRR). This investigation was repeated using different Hamming distances in order to arrive at optimum settings for the recognition system.
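The FAR/FRR evaluation at a given Hamming-distance threshold can be sketched as follows, with hypothetical score lists standing in for the measured intra-class (genuine) and inter-class (impostor) distributions:

```python
import numpy as np

def far_frr(genuine, impostor, threshold):
    """FAR: fraction of impostor pairs accepted (distance <= threshold).
       FRR: fraction of genuine pairs rejected (distance > threshold)."""
    genuine = np.asarray(genuine)
    impostor = np.asarray(impostor)
    far = np.mean(impostor <= threshold)
    frr = np.mean(genuine > threshold)
    return far, frr

def sweep(genuine, impostor, thresholds):
    """Evaluate FAR/FRR over a range of Hamming-distance thresholds to
    locate the operating point where the two error curves cross."""
    return [(t, *far_frr(genuine, impostor, t)) for t in thresholds]
```

Sweeping the threshold is exactly the "repeated using different Hamming distances" step: the chosen operating point trades FAR against FRR, and their crossover corresponds to the intersection region of the two distributions.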
Conclusion
In this work, a new hybrid segmentation system was developed and tested on CASIA irises (the most widely used database). The system performance was evaluated in terms of segmentation time, recognition time, False Acceptance Rate (FAR), False Rejection Rate (FRR), and inter-class and intra-class matching scores. The average segmentation time is 0.21089 seconds and the average recognition time is 0.04113 seconds, which is acceptable for an authentication system. It was observed that segmentation is the most critical stage of an iris recognition system, because data wrongly represented as iris pattern corrupts the generated biometric template; with the CASIA database of 20 individuals, 99% of the images were segmented properly. The method gave a recognition rate of 99.9%, with false acceptance and false rejection ratios of 0.00877 and 0.100 respectively, and also consumes less time in template matching.
Figure 2.2: Illustration of the normalization process for two images of the same iris taken under varying conditions. Top image "am201b", bottom image "am201g" from the LEI database. Source: Daugman (2003).
Paper ID: NOV161238 | ISSN (Online): 2319-7064 | Volume 5 Issue 2, February 2016 | www.ijsr.net | Licensed Under Creative Commons Attribution CC BY
Figure 2.3: Block diagram of the stages of an iris recognition system (Wolff, 1976).
(x − a)² + (y − b)² = r²  (3.1)
where (x, y) are the points on the circumference of the circle, (a, b) is the centre of the circle and r is its radius. The parametric form can be written as x = a + r cos θ (3.2) and y = b + r sin θ (3.3). The Circular Hough Transform uses these equations to compute the CHT of a circle and detect its presence. The algorithm for the CHT is as follows: read an image file; find the edges in the image; define a range of radii to be used; for each edge point, draw a circle in the accumulator space with that edge point as centre and radius r, incrementing by 1 the votes of all coordinates that coincide with the circumference of the drawn circle; repeat for every edge point and for every radius in the range; finally, find the maximum in the accumulator space.
Figure 3.2: The system login interface. The implementation of this work was done through the design of the interfaces shown below.
Figure 3.3: The user interface at the first stage of execution. The first step of the algorithm is to input an iris image, specifically a CASIA iris image; these images are stored in the database, which the system can browse to locate an image for recognition. In MATLAB, a behind-the-screen reading task uses the command imread('image', file extension), and imshow('image', file extension) shows the iris image in the tab meant for it on the interface.
Figure 3.4: The interface with an iris image read into the system for recognition.
Figure 3.5: The interface of a recognized iris image and process.
Figure 3.6: An iris image and the circular-Hough-transform segmented image.
Figure 4.1a: Average segmentation time (segmentation time plotted against the number of tested irises).
Figure 4.1b: Average recognition time (recognition time plotted against the number of tested irises).
Figure 4.2a: Inter-class matching, plotting the Hamming distance against the false acceptance rate (FAR).
Figure 4.2b: Intra-class matching, plotting the Hamming distance against the false rejection rate (FRR).
Figure 4.3: Combined graph of FAR and FRR. The area of intersection between the two distributions represents errors in the system performance.
Table 4.1: Segmentation results of 15 iris images from 15 different individuals.
Table 4.2: Average segmentation and recognition time, each computed over all tested irises.
Tables 4.1 and 4.2 clearly show that the developed hybrid segmentation technique for iris recognition, using the circular and linear Hough transform algorithms, has fast segmentation and recognition times. The status column of the table shows that all 20 segmented irises were recognized when matched, which is adequate for iris authentication.
"year": 2016,
"sha1": "18260751c3e5c324fd73690799f8d140a6e2b6a8",
"oa_license": null,
"oa_url": "https://doi.org/10.21275/v5i2.nov161238",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "18260751c3e5c324fd73690799f8d140a6e2b6a8",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": []
} |
Axion dark matter detection by laser induced fluorescence in rare-earth doped materials
We present a detection scheme to search for QCD axion dark matter, based on a direct interaction between axions and electrons explicitly predicted by DFSZ axion models. The local axion dark matter field shall drive transitions between Zeeman-split atomic levels separated by the axion rest mass energy m_a c². Axion-related excitations are then detected with an upconversion scheme involving a pump laser that converts the absorbed axion energy (~hundreds of μeV) to visible or infrared photons, where single photon detection is an established technique. The proposed scheme involves rare-earth ions doped into solid-state crystalline materials, with optical transitions taking place between energy levels of the 4f^N electron configuration. Beyond discussing theoretical aspects and requirements to achieve a cosmologically relevant sensitivity, especially in terms of spectroscopic material properties, we experimentally investigate backgrounds due to the pump laser at temperatures in the range 1.9-4.2 K. Our results rule out excitation of the upper Zeeman component of the ground state by laser-related heating effects, and are of some help in optimizing activated-material parameters to suppress the multiphonon-assisted Stokes fluorescence.
m_a ≃ 5.7 μeV × (10¹² GeV / f_a), where f_a is the Peccei-Quinn symmetry-breaking energy scale, inversely proportional to the coupling strengths with standard model particles [6,7]. A light and stable axion emerges as an ideal DM candidate if large f_a are considered. Owing to the resulting huge occupation number, galactic halo axions can be described as a classical oscillating field a, with oscillation frequency ν_a = m_a c²/h [14]. The axion mass range 10⁻⁶ eV < m_a < 10⁻³ eV has since long been favoured by astrophysical and cosmological bounds [15], while very recent high-temperature lattice QCD calculations suggest that m_a ⩾ 50 μeV [16]. The axion is intensively searched for in haloscope experiments [17], mostly based on resonant axion-photon conversion in a static magnetic field via the Primakoff effect. The Axion Dark Matter eXperiment (ADMX) is the most sensitive haloscope detector, based on high-quality-factor microwave resonators at cryogenic temperature.
The axion-electron coupling, explicitly predicted by DFSZ models [11-13], can be exploited to envisage another class of haloscopes, thereby providing the opportunity to discriminate among axion models in case of detection. Complementary approaches may prove crucial to determine the fractional amount of DM made of axions. For instance, inhomogeneously filled cavities, in which the effective axion field is converted to magnetization oscillations of a ferrimagnet, are under study [21]. In this case single photon detection is required; it can be realized by, e.g., superconducting circuit devices acting as quantum bits properly coupled to the cavity photons [22,23], but as yet their dark count rate still exceeds the expected axion interaction rate.
The approaches described so far suffer from extremely poor sensitivity for axion masses above 0.2 meV (~50 GHz), where the effective detector volume is a critical issue. Extension to the mass range up to 1 meV (250 GHz) may rather be accomplished in suitable condensed matter experiments, in which the parameter space hardly accessible to cavity technology could be tackled with the upconversion scheme investigated in this work, whereby cosmological axions cause transitions between Zeeman-split levels of suitable atomic species.
As target atoms we consider rare-earth (RE) elements inserted as dopants in crystalline matrices, where they exist as trivalent ions, substitutional for one of the atoms of the host with the same valence state and similar ionic radius. Among RE ions, those with an odd number of 4f electrons are called Kramers ions [24]; they have electronic doublet levels with magnetic moments of the order of 1-10 Bohr magnetons μ_B. Therefore, using Kramers doublets, axion-induced spin transitions can take place in the GHz range with application of moderate magnetic fields. For instance, in Er³⁺ the calculated splitting spans from 20 to 120 GHz for applied magnetic fields in the interval 0.4 to 2.5 T [25], which translates to a large tunability in the favoured cosmological axion mass window.
In the direct axion-electron coupling [26,27] the interaction energy is (g_ae/2e) ∇a · μ⃗, where the term (g_ae/2e) ∇a plays the role of an effective oscillating field, μ⃗ is the magnetic moment of the electron of charge e, and g_ae is the coupling constant [17]. The resonant condition is met when the Zeeman splitting energy equals m_a c². As schematized in Fig. 1(a), the axion excitation is upconverted by a pump laser to photons in the visible or infrared range, where single photon detection with ultra-low dark count rate has already been demonstrated [28-30]. The proposed detection scheme is based on electronic transitions between states of the 4f^N configuration of the trivalent RE, with the positions of the discrete energy levels minimally perturbed by the crystal field thanks to the screening action of the 5s and 5p orbitals [24]. It is immediately evident that a first requirement for the feasibility of such a scheme is related to the linewidth of the transition driven by the laser, which must be narrower than the energy difference between the atomic levels |0⟩ and |i⟩. Detectability of axions in this scheme can at first be discussed by considering only the thermal excitation of the atomic level as the fundamental noise limit. Backgrounds of a different nature are left for the experimental investigations in the second part of this work.
We consider one mole of target atoms in the ground state |0⟩ and, using Eq. 8 of ref. 31, we establish the transition rate to the level |i⟩ by axion absorption on resonance, R = N_A R_i, where R_i is the transition rate of a single target atom, N_A is the Avogadro number, ν_a = E_a/h is the axion frequency, g_i is the coupling strength to the target atom (of order one [31]), and ⟨v²⟩ is the mean square of the axion velocity. The value 330 μeV (80 GHz) is the midpoint of the Zeeman splitting frequency interval reported for Er³⁺ in ref. 25. Since in the considered galactic halo model axions are the dominant component of dark matter in the Galaxy, we take for their energy density ρ_a the value 0.4 GeV/cm³ obtained from galactic rotation curves.
The experiment coherence time is set by min(t, τ, τ_∇a), where t is the measurement integration time (inverse of the resolution bandwidth), τ is the lifetime of the excited level, and τ_∇a is the axion gradient coherence time at the resonant frequency of the experiment, which can be calculated from the axion coherence time τ_a = h/(E_a ⟨v²⟩/c²) [32]. The latter is related to the width of the axion kinetic energy distribution in the laboratory frame, assuming a Maxwellian velocity distribution in the Galactic rest frame; the corresponding merit factor Q_a ≡ ν_a τ_∇a ≃ 2.19 × 10⁶ qualifies the axion linewidth, as in microwave haloscope experiments.
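Taking the quoted merit factor at face value, the coherent integration window at a given resonant frequency follows directly. The function name is ours and Q_a = 2.19 × 10⁶ is simply the value quoted above:

```python
def gradient_coherence_time(nu_a, Q_a=2.19e6):
    """tau = Q_a / nu_a: coherence time of the axion gradient field at
    the resonant frequency nu_a (Hz), assuming Q_a = nu_a * tau."""
    return Q_a / nu_a

# at nu_a = 80 GHz the coherent window is a few tens of microseconds,
# far shorter than the millisecond-scale level lifetimes quoted below
```

This is why the level lifetime τ, rather than τ_∇a, is typically the quantity one tries to maximize.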
The lifetime τ of the Zeeman excited state is typically much longer than τ_∇a, and in the rare-earth doped materials considered in this work it depends strongly on temperature, intensity of the static magnetic field, and dopant concentration [33-36]. The magnetic field, beyond splitting degenerate levels and thus opening a channel for resonant axion detection, may also inhibit spin flips and thus increase the lifetime τ of the intermediate level. Lifetimes much longer than a millisecond have been measured in several rare-earth activated optical materials at liquid helium temperature, with magnetic fields from values comparable to those used in this work (~0.5 T) up to about 3 T [34]. Incidentally, for a given pump laser intensity, the efficiency of the mentioned upconversion process is greater for longer τ, thus allowing mitigation of the laser power requirements when large detecting volumes are devised [37].
As one might expect, the experiment must be operated in an ultra-cryogenic environment to minimize the thermal population of the Zeeman excited level. To establish the working temperature of the apparatus, we treat the pumped crystal as if it were a single photon detector with overall efficiency η = 0.5 (including the efficiency of upconversion, the fluorescence collection efficiency and self-absorption) and consider thermal excitation of the target ions. For a given temperature T of the doped crystal, the corresponding excitation rate R_t = N_a/τ is set by the lifetime τ of the Zeeman excited level, with N_a ≃ N_A exp(−E_a/kT) the average number of thermally excited ions in the energy level E_a, and k the Boltzmann constant. It is worth noticing that the contribution of adjacent Stark sublevels (due to interaction with the crystalline field) can be neglected when their energy is much higher than E_a, as is the case analyzed in this work. To ensure a signal to noise ratio (SNR) of 3 with a statistically significant number of counts within one hour of observation time [38], the thermal excitation rate must settle to R_t = 6 × 10⁻³ Hz. If a level lifetime of τ = 1 ms is taken, axions with mass greater than 80 GHz can be searched for, provided the active crystal is cooled down to at least 57 mK.
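The operating-temperature estimate can be reproduced numerically from the rate expression above. The sketch below assumes one mole of target ions, τ = 1 ms, ν_a = 80 GHz and the quoted rate limit, as in the text; the function names are ours:

```python
import math

h = 6.62607015e-34    # Planck constant, J s
kB = 1.380649e-23     # Boltzmann constant, J/K
NA = 6.02214076e23    # Avogadro number

def thermal_rate(T, nu=80e9, tau=1e-3, N=NA):
    """Thermal excitation rate of the upper Zeeman level,
    R_t = (N / tau) * exp(-h*nu / (kB*T))."""
    return (N / tau) * math.exp(-h * nu / (kB * T))

def required_temperature(R_max=6e-3, nu=80e9, tau=1e-3, N=NA):
    """Invert R_t for the crystal temperature at which R_t = R_max."""
    return h * nu / (kB * math.log(N / (tau * R_max)))
```

With the stated inputs the inversion lands near 57 mK, consistent with the value quoted in the text; the logarithm makes the requirement only weakly sensitive to the exact choices of N, τ and R_max.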
Results
We demonstrate here that we are able to resolve the involved transitions in a rare-earth doped material, which is a prerequisite for the feasibility of the proposed detection scheme. We then focus our experimental investigations on possible backgrounds induced by the pump laser at cryogenic temperature and sub-Tesla magnetic field. The samples utilized in these measurements have 1% and 0.01% Er³⁺-dopant nominal concentrations, corresponding to 1.4 × 10²⁰ and 1.4 × 10¹⁸ ions/cm³, with three 4f electrons of each ion available as axion targets. Crystals with dopant concentrations well below 0.1% have been the subject of much scientific investigation for photon-echo-based optical data storage and data processing, owing to their narrow 4f-4f transition linewidths and long optical coherence times (see ref. 34 and references therein). In this work we are interested in the behaviour of higher concentration samples, to maximize the axion interaction rate given by Eq. (2) for a given laser-pumped, active detector volume. Moreover, the 1% concentration samples allow for higher sensitivity to laser-related backgrounds in the measurements described in this section. The splittings of hundreds of μeV involved here belong either to the ground state or to the excited level ⁴I₉/₂. We accomplish this task by laser excitation of the thermal population in the Zeeman-split first excited Stark level of the ground state. Independently of this limitation, the plots in Fig. 2 demonstrate that we are able to resolve the Zeeman splitting, and therefore that it is possible to monitor the population of the upper Zeeman component of the ground state. Clearly, at T = 2 K thermal excitation of the level still prevents us from assigning a detection sensitivity to the present apparatus, but before cooling the sample to temperatures of hundreds of mK, a thorough investigation of the pump-laser-related noise is accomplished, as described in the following sections.
The LIF measurements in Fig. 2 have been repeated with the 1% concentration sample. In this case the Zeeman transition is hardly resolved, owing to the increased transition linewidths ascribable to spin cross-relaxation processes caused by direct interactions among Er³⁺ ions [36,39]. However, this limitation might be overcome in the high-magnetic-field, low-temperature regime required to achieve ultimate sensitivity in the proposed axion detection scheme. For instance, in a 0.1% concentration sample of YLiF₄:Er³⁺, authors have investigated the four transitions connecting the Zeeman sublevels of the ground and lowest ⁴F₉/₂ excited states and demonstrated that their linewidth can be as narrow as ~1 MHz [40]. The applied magnetic field was about 3 T, and the measurements were conducted below 4 K by the Zeeman-switched optical-free-induction-decay technique. These results, together with our findings, foster the development of a few-liter detector with intermediate-concentration active materials, matching the axion-induced transition rate of Eq. (2) to the dark count rates of available single photon counters [28-30]. As a final remark, we note that an intermediate-concentration sample would increase the number of axion-electron interactions by six orders of magnitude compared to a gaseous target prepared by buffer-gas cooling techniques [41].
Laser-induced thermal noise. To assess heating effects in the active detector volume, we focus on the population of the first excited Stark (crystal-field) sublevel ⁴I_{15/2,15/2}, which has a strong thermal coupling with the ground energy level. To enhance the sensitivity of our tests, we use the 1% concentration sample. The crystal-field splittings of Er³⁺ ions in YLiF₄ have been calculated and measured by previous authors 42,43; the thermal population of the sublevels follows Boltzmann statistics, with the partition sum well approximated by its first two terms. As the fluorescence intensity F is proportional to the laser intensity I times the level occupation number n, we model a possible heating effect through an effective temperature T + βI in the expression F = αI n(T + βI), where α, β are empirical parameters determined from a fit to the data.
From the data shown in the inset of Fig. 3, we infer that the β parameter is compatible with zero within one standard deviation, which allows us to limit the temperature increase to less than 4.5 mK/(W/cm²). We stress that this limit is obtained in an unfavorable upconversion scheme, where de-excitation also takes place through nonradiative channels, as shown in Fig. 1(c). Therefore we can assign the temperature of the thermal bath to the entire crystal and calculate the ratio of the populations of the same Stark level at two different temperatures T₁ and T₂. This ratio is then compared to the LIF peak areas.
As shown in Fig. 3, with the pump laser tuned to the transition ⁴I_{15/2,15/2} → ⁴I_{9/2,9/2}, we obtain peaks that differ only in their area for T₂ = 2.161 K and T₁ = 1.932 K. The Er(1%):YLF crystal is immersed in superfluid He, and these points are obtained operating below the λ-point, where bubbling disturbances are eliminated. A small satellite line is present on the right side of the main peak at both temperatures, which hinders an accurate fitting of the data. Therefore we compare the areas of the main peaks at T₂ = 2.161 K and T₁ = 1.932 K by summing the amplitudes of the data recorded at four wavelengths around resonance. The ratio of 3.6 ± 0.3 is in agreement with the expected value, confirming the assumption made in the introduction to calculate the rate of excited atoms via the thermal bath temperature.
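As an illustrative consistency check (a sketch, not the authors' analysis), the peak-area ratio and bath temperatures quoted in the text can be inverted for the energy of the emitting Stark sublevel, assuming a simple Boltzmann factor n ∝ exp(−ΔE/k_BT) and neglecting the weak temperature dependence of the partition sum:

```python
import math

k_B = 8.617333e-5  # Boltzmann constant, eV/K

# Inputs quoted in the text: peak-area ratio and the two bath temperatures (K).
ratio, T1, T2 = 3.6, 1.932, 2.161

# n(T2)/n(T1) = exp(-dE/k T2)/exp(-dE/k T1)  =>  dE = k_B ln(ratio) / (1/T1 - 1/T2)
dE = k_B * math.log(ratio) / (1.0 / T1 - 1.0 / T2)  # eV
print(f"Inferred sublevel energy: {dE*1e3:.2f} meV (~{dE/1.2398e-4:.0f} cm^-1)")
# about 2 meV, i.e. a few tens of cm^-1, a plausible scale for a Stark splitting
```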
In addition, we confirm experimentally (Fig. 4) that the population of the Zeeman-split levels also follows Boltzmann statistics. In this case, the pump laser is set to probe the populations of the Zeeman-split levels of the first excited Stark sublevel. By pumping the transitions to the Zeeman levels of the ⁴I_{9/2,9/2} state 43, the previously measured energy differences (Fig. 2) can also be precisely identified. In fact, from the wavelengths reported in the first column of Table 1, we obtain the splitting of the first excited Stark level ΔE′ ≡ ΔE′₁₂ = ΔE′₄₃ = 75.6 μeV, where the indices are assigned as described in the inset of Fig. 4. The latter value is consistent with the average of ΔE = 73.9 ± 3.1 μeV and ΔE = 77.7 ± 4.2 μeV obtained from the data in Fig. 2, yielding the searched splitting of the ground state. To further confirm the proper identification of the Zeeman-split levels, we can use the ratios of the g factors reported for the same material oriented with its c-axis parallel to the magnetic field 43. We obtain ΔE/ΔE′ = 0.39, in agreement with the value expected from the reported g factors.
That concludes our LIF measurement of the ground-level splitting. As for the investigation of possible laser-induced deviations from Boltzmann statistics, we consider the peak areas of the LIF measured for these levels. The data displayed in Fig. 4 are fitted to Lorentzian curves; the expected Boltzmann behavior is also plotted for comparison.
A similar exponential behaviour has been previously reported in Er:YLF and has been explained in terms of multiphonon-assisted, side-band absorption 44. The RE manifolds E₁,₂ can in fact be excited even by a nonresonant pump photon with E₁ < E < E₂, when the missing/excess energy is bridged by absorption/emission of phonons via anti-Stokes and Stokes processes, respectively. The related absorbed intensity is theoretically given by the sum of the Stokes and anti-Stokes contributions.

Table 1. Lorentzian fit of the data reported in Fig. 4. The parameters λ_c, A, ω (center, area, and width, respectively) are expressed in nm, (nm · mV), and pm, respectively. Errors on the peak areas are assigned by considering the error on the measured background at T = 1.93 K.
where α_S and α_AS are the Stokes and anti-Stokes coefficients, described in the model 45 through the expressions of Eq. (6), in which ω_eff is the crystal effective phonon energy, p is the number of phonons needed to bridge the energy gap, n̄ is the average phonon occupation number, and S₀ is the Huang-Rhys coefficient representing the electron-phonon coupling strength. Typical values of ω_eff are smaller than 200 cm⁻¹ in bromides and greater than 400 cm⁻¹ in oxides 46, and for YLF the value 400 cm⁻¹ is reported.
The strong suppression of the LIF observed in our 4.2 K data as soon as the pumping wavelength differs from the pure electronic transition wavelength (Fig. 5) is ascribable to the expected suppression of the anti-Stokes component with temperature (from Eq. (7)), while the exponential growth for increasing wavelengths is mainly due to the Stokes process. Fitting the data at wavelengths greater than 850 nm gives an absorption coefficient α_S = 9.2 × 10⁻³ cm⁻¹, in agreement with the value reported in ref. 47. Ground-state absorption measurements allow us to estimate a 2 × 10⁻²⁰ cm² cross section for the pure electronic transition (circled data in Fig. 5) and thus to quantify the upconversion efficiency and the relative amplitude of the multiphonon side band. This type of background hinders the application of the present scheme to axion detection, unless a suitable combination of rare-earth dopant, pumping pathway, and matrix is chosen. In particular, relevant suppression of the background should be accomplished in low-phonon-energy host matrices 46 or, as suggested by Eq. (5), by exploiting pumping schemes with larger E − E₁. It is worth mentioning that an ultimate laser-induced background might also originate from impurity absorption, the same process that is currently limiting the efficiency of optical refrigeration 48-50.
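To see why the anti-Stokes channel freezes out at liquid-helium temperature, one can evaluate the Bose-Einstein phonon occupation for the effective phonon energy quoted for YLF. This is a minimal sketch of the temperature dependence only, not the full multiphonon model of refs 44,45:

```python
import math

def phonon_occupation(omega_cm, T):
    """Bose-Einstein mean occupation n_bar for a phonon of energy omega_cm (cm^-1) at T (K)."""
    K_PER_CM = 1.4388  # conversion factor: 1 cm^-1 corresponds to 1.4388 K
    x = omega_cm * K_PER_CM / T
    return 1.0 / math.expm1(x)

# Effective phonon energy reported for YLF: ~400 cm^-1.
for T in (300.0, 77.0, 4.2):
    print(f"T = {T:5.1f} K  ->  n_bar = {phonon_occupation(400.0, T):.2e}")
# Anti-Stokes (phonon-absorption) sidebands scale with n_bar and are therefore
# utterly negligible at 4.2 K, while Stokes terms (proportional to n_bar + 1) survive.
```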
Discussion
We have discussed a solid-state approach for direct detection of axion dark matter and established the most important experimental parameters necessary to reach cosmologically relevant sensitivity in DFSZ models. The effect of the continuous, coherent axion field is searched for in the excitation of the upper Zeeman component |i〉 of the ground state of rare-earth ions in crystalline matrices, at the energy scale ΔE_g, corresponding to transitions in the ∼100 GHz range. The population of this excited level is probed by a pump laser tuned to the transition to a fluorescent level within the same 4f atomic configuration, so as to convert the axion excitation into photons detectable with state-of-the-art single-photon detectors. Assuming thermal excitation of the excited Zeeman level as the fundamental noise limit, the active detector volume must be cooled down to ultracryogenic temperatures. The rate of thermal excitation of the atomic level |i〉 is directly related to its lifetime τ, and a temperature below 0.3 K, at which the final experiment must be performed, has been estimated for 80 GHz axion-induced transitions and τ = 1 ms by requiring an SNR of 3. As long as τ ⩾ 1 ms, a high upconversion efficiency is also ensured for tens of W/cm² pumping intensity.
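To make the cooling requirement concrete, the Boltzmann factor of the 80 GHz Zeeman gap can be compared at a few bath temperatures. This is an order-of-magnitude sketch; the actual noise estimate in the text also involves the level lifetime τ and the detector volume:

```python
import math

H_OVER_K = 4.7992e-11  # Planck constant / Boltzmann constant, K/Hz

def boltzmann_factor(nu_hz, T):
    """Relative thermal occupation exp(-h*nu/(k_B*T)) of the upper Zeeman level."""
    return math.exp(-H_OVER_K * nu_hz / T)

nu = 80e9  # 80 GHz ground-state Zeeman splitting
for T in (2.0, 0.3, 0.1):
    print(f"T = {T:4.1f} K  ->  exp(-h nu / k T) = {boltzmann_factor(nu, T):.2e}")
# The thermal population drops by ~5 orders of magnitude going from 2 K to 0.3 K,
# which is why sub-0.3 K operation is required for the final experiment.
```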
In the proposed scheme it is important to carry out a thorough experimental study of pump-laser-related backgrounds. As a first step we have probed the population of atomic levels close to the ground state via LIF measurements in the temperature range 1.9-4.2 K. Our main finding is that the pump laser does not affect the thermal population of the Zeeman excited level, at least up to a few W/cm² intensity. In addition, we have shown that it is crucial to optimize the pumping pathway and crystal properties to minimize scattering of the pump photons on crystal phonons (Stokes process).
As for the detection scheme via laser-induced fluorescence, at 4.2 K and with a 370 mT magnetic field, we have demonstrated that the four transitions coupling the Zeeman levels of the ground and excited ⁴I₉/₂ states can be resolved in the lowest-concentration YLiF₄:Er³⁺ sample (0.01%). This was not possible in the 1% sample. However, the spin population dynamics in Kramers ions strongly depends on applied magnetic field, temperature, dopant concentration, and species, and we argue that a tradeoff among these parameters can be found to make the proposed experiment feasible. A few-liter active volume ensures ~mHz axion and thermal transition rates, corresponding to statistically relevant counts of upconverted photons in a measurement time of a few hours. Fortunately, the dark count rate of state-of-the-art single-photon detectors remains below the transition rate in the detector active volume.
We are witnessing a blooming of table-top experiments pursuing new observables for axion DM direct detection [51][52][53][54][55][56][57] . In such a multifaceted, dynamic scenario, our complementary proposal aims to probe the uncovered few hundred μeV axion mass region by exploiting the axion-electron interaction predicted in the DFSZ models.
Methods
The 5 × 5 × 5 mm³ Er³⁺:YLiF₄ crystals used in this work were grown by the Czochralski method. The starting raw materials were powders of LiF, YF₃ and ErF₃ with 5N purity (99.999%), provided by AC Materials (Tampa, FL, USA). They have 1% and 0.01% Er³⁺-dopant concentration (atomic percent substitution for Y³⁺). The growth was carried out in a high-purity (5N) argon controlled atmosphere. The pulling rate was 0.5 mm/h, the rotation rate 5 rpm, and the temperature of the melt was computer-controlled around 880 °C to maintain a constant boule diameter; the seed was an undoped YLF monocrystal oriented along the c axis. The resulting crystals were of good quality, free of internal cracks, microbubbles, or inclusions. Structural analysis by Laue X-ray diffraction confirmed their monocrystalline nature and allowed orientation of the boules along the crystallographic axes.
To allow for Zeeman studies at LHe and superfluid-He temperatures, the samples were housed in an immersion dewar located between two NdFeB magnetic discs that produced a field of 370 mT at the sample position. The c-axis of the crystal was parallel to the magnetic field direction. As shown in Fig. 1(b), the sample fluorescence is collected orthogonally to the pump-laser propagation direction and coupled to the photon detector by means of a mirror M and a quartz guide. Optical filters suppress scattered pump radiation at the signal wavelengths, and an InGaAs photodiode detects the 1.5 μm component of the overall infrared fluorescence spectrum (see Fig. 1(c)). The optical source is a cw Ti:sapphire laser, which can be finely tuned by rotating intracavity etalons. Zeeman studies are conducted around 809 nm wavelength, while laser-induced backgrounds are investigated at 810.1 nm. The laser linewidth is ⩽2 GHz (∼1-2 pm), comparable with the widths of the detected transitions. The incident light polarization angle is varied by means of a half-wave plate. A typical laser intensity used in our measurements is 10 W/cm², compatible with 0.1 upconversion efficiency in trivalent ions 37. For the laser-noise studies the pump laser was chopped at 15 Hz to allow phase-sensitive detection.
Experimental Study and Numerical Simulation of Plane Steel Frame with Rubberized Connecting Technology Subjected to Seismic Effect
This paper presents an experimental and numerical study of the behavior of rubberized steel frame connections. A single-bay, one-story frame without elastic buckling is cyclically tested. The experimental specimens are simulated and analyzed with the ABAQUS program. Four plane steel portal frame specimens are investigated under horizontal reversed cyclic loads. The specimen connections are developed by using different diameters of composite steel/rubber bolts instead of conventional steel bolts to connect the beams to the columns. The yield and ultimate strengths, ductility, envelope curves, and damping ratios of these specimens are analyzed and compared. The finite element method is used to model and verify the results of the laboratory tests. The experimental and numerical results show a large load-carrying capacity, a reduction in stresses, excellent ductility and energy dissipation capacity, and a remarkably improved damping ratio.
INTRODUCTION
Earthquakes are a geophysical phenomenon in which waves make the ground rise and fall. These waves travel to the foundations of buildings and move them; the body of the building responds to the waves, moves out of phase with the foundation, and failure occurs. Iraq lies in a seismic zone; therefore, it is becoming necessary to study how to reduce the effect of seismic action on buildings. There are several methods to reduce the effect of seismic loads on buildings, such as shear wall systems, isolation systems, and damper systems. This study aims to develop the beam-to-column connections of steel buildings according to the principle of the isolation system. Failure of joints is the most common and most frequently repeated type of failure in steel structures: steel members can be designed very accurately, while the design of steel joints is more complex, with calculations based on the maximum load and the joint then designed for the maximum possible strength. In general, the joints fail first in the event of an unexpected force, because any steel member can resist secondary loads since the steel section is formed in one piece, whereas a joint behaves like a brittle material that takes some loads but not all. Therefore, the idea of studying and developing the joints of steel buildings through an innovative rubberized connecting technology capable of absorbing and dissipating these loads was the main objective of this research. The beams are connected to the columns by composite steel/rubber bolts, made by covering conventional steel bolts with rubber of different thicknesses, the rubber being taken from waste tires. Several previous studies have addressed earthquakes and shear tab connections, such as [1], which examined the effect of seismic loads by applying cyclic loads at the top ends of steel frames with shear tab connections.
That study found a new approach to resist seismic effects by creating gaps between the steel frame and the shear wall. In [2], the behavior of full-scale tests of common shear tab connections under combined load was investigated. Failure modes, ductility limits, and connection capacities were presented and discussed, and the results were used to develop a practical approach for assessing the performance of beam-column connections under column removal scenarios. Bolt tear-out was the governing failure mode of the shear tab connections under combined load. A shear connection with an additional seat resulted in significantly lower rotation, higher moments, and improved overall capacity at the ultimate limit state. In [3], a numerical study using the ABAQUS program determined the moment-rotation relationship of shear-tab-type connections. That study argued that the idealization of assuming shear tabs resist no moment is incorrect when computing the lateral stiffness of a steel frame. Schroeder estimated the initial stiffness of the connections, in a footnote, at 2-12%, and the results showed that if the connection was treated as partially restrained, the inter-story drift of the structure might decrease by 22% compared with the case where shear tabs are assumed to be perfect pin connections.
EXPERIMENTAL PROGRAM
Four plane steel frames were designed and fabricated according to the AISC 14th edition [4], each consisting of two vertical HW125×125 steel columns of length 1500 mm and one horizontal IPE160 beam of length 1000 mm. The columns were connected to the beam by single shear tab connections of 110 mm dimension, welded to the columns on one side and bolted to the beam web on the other. One column was fixed and the other pinned. Two bolts were used in each joint of the steel frame to connect the shear tabs to the beam. FR1 was the reference frame: its columns were bolted to the beam with plain steel bolts, while in the other frames (FR2, FR3, and FR4) the columns were connected to the beam by composite steel/rubber bolts of different diameters. Holes in the shear tab connections were drilled to suit the diameters of the composite steel/rubber bolts, using standard, short-slotted, oversize, and long-slotted holes.
I. Material
This study includes the following materials. Mild carbon steel HW125×125 columns conforming to the Chinese specification; the mechanical properties of these sections were found according to the standard test method [5] at the laboratories of Diyala Company-Engineering Industries, as shown in Table I. The mild carbon steel IPE160 beam conforms to European specifications; its mechanical properties were established by tensile testing according to the standard test method [5], as shown in Table I. Single shear tab connections were welded to the columns and bolted to the beams; the yield and ultimate loads for these plates are listed in Table I. Two steel bolts were used to connect the shear plate to the beam web on each side of the specimen connections. These bolts were A307-N, 7 mm × 35 mm. Two washers were used for each bolt: one positioned between the bolt head and the shear plate, the other between the bolt nut and the beam web. Washers are normally used in connections to prevent damage to the connection face and to distribute the load under the bolt head and nut. The tensile mechanical properties of the steel bolts were found according to the Standard Specification for Carbon Steel Bolts and Studs [6], as shown in Table II. Finally, rubber was used to cover the steel bolts at different ratios of the bolt diameter (0.5, 1.0, and 1.5) over a length of 13 mm, the rubber being taken from old, used tires. The composite steel/rubber bolts were made in the industrial district using a turning machine. The cutting, manufacturing, and fixing processes for these composite bolts were carried out in the factory, as illustrated in Figure 2.
Figure 2: Process of covering the bolts with rubber
The properties of the rubber were established at the Applied Sciences Laboratories, University of Technology. The tensile test followed the Standard Test Method for Tensile Properties of Plastics (D638-14) using Jianqiao testing equipment; the tensile strength was found to be 15.09 MPa and the elongation 1204.1%. The compression test of the rubber was performed according to the American Society for Testing and Materials standard (D695-15) [8]; the change in sample length at each load point was recorded up to failure, and the compression test results are given in Table III. Finally, the hardness test was performed with a Shore A hardness tester (TH200) according to the Standard Test Method for Rubber Property-Durometer Hardness [9]; the average of five readings (84.7, 89.9, 82.7, 84.7, 86.3) gives a hardness of 85.66 Shore A for the rubber used in this study.
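The reported Shore A value is simply the arithmetic mean of the five durometer readings, as the quick check below confirms:

```python
# Average the five Shore A durometer readings quoted in the text.
readings = [84.7, 89.9, 82.7, 84.7, 86.3]
shore_a = sum(readings) / len(readings)
print(f"Shore A hardness = {shore_a:.2f}")  # 85.66
```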
II. Test Setup
The behavior of the four steel portal frame specimens was investigated under seismic effect. Load-displacement hysteresis can be used to simulate the seismic effect, based on Structural Analysis and Design of Tall Buildings and ANSI/AISC 341-10 [10], by applying cyclic loads at the top ends of the specimens; Figure 3 illustrates the loading machine and the test approach for the frames.
Figure 3: Test setup of specimens
Horizontal cyclic (quasi-static) loads were applied to the specimens according to the Guidelines for Cyclic Seismic Testing of Components of Steel Structures [11], the protocol recommended in the U.S. for testing steel buildings. ATC-24 was developed specifically for steel structure components and uses the elastic deformation concept to control the specimen tests. The protocol specifies six cycles before the yield point, followed by three cycles at yield and then three cycles at double the yield load, as shown in Figure 4. This sequence of increasing load amplitude and number of cycles continues until failure occurs. The same test program and loading protocol were applied to all specimen tests.
Figure 4: Deformation history for multiple-step test
The variables of the specimens in this study are listed in Table IV. All specimens were tested under horizontal reversed cyclic loads.
III. Instrumentation
Specimen loads were measured using two load cells fixed at the ends of the hydraulic jacks, as shown in Figure 3. Specimen displacements were obtained from two Linear Variable Differential Transducers (LVDTs) fixed at the top of the frame to measure displacement during the cyclic load test. Strains in the steel sections were measured by six steel strain gauges glued at the stress positions indicated in Figure 3. All external instrumentation was cabled to a data logger that recorded the data over time.
I. P-Δ Values and Number of Cycles:
The tests were carried out on the four specimens, and the yield and ultimate loads as well as the yield and ultimate displacements were determined. The experimental tests showed that specimens with rubber in their joints resisted higher loads than the specimen without rubber. For the rubberized specimens with ratios of 50%, 100%, and 150% of the steel bolt diameter, the ultimate loads increased by 33%, 33%, and 167% relative to the reference frame FR1. The rubberized steel connections also enhanced the displacement capacity of the frames, with displacements increased by 47%, 131%, and 163%. The yield and ultimate loads and displacements are summarized in Table V. From Table V, note that the maximum number of cycles for the reference frame FR1 was only ten, while the frames with rubber ratios of 50% and 100% resisted 16 cycles and the specimen with a 150% rubber ratio tolerated 40 cycles. The rubber around the steel bolts prevented contact between the bolt shanks and the holes of the steel plates, reducing the bearing pressure between bolts and holes until the rubber reached the fatigue stage and bolt failure began in the steel joints. Increasing the diameter of the composite steel bolts increases the contact area, which reduces the stress for the same load; therefore the yield and ultimate loads increase.
II. Ductility
All the ductility factors of the frames are reported in Table VI; the ductility index was calculated based on the elastic displacement of the reference frame FR1. The results show that using rubber at ratios of 50%, 100%, and 150% of the steel bolt diameter increased the ductility of the frames to 2.125, 3.375, and 3.875, respectively, while the ductility of the reference frame without rubber in its joints was 1.375. This means the rubber increased the ductility of the specimens by 55%, 145%, and 182% relative to the reference frame. Using rubber in the steel joints dissipates the stresses and makes the steel frame redistribute them, so the frame components did not reach the yield and ultimate loads at the same load levels as the reference frame. In other words, this smart connection delayed the yield and ultimate points of the global frame and gave it the capability to withstand a larger displacement; since the ductility index was calculated as D = Δ_u/Δ_y, this leads to an increase in frame ductility.
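The percentage gains quoted above follow directly from the tabulated ductility indices; the short check below reproduces them (the variable names are illustrative):

```python
# Ductility indices from Table VI: reference frame FR1 and rubberized frames FR2-FR4.
d_ref = 1.375
d_rubberized = [2.125, 3.375, 3.875]

# Percent increase of each rubberized frame's ductility over the reference.
gains = [round((d / d_ref - 1.0) * 100) for d in d_rubberized]
print(gains)  # [55, 145, 182]
```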
III. Damping Ratio
The damping ratio was calculated using the half-power bandwidth method [12], in which the steady-state frequency response of the amplitude displacement is used to determine the damping of a system experimentally at its natural frequencies. As noted in [12], the results obtained with this method are very good for single-degree-of-freedom systems with small damping. The damping ratio (ζ) is estimated from:

ζ = (f₂ − f₁) / (2 f_n)     (1)

where f_n is the experimental natural frequency at the peak amplitude displacement u_max, f₁ is the frequency at which the amplitude equals u_max/√2 before f_n, and f₂ is the frequency at which the amplitude equals u_max/√2 after f_n. According to equation (1), the damping ratios calculated for frames FR1, FR2, FR3, and FR4 were 0.0203, 0.0208, 0.0223, and 0.0260. The results show that damping increased with rubber thickness: the damping of specimens FR2, FR3, and FR4 improved by 2.46%, 15%, and 28%. The enhancement of the damping ratio can be explained by the bandwidth concept: the isolation system of the rubberized frames has a wider half-power bandwidth than the frame without rubber, and by the damping equation the damping ratio is directly proportional to the bandwidth, so a larger bandwidth yields a larger damping ratio.
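A minimal numerical sketch of the half-power estimate is given below; the frequencies and amplitudes are synthetic illustrative values for a single-degree-of-freedom resonance, not the measured response of the tested frames:

```python
import math

def half_power_damping(freqs, amps):
    """Estimate the damping ratio from a sampled frequency-response curve:
    zeta = (f2 - f1) / (2 * fn), with f1, f2 the half-power (peak/sqrt(2)) points."""
    i_peak = max(range(len(amps)), key=amps.__getitem__)
    fn, half = freqs[i_peak], amps[i_peak] / math.sqrt(2.0)

    # walk outwards from the peak to the first samples that drop below half power
    lo = next(i for i in range(i_peak, -1, -1) if amps[i] <= half)
    hi = next(i for i in range(i_peak, len(amps)) if amps[i] <= half)
    return (freqs[hi] - freqs[lo]) / (2.0 * fn)

# Illustrative SDOF resonance curve with fn = 10 Hz and true zeta = 0.02
zeta_true, fn = 0.02, 10.0
freqs = [fn * (0.9 + 0.001 * k) for k in range(201)]
amps = [1.0 / math.sqrt((1 - (f / fn) ** 2) ** 2 + (2 * zeta_true * f / fn) ** 2)
        for f in freqs]
est = half_power_damping(freqs, amps)
print(f"estimated zeta = {est:.3f}")
```

The estimate recovers the input damping to within the frequency-sampling resolution, which is the same accuracy consideration that applies to the measured response curves.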
I. Finite element model
Based on the experimental results for the plane steel frames, 3-D nonlinear finite element models were analyzed using the general-purpose finite element program ABAQUS (2017) to establish the mechanical behavior of the experimental specimens. The influences of both geometric and material properties were taken into account. Selecting the element shape was the first important geometric decision. Different element types and orders were tried in the FE models until the results matched the experimental tests. First-order reduced-integration C3D8R elements were selected for all the steel frame models.
II. Material properties
Isotropic material properties (Young's modulus, Poisson's ratio, and density) were used for the elastic zone, and kinematic hardening was used to simulate the plastic zone of the models. The finite element program requires input data such as density, elastic modulus, and Poisson's ratio, as listed in Table VII for the steel components of the frame. In ABAQUS, as in many simulation programs, the stress-strain data must be converted from engineering (nominal) values into true values, because the program works with true rather than engineering quantities. This is done with the following standard relations (ABAQUS, 2017): σ_true = σ_nom(1 + ε_nom) and ε_true = ln(1 + ε_nom), with the plastic strain input as ε_pl = ε_true − σ_true/E.
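These conversions are easy to script when preparing ABAQUS material data; the sketch below applies the two relations to an illustrative engineering data point (the numbers are examples, not the tested steel):

```python
import math

def eng_to_true(stress_eng, strain_eng):
    """Convert engineering stress/strain to the true values ABAQUS expects."""
    stress_true = stress_eng * (1.0 + strain_eng)  # sigma_true = sigma_nom * (1 + eps_nom)
    strain_true = math.log(1.0 + strain_eng)       # eps_true = ln(1 + eps_nom)
    return stress_true, strain_true

# Example: 400 MPa engineering stress at 10% engineering strain.
s_true, e_true = eng_to_true(400.0, 0.10)
print(f"true stress = {s_true:.1f} MPa, true strain = {e_true:.4f}")
# true stress = 440.0 MPa, true strain = 0.0953
```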
III. Finite Element Mesh and Boundary Conditions
The first step of the meshing procedure was dividing each imported part. For complex geometries, ABAQUS requires the user to partition the regions into simple geometric forms such as wedges or cubes. First-order hexahedral (cubic) elements were used for structured meshing of the regularly shaped regions. Regarding boundary conditions, five points on the web and flange at the bottom of each column represent the 16 anchor bolts that fixed the specimen to the laboratory foundation; these constrain the fixed column in the X, Y, and Z directions and constrain the pinned column in the Z and Y directions only, preventing the model from moving during the test while allowing free rotation about the x-axis. Figure 5 shows the constraint points for the FE models used in this research.
IV. Numerical results
Load-deflection (P-Δ) curves obtained from the ABAQUS program show that the FE analysis provides solutions in acceptable agreement with the experimental results of this study. The numerical and experimental load-deflection behaviors were convergent, although the theoretical curves were stiffer at initial loading; this difference diminished with increasing load until failure occurred, with the final numerical failure load lower than in the laboratory tests. The ABAQUS solutions were credible for the horizontal loading system and for the models with or without rubberized steel frame connections. The numerical and experimental load-deflection curves are plotted together for the horizontal cyclic loads, along with the numerical solutions for the displacements in the Z-direction. All the ultimate loads and amplitude displacements from the experimental and numerical analyses are listed in Table IX. The difference between experiment and numerical analysis ranged between 4% and 16% for the ultimate resisting loads and between 2% and 14% for the amplitude displacements, which is considered acceptable agreement. The results in the table show that using rubber in the steel connection models increased the resisted loads of specimens FR2, FR3, and FR4 under horizontal loading by 25.5%, 38.23%, and 188.14% compared with the reference system FR1. Figure 6 illustrates the numerical-experimental comparisons under horizontal loading for models FR1, FR2, FR3, and FR4. In general, the failure mode of the experimental specimens was fracture of the steel bolts at the peak loads, which ended the test; the numerical solutions gave an acceptable failure mode in good agreement with the laboratory tests.
Figure 7 illustrates the final deformed shape of the shear tab connection obtained from the FE analysis and the experimental test. The colors in Figure 6-A denote the von Mises stress distribution at the end of the simulation.
CONCLUSIONS
Numerical and experimental analyses were performed to study the behavior of plane steel frames with rubberized connections under seismic effect, evaluating the performance of the frames in terms of strength, ductility, and damping ratio. 3-D nonlinear finite element models were analyzed using the general-purpose finite element program ABAQUS (2017) to reproduce the mechanical behavior of the experimental specimens, and the P-Δ curves and failure modes were used to compare the numerical and experimental tests. The results show that the rubber enhanced the behavior of the specimens: the experimental tests showed strength increases of 33.33% to 167%, while the FE method gave strength increases of 26% to 188%. Using the rubber technique in the steel connections increased the damping by 2.46% to 28% relative to the reference frame without rubber. An important observation is the significant enhancement of the ductility index of the frames, by 55%, 145%, and 182%, which is very important for buildings exposed to earthquakes: designing a structure to resist earthquakes does not so much require strengthening it as increasing its ductility. Manufacturing the rubber from old tires is an economical way of recycling consumed tires. Finally, replacing conventional steel bolts with composite steel/rubber bolts is easy to implement in existing steel structures, without resorting to demolition or design changes for facilities that were not originally designed to resist earthquake effects.
Gastric metastasis from invasive lobular breast cancer, mimicking primary gastric cancer
Abstract Rationale: Gastric metastasis from invasive lobular breast cancer is relatively rare and commonly presents among multiple metastases several years after the primary diagnosis of breast cancer. Importantly, gastric cancer presenting synchronously with lobular breast cancer can be misdiagnosed as primary gastric cancer; therefore, accurate differential diagnosis is required. Patient concerns: A 39-year-old woman visited our hospital because of a right breast mass and progressive dyspepsia. Diagnoses: Invasive lobular carcinoma of the breast was diagnosed on core needle biopsy. Gastroscopy revealed a diffuse scirrhous mass at the prepyloric antrum, diagnosed as poorly differentiated adenocarcinoma on biopsy. Synchronous double primary breast and gastric cancers were initially considered. Detailed pathological analysis focused on immunohistochemical studies of selected antibodies, including those against estrogen receptor, gross cystic disease fluid protein-15 (GCDFP-15), and caudal-type homeobox transcription factor 2 (CDX-2). As a result, the gastric lesion was diagnosed as metastatic gastric cancer originating from the breast. Interventions: Right breast-conserving surgery was performed, and a duodenal stent was inserted under endoscopic guidance to relieve the patient's symptoms. Systemic chemotherapy with combined administration of paclitaxel and trastuzumab was initiated. Outcomes: Forty-one months after the diagnosis, the patient is still undergoing the same therapy. No recurrent lesion has been identified in the breast, and partial remission of the gastric wall thickening has been observed on follow-up studies, without new metastatic lesions. Lessons: Clinical suspicion, repeat endoscopic biopsy, and detailed histological analysis, including immunohistochemistry, are necessary for the diagnosis of metastatic gastric cancer from the breast.
Introduction
Breast cancer is the most common type of cancer among women worldwide and ranks as the second most common cancer among women in Korea. [1] Owing to the advances in systemic therapies, including intensive local treatment, chemotherapy, endocrine therapy, and targeted therapy, overall survival rates of breast cancer have increased; however, these therapies have not produced significant changes in the prognosis of patients with breast cancer accompanied by distant metastasis.
The main sites of metastasis for breast cancer are the bone, lung, and liver, whereas metastasis to the gastrointestinal tract is relatively rare; reported incidence rates are 2% to 18% of breast cancer cases. Importantly, gastric metastasis from invasive lobular breast carcinoma is reported to occur at a higher rate than the metastasis from invasive ductal carcinoma. [2] Gastric cancer has high incidence rates in Korea, [1] and differentiation between primary gastric cancer and metastatic gastric cancer based solely on clinical findings may be difficult. Inappropriate diagnosis can lead to unnecessary surgical procedures.
In the present study, we report a case of gastric metastasis from invasive lobular breast cancer, which can be misdiagnosed as primary gastric cancer. We also report the effectiveness and applications of immunohistochemical analysis in the differentiation of gastric metastasis from breast cancer and primary gastric cancer.
Case presentation
The patient agreed to authorize us to share the figures and her treatment experience in our department. Informed consent was obtained.
A 39-year-old woman visited the clinic because of a palpable mass in the right breast that had developed 2 months earlier, and discomfort in the upper abdomen. The patient was taking antacid medication for chronic gastritis, which had been diagnosed at a private hospital based on upper gastrointestinal endoscopy. A solid mass in the upper outer region of the right breast and enlarged ipsilateral axillary lymph nodes were palpable on physical examination. Ultrasonography and breast magnetic resonance imaging (MRI) showed a single mass in the upper outer quadrant of the right breast, as well as multiple axillary lymphadenopathies (Fig. 1A). The patient was diagnosed with invasive lobular breast carcinoma after a core needle biopsy. Blood test results, including liver function measurements, were all normal. The hemoglobin, carcinoembryonic antigen, and carbohydrate antigen 15-3 levels were 12 g/dL, 4.36 ng/mL, and 34.59 U/mL, respectively. No abnormal finding was observed on abdominal ultrasonography other than bilateral hydronephrosis. Positron emission tomography (PET) showed lesions in the right breast and axilla and no evidence of distant metastases (Fig. 1B-D).
Two weeks after the surgery, gastroscopy was performed again because discomfort and pain in the upper abdomen were persistent; a diffuse scirrhous mass was identified at the prepyloric antrum with pyloric obstruction (Fig. 3A). Abdominal computed tomography (CT) scans showed hypertrophy of the gastric wall of the pyloric region accompanied by pyloric obstruction (Fig. 3B). No evidence of lymphadenopathy or metastasis was found, whereas bilateral hydronephrosis was observed. Based on these findings, the patient was diagnosed with T3N0M0 gastric cancer according to the 7th edition of the AJCC staging system. Poorly differentiated adenocarcinoma partially resembling the morphology of signet ring cell carcinoma was observed on gastroscopic biopsy. The aforementioned lesion was positive for estrogen receptor (2+), progesterone receptor (1+), and GCDFP-15, and negative for CDX-2, cytokeratin (CK) 5, CK14, MUC-2, MUC-5, and E-cadherin. Finally, the lesion was diagnosed as metastatic gastric cancer originating from invasive lobular carcinoma. A difference in the level of C-erb-B2 expression was found between the gastric lesion and the primary breast carcinoma; overexpression of C-erb-B2 was observed in the metastatic gastric cancer (Fig. 4). During a medical examination, the patient stated that she had no family history of breast or gastric cancer, had never been diagnosed with lobular carcinoma in situ, and had never undergone hormone replacement therapy in the past. After a multidisciplinary team meeting, it was determined that no additional surgery would be performed for the metastatic gastric cancer, and a duodenal stent was inserted under the guidance of endoscopy to relieve the patient's symptoms ( Fig. 5A and B). Systemic chemotherapy with combined administration of paclitaxel (80 mg/m 2 ) and trastuzumab (2 mg/kg) at 1-week intervals was initiated. 
Pyloric obstruction was moderately reduced and evidence of reflux was found on gastroscopy performed 6 months after the diagnosis (Fig. 5C). Moreover, 7 months after the diagnosis, small bowel obstruction occurred as a result of stent displacement. A laparotomy for stent removal, and resection and anastomosis of the small bowel were performed.
Currently, 41 months after the diagnosis, the patient is still undergoing the same therapy and has been experiencing tolerable grade 1 peripheral neuropathy and changes in the nails. Gastrointestinal symptoms, including discomfort in the upper abdomen, have mostly been relieved. No recurrent lesion has been identified in the breast and evidence of a partial remission of gastric wall thickening has been observed on follow-up abdominal CT scans (Fig. 5D). Lastly, no evidence of new metastasis has been observed.
Discussion and Conclusion
Gastric metastasis of malignant breast carcinoma is relatively rare. Clinical symptoms include loss of appetite, early satiety, upper abdominal pain, bleeding, and vomiting. [2] In the present case, the main symptoms were indigestion and upper abdominal pain resulting from pyloric obstruction caused by the mass. As observed here, metastatic gastric cancer often presents as linitis plastica, which spreads through the muscular layers of the stomach and the gastric mucosa, and rarely presents as external compression or separate nodules. According to Taal et al, invasive lobular carcinoma accounts for 83% of all the metastases from malignant breast tumors, and it often spreads throughout the stomach. [2] The stomach is the most common site of metastasis from invasive lobular carcinoma; gastric metastases are observed at rates of 6% to 18% on biopsy. Most gastrointestinal metastases present as one among multiple distant metastases several years after breast cancer surgery, and the mean time from the primary diagnosis of breast cancer to metastasis is reported to be 6 to 7 years. [3] Therefore, the case observed in this study, in which a metastasis occurred only in the stomach at the time of primary diagnosis of breast cancer, is extremely rare.
In the present case, gastric metastasis was not detected during the systemic examination performed at the time of breast cancer diagnosis. PET scans obtained before surgery showed tumors only in the breast and axillary lesions, with no evidence of gastric tumors. The sensitivity of PET for the diagnosis of gastric cancer is reported to be relatively low compared with that for other cancer types, which is attributed to the physiological uptake of F-18 fluorodeoxyglucose and involuntary movements of the gastric wall. Gastric cancer morphology is also associated with the sensitivity of PET: although sensitivity is high for papillary or ductal carcinoma and poorly differentiated solid adenocarcinoma, high false-negative rates are reported for signet ring cell carcinoma and poorly differentiated nonsolid adenocarcinoma. [4] In general, it is difficult to differentiate primary gastric cancer from metastatic gastric cancer based on gross endoscopic findings alone. Because gastric metastases are mostly localized to the submucosal and seromuscular layers, endoscopy may appear normal in 50% of cases. [5] In the present case, the patient was not appropriately diagnosed even after multiple rounds of endoscopy and biopsies, possibly because a biopsy of the deeper gastric layers, including the submucosa, was not performed.
Although diffuse gastric adenocarcinoma and metastatic gastric cancer from invasive lobular carcinoma may exhibit a single-cell invasive pattern or morphologically resemble signet ring cells, they have different methods of treatment and prognoses. [6] In such a case, a detailed immunohistochemical analysis can be of great help. Connel et al compared immunohistochemical results between gastric metastases from breast cancer and primary gastric cancer. [7] The results demonstrated increased expression levels of estrogen and progesterone receptors, GCDFP, and CK5/6 in gastric tumors metastasized from breast cancer compared with those in primary gastric cancer: 72% for estrogen receptor, 33% for progesterone receptor, 78% for GCDFP-15, and 61% for CK5/6 in metastatic cancer, whereas in primary gastric cancer zero expression was observed for estrogen receptor, progesterone receptor, and GCDFP, and 14% for CK5/6. Expression of estrogen receptor, progesterone receptor, and GCDFP-15 was thus specific to gastric metastasis from breast cancer.
Various markers whose primary origin is the breast have been used in clinical practice. Among these, estrogen receptor is the most influential and sensitive marker for differentiating metastatic breast cancer. However, it has low sensitivity (∼50%) for other metastatic cancers, as well as low specificity, because it can be expressed by cancers other than breast cancer, such as endometrial and ovarian cancer. Although there are rare reports of expression of estrogen or progesterone receptors in gastric adenocarcinoma, estrogen receptors are reported to be almost never expressed in gastrointestinal adenocarcinoma, especially in colorectal cancer. Gross cystic disease fluid is a pathological secretion released by the breast and is characterized by increased levels of GCDFP-15. This protein has been used as a breast-specific marker with a sensitivity of 11% to 73% and a specificity of 93% to 100%. GCDFP-15 is known to be expressed not only in the breast but also in malignant tumors originating from the salivary gland, external genitalia, eyelid, and apocrine glands of the bronchial tubes, and in certain instances of gynecologic adenocarcinoma (5%-10%); however, it is almost never expressed in gastrointestinal cancer. Other breast-specific markers include mammaglobin and GATA-binding protein, with sensitivities of 26% to 84% and 32% to 95%, respectively. [8] CK7 and CK20 are useful cytokeratin markers that can differentiate distant and metastatic gastric tumors. Although CK7+/CK20− is usually expressed in adenocarcinoma of the breast, lung, and ovary, CK7−/CK20+ is more commonly expressed in intestinal adenocarcinoma. In addition, homeobox protein CDX-2, which is necessary for intestinal formation and encodes transcription factors involved in the differentiation and proliferation of intestinal epithelial cells, is mostly expressed by malignant gastrointestinal tumors, specifically at a rate of 61% for gastric cancer and 96% for colorectal cancer. [9] Although the levels of expression of CK7 and CK20 were not measured in the present case, the positive results for GCDFP-15, estrogen receptor, and progesterone receptor, together with the negative CDX-2 result, sufficed to differentiate metastatic gastric cancer from primary gastric cancer.
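The marker reasoning described above can be sketched as a toy decision rule. This is illustrative only, not a validated diagnostic algorithm; the marker names and the case's panel results are taken from the text.

```python
# Hedged sketch of the IHC panel logic discussed above: a toy rule, not a
# clinical algorithm. Marker names follow the case report.

def favors_breast_origin(ihc):
    """True when the panel pattern matches metastatic breast carcinoma:
    at least one breast-associated marker positive and CDX-2 negative."""
    breast_markers = ("ER", "PR", "GCDFP-15")
    breast_positive = any(ihc.get(m, False) for m in breast_markers)
    return breast_positive and not ihc.get("CDX-2", False)

# The present case: ER(2+), PR(1+), GCDFP-15 positive; CDX-2 negative
case = {"ER": True, "PR": True, "GCDFP-15": True, "CDX-2": False}
print(favors_breast_origin(case))  # True
```
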
Curtit et al compared immunohistochemical results of primary and metastasis sites of breast cancer before and after treatment, and found differences in the levels of expression of the estrogen and progesterone receptors at 17% and 29% of the sites, respectively. A significant change in the level of expression of the estrogen receptor was found at metastasis sites after chemotherapy, and specifically anthracycline-based chemotherapy. [10] In the present case, although estrogen and progesterone receptors were expressed at both the primary and metastasis sites of breast cancer, C-erb-B2 was only expressed at the metastasis sites. The level of C-erb-B2 expression may differ between primary and metastasis sites of breast cancer in 5% to 10% of the cases, rates that are lower than those of estrogen or progesterone receptors. [11] The expression of C-erb-B2 in the metastatic gastric tumor observed in the present case demonstrated the potential for C-erb-B2-targeted therapy as an addition to chemotherapy and endocrine therapy. The present study is also meaningful in that a patient was maintained in partial remission through prolonged targeted therapy that was planned based on the immunohistochemical results, and in doing so, demonstrated the usefulness of long-term targeted therapy.
Chemotherapy, endocrine therapy, or combined therapy, which are treatment options for gastric metastasis from invasive lobular carcinoma, have remission rates of 32% to 53% and can prolong survival time by 2 to 3 years. [12] Treatment decisions can be made based on the clinical symptoms of an individual patient, and the role of multidisciplinary teams, including those in charge of the gastrointestinal tract, is crucial in the decision-making process. Although resection of liver or pulmonary metastases is reported to increase survival rates in selected patients, reports of resection resulting in a significant increase in survival rates for gastric metastases are rare. In limited cases, prolonged survival has been reported for patients in complete remission from primary breast cancer with solitary gastric metastases who underwent gastric resection, relative to those who did not: a survival time of 38 months versus 14.38 months, respectively. [12] However, because gastric metastases are accompanied by metastases of the gastric wall and other regions of the gastrointestinal tract in most cases, surgical treatment of gastric tumors is not recommended as a primary treatment option. Surgical treatment, such as bypass surgery, may be used in cases of complete intestinal obstruction or in emergencies, such as perforation of the gastric wall. McLemore et al treated some patients with gastric metastasis using conventional surgical procedures for symptom relief and reported no increase in survival rates after surgery. Specifically, they reported that a patient's history of chemotherapy or hormone therapy is an important prognostic factor, whereas old age and the presence of gastric metastasis are poor prognostic factors. [13] In conclusion, we have reported a case of invasive lobular carcinoma with gastric metastasis, which can be misdiagnosed as primary gastric cancer.
Accurate diagnosis and patient-tailored treatment can be achieved through clinical suspicion, repeated endoscopy, and accurate histological examination including disease-specific immunohistochemical analysis.
Prediction of Key Candidate Genes for Platinum Resistance in Ovarian Cancer
Purpose Ovarian cancer is one of the common malignant tumors of the female reproductive organs and seriously threatens the life and health of women. Resistance to chemotherapeutic drugs is the root cause of recurrence in most patients with ovarian cancer. The purpose of this study was to determine the differentially expressed genes associated with platinum resistance in ovarian cancer and to screen out molecular targets and diagnostic markers that could be used to address platinum resistance. Methods We downloaded 5 gene microarray datasets, GSE58470, GSE45553, GSE41499, GSE33482, and GSE15372, from the Gene Expression Omnibus database, all of which are associated with platinum resistance in ovarian cancer. The intersection of the statistically significant differentially expressed genes in the 5 gene chips was then taken, and relevant bioinformatics and clinicopathological analyses were performed on the screened differential genes. qRT-PCR was used to examine mRNA expression levels in platinum-sensitive and cisplatin-resistant ovarian cancer cells. Results Three differential genes, IFI27, JAG1, and DNM3, which may be closely related to platinum resistance in ovarian cancer, were screened from the microarray datasets. Combined verification by bioinformatics, clinical case analyses, and experiments suggested that increased expression of DNM3 is beneficial to patients, whereas high expression of IFI27 and JAG1 may increase the risk of platinum resistance. Conclusion IFI27, JAG1, and DNM3, screened from the relevant gene chips, may serve as new biomarkers of platinum resistance in ovarian cancer.
Introduction
Ovarian cancer is one of the common malignancies of the female reproductive organs and seriously threatens the life and health of women. Because the ovaries lie deep in the pelvic cavity and are small, and because typical symptoms and effective screening methods are lacking, ovarian cancer is difficult to detect early. About 75% of patients with ovarian cancer are diagnosed at an advanced stage, with peritoneal spread or distant metastasis. 1,2 Cytoreductive surgery combined with postoperative adjuvant chemotherapy is the common clinical treatment for patients with advanced ovarian cancer. Chemotherapy drugs for ovarian cancer mainly include the cycle-nonspecific platinum agents and the cycle-specific paclitaxel. Platinum hinders DNA synthesis and mitosis in cancer cells by cross-linking with DNA, while paclitaxel strengthens tubulin polymerization and inhibits tubulin depolymerization. 3 However, even when the initial chemotherapy is effective, most patients are prone to multidrug resistance; that is, once resistance to one chemotherapeutic drug develops, the tumor becomes resistant to other chemotherapy drugs with different structures and mechanisms. As the most commonly used chemotherapeutic agent in clinical practice, platinum frequently encounters resistance in the treatment of ovarian cancer. At present, the widely used detection indicators for ovarian cancer lack specificity and sensitivity. Therefore, it is urgent to discover marker genes related to platinum resistance in ovarian cancer.
Source of Information
GEO Microarray System Retrieval Strategy
The search keywords were "ovarian carcinoma" OR "ovarian cancer" OR "chemotherapy resistance" OR "platinum resistance", with the species restricted to "Homo sapiens", to retrieve gene expression profiles related to platinum resistance in ovarian cancer publicly reported in GEO before 2021. In total, 21 datasets were retrieved.
Inclusion Criteria
(1) The dataset must be a comparative study of platinum sensitivity and resistance in ovarian cancer; (2) the downloaded data are the original or normalized datasets; (3) each control (sensitive) and case (resistant) group in the dataset must include at least 3 samples; (4) clear information on the sensitivity or resistance of each sample to platinum must be given. Datasets meeting the above criteria were included in this study. In the end, 5 datasets met our inclusion criteria, namely GSE58470, GSE45553, GSE41499, GSE33482, and GSE15372. Among them, GSE58470 contains sensitive IGROV-1 cells and oxaliplatin-resistant IGROV-1/OHP cells; GSE45553 is obtained from cisplatin-sensitive and resistant human ovarian cancer spheroids; GSE41499 covers platinum-sensitive PEO1 and platinum-resistant PEO4 cells; the genetic data of GSE33482 are derived from cisplatin-sensitive A2780 and resistant A2780cis cells; and GSE15372 is based on A2780 and cisplatin-resistant Round5 A2780 cells.
GEO2R for Expression Analysis
GEO2R, the official analysis tool of the GEO database, was used to analyze the differential expression levels of the target genes in the ovarian cancer platinum-resistance gene chips and to observe their expression in ovarian cancer cells. GraphPad Prism 8 was used for data visualization in this study.
To Verify the Expression of Target Genes in Ovarian Cancer Cells
CCK-8 Detection of Cell Resistance Index (RI)
The ovarian cancer cisplatin-sensitive cells A2780 and SKOV3 and the cisplatin-resistant cells A2780-DDP and SKOV3-DDP used in this work were purchased, sequenced, and identified through regular channels from Shanghai Yiyan Biotechnology Co., Ltd (China). Cells were seeded in 96-well culture plates at a density of 5×10^3 cells/well. Six cisplatin concentration gradients were established in each group (0, 0.39 μg/mL, 0.78 μg/mL, 1.56 μg/mL, 3.125 μg/mL, and 6.25 μg/mL), with 5 replicate wells for each concentration. After culturing for 48 hours, 100 μL of complete medium containing 8% CCK-8 was added to each well, and the optical density (OD) at 450 nm was measured after 1.5 hours. The inhibition rate was calculated according to Equation 1. The experimental wells contained cisplatin, the control wells did not, and the blank wells had no cells. The IC 50 value, the drug concentration required to suppress 50% of the cells, was obtained with statistical software, and the RI was computed according to Equation 2.
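Since this excerpt does not reproduce Equations 1 and 2, the sketch below assumes the standard CCK-8 definitions (inhibition rate from optical densities; RI as the ratio of IC50 values). These assumed forms are conventional for CCK-8 assays, not necessarily the paper's exact formulas.

```python
# Hedged sketch of the two calculations referenced as Equations 1 and 2.
# The standard CCK-8 definitions below are assumptions, not quoted formulas.

def inhibition_rate(od_experimental, od_control, od_blank):
    """Assumed form of Equation 1: inhibition (%) from optical densities."""
    return (od_control - od_experimental) / (od_control - od_blank) * 100.0

def resistance_index(ic50_resistant, ic50_sensitive):
    """Assumed form of Equation 2: RI = IC50(resistant) / IC50(sensitive)."""
    return ic50_resistant / ic50_sensitive

# Illustrative optical densities and IC50 values
print(inhibition_rate(od_experimental=0.6, od_control=1.1, od_blank=0.1))  # about 50 %
print(resistance_index(ic50_resistant=5.06, ic50_sensitive=2.0))           # 2.53
```
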
TCGA Public Database for Clinical Case Analysis
The gene expression and clinical follow-up information of ovarian cancer clinical samples were downloaded via GDC apps, the official application tool provided by the TCGA database; the information was organized into a matrix with R software, and clinicopathological parameters were analyzed in combination with public databases. A total of 362 clinical samples were collected; after excluding 108 patients without chemotherapy outcomes, 254 patients who received standard chemotherapy with platinum-based agents, including carboplatin, cisplatin, and oxaliplatin, were included. According to the 2020 National Comprehensive Cancer Network (NCCN) guidelines, chemoresistance was defined as confirmed relapse within 6 months after completion of more than 3 courses of regular chemotherapy following initial cytoreductive surgery. Chemosensitivity was defined as clinical remission after more than 3 courses of regular chemotherapy following initial cytoreductive surgery, with recurrence more than 6 months after stopping chemotherapy. Accordingly, there were 205 cases in the platinum-sensitive group and 49 cases in the resistant group. Based on the mean expression level of each target gene, the ovarian cancer specimens in TCGA were divided into low-expression and high-expression groups to analyze the relationship between IFI27, JAG1, and DNM3 and platinum resistance. SPSS was used for data analysis, and the χ2 test was used for clinicopathological parameters.
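The χ2 test applied to these clinicopathological tables can be sketched for a 2×2 contingency table as follows. The study itself used SPSS; the counts below are illustrative, not the paper's data.

```python
# Hedged sketch: Pearson chi-squared statistic for a 2x2 contingency table,
# mirroring the test named above. Counts are illustrative.

def chi2_statistic_2x2(table):
    """Chi-squared statistic for a 2x2 table [[a, b], [c, d]]."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row_totals = [a + b, c + d]
    col_totals = [a + c, b + d]
    stat = 0.0
    for i, obs_row in enumerate(table):
        for j, obs in enumerate(obs_row):
            expected = row_totals[i] * col_totals[j] / n
            stat += (obs - expected) ** 2 / expected
    return stat

# Illustrative 2x2 table: gene expression (high/low) vs platinum response
stat = chi2_statistic_2x2([[30, 19], [75, 130]])
print(stat, stat > 3.841)  # 3.841 is the df=1 critical value at alpha = 0.05
```
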
To Research the Relationship Between Gene Expression and the Clinical Prognosis of Ovarian Cancer
Survival Analysis of Sensitive and Resistant Tissues
Clinical follow-up information was downloaded from TCGA and arranged into a matrix. Platinum-sensitive and resistant cases were grouped as described above. Overall survival (OS) was defined as the time from diagnosis to death due to ovarian cancer, and progression-free survival (PFS) as the time from the patient's initial treatment to tumor progression. Survival analysis was performed using Kaplan-Meier survival curves, with the log-rank test for comparison.
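A minimal sketch of the Kaplan-Meier estimator underlying these survival curves follows. The times and event flags are illustrative stand-ins, not the TCGA follow-up data.

```python
# Hedged sketch: product-limit (Kaplan-Meier) survival estimate with
# right-censoring. Illustrative data only.

def kaplan_meier(times, events):
    """Return (time, survival probability) steps at each event time.
    events[i] is True for a death, False for a censored observation."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    surv = 1.0
    curve = []
    i = 0
    while i < len(order):
        t = times[order[i]]
        deaths = 0
        n_at_t = 0
        while i < len(order) and times[order[i]] == t:
            deaths += events[order[i]]  # True counts as 1
            n_at_t += 1
            i += 1
        if deaths:
            surv *= 1.0 - deaths / at_risk
            curve.append((t, surv))
        at_risk -= n_at_t
    return curve

# Months to death (True) or last follow-up (False)
print(kaplan_meier([5, 8, 8, 12, 20], [True, True, False, False, True]))
```
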
Binary Logistic Assessment of Risk Factors Affecting Platinum-Resistant Patients
Since the dependent variable "degree of resistance" was a binary variable, binary logistic regression was used to identify the risk factors influencing platinum resistance in patients with ovarian cancer. The other covariates were grouped according to the descriptions of the clinical parameters above; all were categorical covariates.
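A minimal binary logistic regression fitted by gradient ascent can illustrate the analysis described above. The study itself used a statistical package; the data here are illustrative, with x as a gene expression level and y = 1 for platinum-resistant, 0 for sensitive.

```python
# Hedged sketch: toy univariate logistic regression fitted by gradient
# ascent on the Bernoulli log-likelihood. Illustrative data only.

import math

def fit_logistic(xs, ys, lr=0.1, steps=5000):
    """Return (intercept, slope) for P(y=1|x) = sigmoid(b0 + b1*x)."""
    b0 = b1 = 0.0
    n = len(xs)
    for _ in range(steps):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += y - p          # gradient w.r.t. intercept
            g1 += (y - p) * x    # gradient w.r.t. slope
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return b0, b1

xs = [0.2, 0.5, 0.9, 1.4, 1.8, 2.3, 2.9, 3.5]   # expression levels
ys = [0, 0, 0, 0, 1, 1, 1, 1]                    # 1 = resistant
b0, b1 = fit_logistic(xs, ys)
print(b1 > 0)  # True: higher expression associates with resistance here
```
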
To Evaluate the Diagnostic Value of Genes as Indicators of Ovarian Cancer Sensitivity TCGA clinical cases were divided into platinum-sensitive and resistant group, and the accuracy of IFI27, JAG1 and DNM3 as diagnostic indicators was evaluated by ROC curve and corresponding data.
Screening of Differential Genes for Platinum Resistance in Ovarian Cancer
The genes with significant differences in the 5 gene chips were filtered according to the criteria P value < 0.01 and log FC < −1 or log FC > 1. GSE58470 had 678 significantly differential genes, of which 445 were down-regulated and 233 up-regulated. GSE45553 had 3191 evidently differential genes, with 1561 down-regulated and 1630 up-regulated. GSE41499 had 1426 differentially expressed genes, of which 706 were down-regulated and 720 up-regulated. GSE33482 had 3334 notably differential genes, of which 1404 were down-regulated and 1930 up-regulated. GSE15372 had 2290 differentially expressed genes, with 1657 down-regulated and 633 up-regulated. The significantly different genes of the 5 gene chips were entered into a Venn diagram and the intersection was taken
to screen out 3 common differential genes: IFI27, JAG1 and DNM3. The Venn diagram is shown in Figure 1.
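The screening step described above amounts to filtering each series by the cutoffs and intersecting the surviving gene symbols. A minimal sketch, with miniature illustrative gene lists standing in for the five GSE series:

```python
# Hedged sketch: per-series significance filter (P < 0.01 and |log FC| > 1)
# followed by set intersection, as in the Venn analysis described above.
# The tiny gene dictionaries are illustrative stand-ins for the GSE series.

def significant(genes):
    """genes: {symbol: (p_value, log_fc)} -> symbols passing both cutoffs."""
    return {g for g, (p, lfc) in genes.items() if p < 0.01 and abs(lfc) > 1}

series = [
    {"IFI27": (0.001, 2.1), "JAG1": (0.002, 1.6), "DNM3": (0.003, -1.8), "TP53": (0.2, 0.4)},
    {"IFI27": (0.004, 1.4), "JAG1": (0.001, 2.0), "DNM3": (0.002, -1.2), "BRCA1": (0.005, 0.3)},
    {"IFI27": (0.002, 1.9), "JAG1": (0.006, 1.1), "DNM3": (0.001, -2.4)},
]

common = set.intersection(*(significant(s) for s in series))
print(sorted(common))  # ['DNM3', 'IFI27', 'JAG1']
```
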
Analysis Results of Differential Genes by GEO2R
The GEO2R analysis tool was used to determine the expression levels of IFI27, JAG1, and DNM3 in the 5 gene microarray datasets. Through calculation and comparison, the expression levels of IFI27 and JAG1 were found to be higher in resistant ovarian cancer cells than in sensitive cells, whereas DNM3 expression was greater in the platinum-sensitive group than in the resistant group. Figure 2 shows the statistical atlas after data visualization analysis.
To Verify Gene Expression in Ovarian Cancer Cells
CCK-8 Detects Cell Resistance Index (RI)
The results showed that the resistance indices of A2780-DDP and SKOV3-DDP cells were 2.53 and 3.60, respectively, and the differences were statistically significant, as shown in Figure 3.
To Test the Gene mRNA Expression by Q-PCR
Since the results obtained by GEO2R were not completely consistent, PCR was used to further examine the levels of the 3 genes in different ovarian cancer cells. According to quantitative calculations, IFI27 and JAG1 were highly expressed in the cisplatin-resistant A2780-DDP and SKOV3-DDP cells, while DNM3 expression was significantly reduced in the resistant cells. See Figure 4 for details.
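The excerpt does not state the quantification formula used; the sketch below assumes the standard 2^(−ΔΔCt) method for relative mRNA levels. The Ct values and the housekeeping reference are illustrative assumptions.

```python
# Hedged sketch: relative quantification by the standard 2^(-ΔΔCt) method,
# assumed here since the excerpt does not give the paper's formula.

def fold_change(ct_gene_res, ct_ref_res, ct_gene_sen, ct_ref_sen):
    """Resistant-vs-sensitive fold change, normalized to a reference gene."""
    delta_delta_ct = (ct_gene_res - ct_ref_res) - (ct_gene_sen - ct_ref_sen)
    return 2.0 ** (-delta_delta_ct)

# e.g. a target gene in A2780-DDP vs A2780, normalized to a housekeeping
# reference such as GAPDH (illustrative Ct values)
print(fold_change(ct_gene_res=24.0, ct_ref_res=18.0, ct_gene_sen=26.0, ct_ref_sen=18.0))  # 4.0
```
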
TCGA Dataset Analysis Results
The Relationship Between Gene Expression and Clinicopathological Parameters in Patients with Ovarian Cancer
Two hundred and fifty-four clinical samples were collected after removing unavailable data, including 205 cases in the platinum-sensitive group and 49 in the resistant group. The expression of the differential genes in platinum-sensitive and resistant patients with ovarian cancer was analyzed. The expression levels of IFI27 and JAG1 were higher in platinum-resistant patients, although the difference for IFI27 did not reach statistical significance after correction, while DNM3 expression in sensitive patients was significantly greater than that in resistant cases (P < 0.01). The graph made with GraphPad Prism is shown in Figure 5. The expression of IFI27 was correlated with resistance (χ2 = 8.352, P < 0.005), tumor residue (χ2 = 11.401, P < 0.005), and survival status (χ2 = 4.706, P < 0.05), but not with the other 4 clinicopathological parameters. JAG1 expression was associated with platinum resistance (χ2 = 5.617, P < 0.05) and recurrence (χ2 = 6.314, P < 0.05), but not with the other 5 parameters. DNM3 was associated with resistance (χ2 = 4.278, P < 0.05), recurrence (χ2 = 5.632, P < 0.05), and FIGO stage (χ2 = 8.175, P < 0.005), but showed no correlation with age, histological grade, tumor residue, or survival status. The analysis results for the clinicopathological parameters of each gene are shown in Table 2.
The Relationship Between Gene Expression and Survival Analysis in Platinum-Resistant Clinical Patients with Ovarian Cancer
JAG1 expression showed no association with overall survival (OS) or progression-free survival (PFS) in platinum-resistant patients (P > 0.05), but IFI27 expression was significantly associated with OS in drug-resistant patients, and DNM3 expression was significantly associated with both OS and PFS. The survival time of patients with weak IFI27 expression was significantly longer, and survival was likewise prolonged with increased DNM3 expression. The survival curves of each gene are shown in Figure 6.
To Judge the Diagnostic Value of Genes as Sensitive Indicators of Ovarian Cancer
The ROC curve analysis data were as follows. The sensitivity, specificity, AUC, and P value of IFI27 expression for predicting chemosensitivity were 0.371, 0.449, 0.596, and 0.036 (P < 0.05), respectively. The corresponding values for JAG1 were 0.714, 0.459, 0.601, and 0.028 (P < 0.05), and those for DNM3 were 0.298, 0.878, 0.581, and 0.078 (P > 0.05). The ROC curves are shown in Figure 7, and Table 4 shows the positive and negative likelihood ratios. These results indicate that IFI27 and JAG1 had acceptable accuracy in identifying drug resistance, whereas DNM3 may need to be combined with other markers to diagnose ovarian cancer chemoresistance.
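The reported measures relate as LR+ = sensitivity / (1 − specificity) and LR− = (1 − sensitivity) / specificity, and the AUC can be obtained as a rank statistic. A minimal sketch (the scores fed to the AUC function are illustrative):

```python
# Hedged sketch: likelihood ratios from sensitivity/specificity, and a
# rank-based (Mann-Whitney) AUC estimate. Scores are illustrative.

def likelihood_ratios(sensitivity, specificity):
    """Return (LR+, LR-) for a binary test."""
    lr_pos = sensitivity / (1.0 - specificity)
    lr_neg = (1.0 - sensitivity) / specificity
    return lr_pos, lr_neg

def auc(pos_scores, neg_scores):
    """P(positive score > negative score); ties count 0.5."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

print(likelihood_ratios(0.714, 0.459))          # JAG1 values from the text
print(auc([0.9, 0.8, 0.6], [0.7, 0.4, 0.3]))
```
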
Discussion
Although research on platinum resistance in ovarian cancer has made great progress in recent years, because of the complexity and variability of its biological characteristics, platinum resistance is still the main reason for the failure of anti-tumor treatment in most ovarian cancer patients. This study screened and verified that IFI27, JAG1, and DNM3 may be differential genes related to platinum resistance in ovarian cancer.
Interferon α-inducible protein 27 (IFI27, also known as ISG12 or p27) is located on human chromosome 14q32 and is encoded by a small gene that is highly induced by interferon α, with basal expression in almost all cell types. It is reported to be related to IFN-induced apoptosis, cell proliferation, and immune responses. The complementary DNA of IFI27 was originally cloned as an estrogen-inducible gene in the human epithelial cell line MCF-7 and designated p27. 4,5 In situ hybridization of tumors overexpressing IFI27 shows that the RNA is located in cancer cells and sometimes also in fibroblastic cells of the tumor stroma. 6 IFI27 is up-regulated in breast cancer, 7,8 serous ovarian cancer, 9,10 hepatocellular carcinoma, 11,12 and other malignant tumors, 6,13,14 but its role in ovarian cancer remains to be elucidated. Researchers have observed through a series of basic experiments that overexpression of IFI27 induces epithelial-mesenchymal transition and promotes the migration, invasion, tumorigenicity, stemness, and drug resistance of epithelial ovarian cancer cells. 10 In addition, scientists who collected the gene expression profiles of ovarian cancer stem cells from 5 public cohorts identified IFI27 (up-regulated) as one of the co-expressed core genes using bioinformatics methods, 15 and pointed out that IFI27 is among the most widely up-regulated genes in cancer and is involved in apoptosis, metabolism, the cell cycle, and tumor growth and suppression. 16 Kim et al used RT-PCR based on an annealing-control primer system to identify differentially expressed genes in patients with stage III serous ovarian cancer and found that IFI27 was up-regulated in most stage III patients. 17 In summary, higher IFI27 expression is linked to inferior OS in ovarian cancer patients.
Human JAG1 is located at chromosome 20p12.2 and consists of 26 exons. JAG1 is an important Notch ligand that can trigger Notch signal transduction through intercellular interactions. 18 JAG1 stimulates aberrant activation of Notch signaling to participate in tumor growth by maintaining cancer stem cell populations, promoting cell survival, and inhibiting apoptosis. Many references have reported that JAG1 overexpression occurs in many different types of cancer and is associated with poor clinical prognosis. The JAG1/Notch signaling cascade activates numerous oncogenic factors that regulate important cellular functions, such as metastasis, proliferation, angiogenesis, and drug resistance. 19 JAG1 can crosstalk with the JAK/STAT3 pathway and jointly promote the aberrant occurrence of epithelial-mesenchymal transition (EMT), thereby further reinforcing the invasion and migration ability of ovarian cancer in vivo and in vitro. 20 In addition, knocking down JAG1 can inhibit Notch signal activation and significantly suppress the proliferation, migration, invasion, stemness, and resistance of ovarian cancer cells to chemotherapy drugs such as doxorubicin and cisplatin. 21 Studies have also concluded that siRNA can silence JAG1 and reverse taxane chemoresistance, and that JAG1 plays a dual role in cancer progression through its angiogenic function in tumor endothelial cells as well as through proliferation and chemoresistance. 22 According to the literature, as JAG1 acts as an oncogene, reducing its expression is of positive significance for patients with chemotherapy-resistant ovarian cancer.
DNM3 (Dynamin 3) encodes a member of the guanosine triphosphate (GTP) binding protein family that associates with microtubules and participates in vesicular transport. 23 The protein it encodes plays a role in the development of megakaryocytes. 24 DNM3 is a newly discovered tumor suppressor gene. Genome Wide Association Study (GWAS) has found that compared with normal tissues, the promoter methylation level in
cancer tissues is higher. 25 Gu et al revealed that overexpression of DNM3 may induce hepatocellular carcinoma cell apoptosis and inhibit tumor growth by activating the production of inducible nitric oxide synthase (iNOS) and the subsequent NO-ROS signaling pathway. 26 Zhang et al found that up-regulation of DNM3 reduces the proliferation and colony formation of hepatocellular carcinoma and induces arrest of cancer cells in G0/G1 phase. 23 Clinical trials have confirmed that patients with reduced expression of DNM3 in tumor tissues have a lower survival rate and a worse prognosis. 27 However, research on the relationship between DNM3 and malignancy is still insufficient, and the exact molecular mechanism is unclear.

The expression levels of the three genes we screened from the gene microarray datasets were verified in ovarian cancer cells and clinical samples based on public databases. IFI27 and JAG1 were higher in the platinum-resistant group than in the sensitive group, while DNM3 showed the opposite pattern. This conclusion was consistent with the qRT-PCR verification results. In the TCGA database, hierarchical analysis showed that IFI27, JAG1, and DNM3 expression levels were all related to chemoresistance. The survival curves of platinum-resistant patients suggested that survival time was significantly prolonged with reduced IFI27, while OS and PFS decreased in the DNM3 low-expression group. Binary logistic regression showed that the expression of IFI27 and DNM3, tumor residue, and recurrence status strongly affected the prognosis of platinum-resistant patients. The ROC analysis predicting the chemotherapy outcome revealed that IFI27 and JAG1 had certain diagnostic value for platinum sensitivity in ovarian cancer.
All in all, the three genes each have merit as molecular targets for the diagnosis of platinum resistance in ovarian cancer, and the results of this study are consistent with the conclusions of most of the above-mentioned references.
There are several shortcomings in this study. First, both the bioinformatics and clinical case data demonstrated that IFI27, JAG1, and DNM3 may be key candidate genes for platinum resistance in ovarian cancer, supported by a large number of references, but these results still need to be verified through functional experiments. Second, although the bioinformatics-based analysis of each single biomarker has certain predictive value, the predictive performance still needs to be further improved by combining it with other biomarkers.
Predicting chemoresistance in ovarian cancer is difficult. This study provides a partial theoretical and clinical basis for platinum resistance in ovarian cancer. In future clinical practice, drugs that inhibit IFI27 and JAG1 (or inhibit the JAG1/Notch signaling pathway) or activate DNM3 expression may enhance the sensitivity to platinum drugs and together promote the effectiveness of platinum therapy on ovarian cancer cells. Even though drug development faces numerous obstacles, we look forward to follow-up research on these 3 genes.
Ethics Approval and Consent to Participate
The ethical review board approval number is LW2021088. The research on human genetic resources in "Prediction of key candidate genes for platinum resistance in ovarian cancer" by Kaidi Guo was reviewed by the Ethics Committee of Guangxi Medical University Cancer Hospital, which considered that the study met the requirements of medical ethics.
Effect of the Rare Earth Element Lanthanum (La) on the Growth and Development of Citrus Rootstock Seedlings
Rare earth elements (REEs) can affect the growth and development of plants. However, few studies have been carried out on the effects of REEs on citrus seedlings. In this study, the growth parameters, toxicity symptoms, chlorophyll content, and La content of three citrus rootstocks are analyzed under different concentrations of La, a representative REE. The results show that the growth of citrus rootstock seedlings was stimulated at La ≤ 0.5 mmol·L−1 and inhibited at concentrations above 1 mmol·L−1. The chlorophyll and carotenoid contents of trifoliate orange (Poncirus trifoliata L. Raf.) and Ziyang Xiangcheng (C. junos Sieb. ex Tanaka) leaves of plants grown at low concentrations of La (≤1.5 mmol·L−1) were similar to those of the control but were significantly reduced at 4 mmol·L−1 La. Toxic symptoms gradually appeared with increasing La concentrations, with yellowed leaves and burst veins appearing at 4 mmol·L−1 La. The symptoms of toxicity were most severe in trifoliate orange, followed by Shatian Pomelo (Citrus grandis var. shatinyu Hort) and then Ziyang Xiangcheng. Moreover, in leaves, the Ca content was significantly negatively correlated with La content (p < 0.01). These results indicate that La has a hormesis effect on the growth of citrus rootstocks. Of the studied citrus seedlings, Ziyang Xiangcheng is the most resistant to La.
Introduction
Rare earth elements (REEs) are a homogenous group of 17 chemical elements in the periodic table, and their increasing use in industrial and agricultural practices has led to REEs being widely studied in recent years to better understand their environmental effects [1]. Increasing the level of REEs in the soil directly affects the growth and development of plants [2,3]. Numerous studies have revealed that REEs have a hormesis effect on the growth and development of plants [4][5][6][7]. An appropriate amount of REEs is beneficial to plants, promoting growth and development as well as improving photosynthetic capacity. However, high concentrations of REEs may have toxic effects on plants, mainly manifesting as slowed growth, wilting and yellowing leaves, and weakened physiological and biochemical indicators [4,5]. Ce(NH4)2(NO3)6 at a concentration of 50 mg·L−1 can significantly inhibit seed germination and root activity in soybeans [6]. At low concentrations, cerium (Ce) has been observed to promote the growth of scallions but is inhibitory at high concentrations, while the chromosome aberration rate increases with the concentration of Ce [7]. In addition, REEs also have a hormesis effect on the structure of the plasma membrane: low doses of REEs may improve the structural stability of the plasma membrane [8,9], while high doses may alter membrane protein structure and increase membrane permeability [10].
China is the country with the largest reserves of rare earth elements (REEs) in the world; the main features of these reserves are their extensive areas and large va-
Changes in the Growth Parameters of the Three Citrus Rootstocks under La Treatment
The growth parameters of the three citrus rootstocks differed according to the different La treatments used (Figure 1). In general, the plant height increment ratio of the three citrus rootstocks first increased and then decreased. For TO, when the La content was less than or equal to 2 mmol·L−1, the plant height increment ratio was not significantly different from that of the control group; however, it was significantly lower than that of the control (p < 0.05) when the La concentration was greater than 2 mmol·L−1. When the concentration of La was less than or equal to 0.5 mmol·L−1, the plant height increment ratio of ZYXC was almost the same as that of the CK group, but it decreased gradually and was significantly lower than that of the control (p < 0.05) when the La concentration was greater than 0.5 mmol·L−1. For SP, the plant height increment ratio first increased and then decreased with the increase in the La concentration; the maximum height increment ratio was 45.42%, observed for the 0.25 La treatment (Figure 1).

Figure 1. (a) Height increment ratio (HIR) and (b) biomass of the 3 citrus rootstocks. HIR was determined based on the plant height of labeled seedlings at the beginning and end of the treatment, and biomass was measured at the end of the treatment with various concentrations of La (CK = 0 mmol·L−1, 0.25 La = 0.25 mmol·L−1, 0.5 La = 0.5 mmol·L−1, 1 La = 1 mmol·L−1, 1.5 La = 1.5 mmol·L−1, 2 La = 2 mmol·L−1, 4 La = 4 mmol·L−1). An ANOVA was performed, and the data are the mean values ± standard errors (SEs) of three biological replicates. Identical letters indicate nonsignificant differences at the 5% probability level according to Duncan's test; different letters indicate statistically significant differences with p < 0.05. TO = trifoliate orange (P. trifoliata); ZYXC = Ziyang Xiangcheng (C. junos); SP = Shatian Pomelo (C. grandis).

The trends for the biomass of the three citrus rootstocks, which first increased and then decreased with the increase in La concentration, were similar to those of the height increment ratio. The maximum biomass values of TO, ZYXC, and SP, which were 1.73, 2.21, and 2.51 g, respectively, occurred under the 0.5 mmol·L−1 La treatment. Furthermore, the minimum biomass values (0.28, 0.39, and 0.44 g, respectively) occurred in the 4 La group.

Investigation of Phenotypic Symptoms and Toxicity Indexes in the High-La Treatment

In the treatment with the highest concentration of La (4 La), the citrus rootstocks showed obvious symptoms of toxicity. As shown in Figure 2, the leaves of the three citrus rootstocks in the 4 La group showed chlorosis, which is characterized by yellow veins; in more severe cases, whole leaves had yellowed, and the veins had burst. A comparison of the morphological characteristics of the roots of the three rootstocks in the 4 La treatment and control groups (Figure 2) showed that the roots of the plants under the high-concentration treatment were significantly shorter than those in the control group, and the number of lateral roots was significantly lower. This result reveals that root growth is significantly inhibited under high La concentrations.

Throughout the experiment, when the La concentration was lower than 1 mmol·L−1, the three citrus rootstocks grew well, showing similar growth to the control, and did not have any symptoms of damage. Therefore, we calculated the damage symptoms and toxicity indexes of the three citrus rootstocks in the 1 La, 1.5 La, 2 La, and 4 La treatment groups. Table 1 shows that in the 1 La treatment group, both TO and SP had Grade 1 toxicity symptoms, while ZYXC did not show any toxicity symptoms until the end of the trial. In the higher-concentration treatment (1.5 La), TO was the first to present Grade 2 toxicity symptoms, followed by SP and ZYXC. Similarly, in the 2 La and 4 La treatment groups, TO was the first rootstock to show toxicity symptoms (Grade 4), followed by SP and ZYXC.

The toxicity index further revealed the difference in the La tolerance of the three citrus rootstocks. In the 1 La treatment group, the toxicity index of ZYXC was 0, but those of TO and SP were 2.08% and 2.50%, respectively. In the 4 La treatment group, the average toxicity indexes of the TO, ZYXC, and SP rootstocks were 50%, 38.33%, and 42.92%, respectively. As the tolerance of citrus rootstocks to La is negatively correlated with the toxicity index, it can be deduced that TO has the highest sensitivity to La, followed by SP, while ZYXC is the most resistant rootstock.
Changes in Chlorophyll and Carotenoid Contents of the Three Citrus Rootstocks' Leaves under La Treatment
The leaves of the three citrus rootstocks were collected, and their photosynthetic pigment contents were measured. The chlorophyll a content of ZYXC in the 4 La treatment group was significantly lower than that of the control group (Table 2). At La ≥ 2 mmol·L−1, the chlorophyll a content of TO was significantly lower than that of the control group. For SP, at La ≥ 0.5 mmol·L−1, the chlorophyll a content was significantly lower than that of the control group (Table 2). The chlorophyll b content of ZYXC and TO significantly decreased under the 4 La treatment, and the chlorophyll b content of SP was significantly lower than that of the control group at La ≥ 0.5 mmol·L−1. The changing trend of chlorophyll a + b in the three citrus rootstocks was similar to that of chlorophyll a, while the changing trend of carotenoids was consistent with that of chlorophyll b (Table 2).

Table 2. Effects of La on the chlorophyll and carotenoid contents in the leaves of the 3 citrus rootstocks.
Changes in MDA Content and Antioxidant Enzyme Activities of the Three Citrus Rootstocks' Leaves under La Treatment
The malondialdehyde (MDA) contents of the leaves of the three citrus rootstocks first decreased and then increased with increasing concentrations of La, with the lowest levels observed for the 1 La treatment (Table 3). These values were significantly higher than the control when the concentration of La was 4 mmol·L−1. The SOD activity of ZYXC and TO first increased and then decreased with the increase in La concentration, while the SOD activity of SP was reduced to varying degrees under the La treatments (Table 3). The SOD activity of ZYXC under different concentrations of La showed no significant difference from that of the control group, with a maximum of 211.99 U·g−1·min−1 FW under the 0.5 La treatment. The SOD activity of TO and SP was significantly lower under treatment with concentrations ≥0.5 mmol·L−1 La than that of the control group. The CAT activity of the leaves of the three rootstocks first increased and then decreased with the increase in La concentration and reached a maximum at 0.5 mmol·L−1 La (Table 3).

Table 3. Effects of La on the antioxidant enzyme activity and MDA content in the leaves of citrus rootstock seedlings.
La Absorption and Transport in Citrus Rootstocks
The distribution of La in the aboveground and belowground parts of the three citrus rootstocks under different treatments is shown in Figure 3. The La contents in both the shoots and roots of the three rootstocks increased with the increasing La concentration. Additionally, the content of La in the roots was higher than that in the shoots for each rootstock.
The La migration coefficients of the three rootstocks under the different La treatments are shown in Table 4.
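The paper does not state the formula it used for the migration coefficient; in trace-element studies it is commonly the translocation factor, i.e., the ratio of the shoot concentration to the root concentration. A minimal sketch under that assumption:

```python
def migration_coefficient(shoot_la, root_la):
    """Translocation factor: shoot La concentration divided by root La
    concentration (assumed definition; the paper does not give its formula)."""
    if root_la <= 0:
        raise ValueError("root concentration must be positive")
    return shoot_la / root_la

# Hypothetical concentrations (same units for shoot and root)
tf = migration_coefficient(2.0, 8.0)  # 0.25
```

A coefficient below 1, as implied by the higher root than shoot La contents reported above, indicates that most of the absorbed La is retained in the roots.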
Effect of La on the Calcium Content in the Citrus Leaves
Under the different La treatments, the Ca content in the leaves of the three citrus rootstocks generally decreased (Table 5). The Spearman correlation coefficient between the Ca and La contents in the three citrus rootstocks' leaves was −0.48, indicating that the Ca content in the leaves was significantly negatively correlated with the La content (Figure 4).
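A Spearman coefficient such as the −0.48 reported above is simply the Pearson correlation computed on rank-transformed values. A minimal pure-Python version (illustrative only, not the study's code; ties receive average ranks):

```python
def _ranks(values):
    """Average ranks (1-based), with tied values sharing the mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j to the end of the run of tied values
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of the 1-based positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den
```

A perfectly monotone decreasing relationship gives −1; the study's −0.48 indicates a moderate negative monotone association between leaf La and Ca contents.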
Effect of La Stress on the Growth of Three Citrus Rootstocks
Previous studies have shown that REEs can significantly regulate the growth and development of plants [21][22][23][24][25]. REEs enhance plant biomass by stimulating the uptake of mineral nutrients and promoting the synthesis of chlorophyll [21]. La increases crop yield by modulating the activity of RuBP carboxylase [22]. However, many studies have shown that the promotion of plant growth and development occurs under low concentrations of REEs, and an inhibitory effect has been observed under high concentrations [4,7,23]. An appropriate concentration of La 3+ had a promoting effect on the germination rate and germination potential of Salvia miltiorrhiza seeds, and the promotion effect was highest at 30 mg·L −1 La 3+ . Meanwhile, the soluble sugar and soluble protein contents and the activity of the antioxidant enzyme system were improved, showing increases. In contrast, La 3+ could inhibit the growth of the plants at high concentrations [24]. Circulation of the appropriate amount of REEs during plant growth has been previously observed to promote plant growth and development [25]. Similarly, we found that the three rootstocks did not show any symptoms of damage and that the height increment ratio and biomass of the plants increased, to some extent, when the La concentration was less than or equal to 0.5 mmol·L −1 . However, when the La content exceeded 0.5 mmol·L −1 , the height increment ratio and biomass of the three rootstocks were reduced and significantly lower than those of the control group. Moreover, the La migration coefficients of ZYXC and SP first decreased and then increased with increasing La concentrations. These results indicate that REEs have a hormesis effect on plant growth and development.
A large number of studies have shown that REEs have a hormesis effect on the growth and development of plants [26,27]. Appropriate concentrations of La (5-10 mg·L−1) and Ce (5-20 mg·L−1) can effectively increase the antioxidant enzyme activity of pea seedlings, reduce the MDA content, and significantly reduce the toxic effect of Cu stress (p < 0.05). However, high concentrations of these two REEs aggravate the damage to the antioxidant enzyme system of pea seedlings caused by Cu, showing a collaborative toxic effect [26]. Ce(NH4)2(NO3)6 solutions promoted root growth and increased the chlorophyll content at 1 and 10 mg·L−1, while 30 and 50 mg·L−1 solutions were inhibitory [27]. We found that none of the three rootstocks showed toxicity symptoms under a low concentration of La (0.5 mmol·L−1), while all three showed varying degrees of toxicity symptoms under high concentrations of La (≥1 mmol·L−1). During the test, the earliest toxicity symptoms and the highest average toxicity index were observed in TO, while ZYXC was the last rootstock to show toxicity symptoms and also had the lowest average toxicity index. These findings indicate that of the studied rootstocks, TO is the most sensitive to La, followed by SP, and ZYXC is the most resistant.
The chloroplast is important for green plants, being the site where photosynthesis occurs. Various photosynthetic pigments that can absorb, transmit, and transform light energy are distributed on the thylakoid membrane of the chloroplast, including the majority of chlorophyll a, which collects and transmits light energy, all of chlorophyll b, and the carotenoids; a few special chlorophyll a molecules directly contribute to the conversion of light energy into chemical energy [28]. A large number of studies have demonstrated that REEs can affect the chlorophyll content of plants. Treatment of the green alga Chlorella with low concentrations of the rare earth element Eu increased the chlorophyll a and b contents to levels significantly higher than those of the control, whereas a high concentration of Eu reduced chlorophyll and inhibited Chlorella growth [29]. The chlorophyll content and photosynthetic rate of Lonicera japonica increased when treated with low concentrations of La and decreased when the concentration of La was above 30 mg·L−1 [30]. Our study found that under low concentrations of La (<2 mmol·L−1), the contents of chlorophyll a, chlorophyll b, chlorophyll a + b, and carotenoids in the leaves of the three citrus rootstocks were similar to those of the control group. However, the chlorophyll and carotenoid contents of the three rootstocks were all significantly reduced under the 4 La treatment, which is consistent with the toxic symptoms observed in Figure 2.
Effects of La on Physiological and Biochemical Indexes of Three Citrus Rootstocks
MDA is a lipid peroxidation product of cell membranes, and its content can, to an extent, reflect the degree of oxidative damage to plants [31]. Previous studies have shown that low concentrations of La can effectively reduce the contents of MDA and H 2 O 2 in Lonicera japonica leaves, and high concentrations of La will increase the content of MDA and H 2 O 2 in the leaves. Similar results were obtained in research on horseradish and Dendrobium hancockii [32]. The results of our study showed that the MDA contents of the three citrus rootstocks' leaves first decreased and then increased with increasing concentrations of La, but they were significantly higher than the control at a high concentration of 4 mmol·L −1 .
During the growth and development of plants, a large number of free radicals are produced that peroxidize membrane lipids, destroy the membrane system, and even cause cell death [33]. SOD and CAT are the main enzymes of the plant antioxidant system that remove reactive oxygen species [23]. REEs also have a significant impact on the active oxygen metabolism system of plants. In a study of Arabidopsis, it was found that a low concentration of La 3+ can enhance the enzyme activity of SOD [34]. A 20-30 mg·L −1 La 3+ treatment significantly increased the SOD and CAT enzyme activities of longan leaves while improving the efficiency of the AsA-GSH cycle and the activity of related metabolic enzymes, thereby improving the La resistance of longan [35]. Our study found that the SOD activity of ZYXC and TO first increased and then decreased with the increase in La concentration. The SOD activity of ZYXC under different concentrations of La was not significantly different from that of the control group. The SOD activity of TO and SP under La treatment was significantly lower than that of the control group. The CAT activity of the three rootstocks' leaves first increased and then decreased with the increase in La concentration, reaching a maximum at 0.5 mmol·L −1 La.
Effect of La on the Ca Content of the Three Citrus Rootstocks
Previous studies have found that the REE content was highest in the roots of plants, accounting for approximately 80%, followed by the stems, leaves, flowers, fruits, and seeds [36,37]. The distribution of five REEs (La, Ce, Nd, Y, Gd) and single REEs in crops has been previously studied, and the absorption capacity of crops for REEs has been found to be positively correlated with the amount available in the soil. The order of the distribution of the REE content in each part of the plant is generally root > leaf > stem > fruit [38]. La is referred to as super calcium. La 3+ often occupies the position of Ca 2+ and binds to biomacromolecules, even replacing bound Ca 2+ , which interferes with the normal physiological functioning of Ca 2+ [39,40]. Additionally, La 3+ binds to the site of Ca 2+ on the plasma membrane, which affects the absorption and transportation of Ca 2+ ; therefore, La 3+ is considered an inhibitor of Ca 2+ channels [41]. In this study, under different La treatments, the La content in the roots of the three citrus rootstock seedlings was found to be significantly higher than that in the shoots, which is consistent with the results of previous studies. We also found that the Ca content in the leaves of the rootstocks was significantly negatively correlated with the La content (p < 0.01). The correlation coefficient was −0.48, which indicates that high concentrations of La would affect the absorption of calcium by plants. These results indicate that La has a hormesis effect on the growth of citrus rootstocks. With the increase in La concentration, the order of La tolerance of the three citrus rootstocks is Ziyang Xiangcheng > Shatian Pomelo > trifoliate orange. Hence, our results also show that La treatment affects the absorption of calcium in citrus rootstocks.
Plant Materials and Treatments
Three main types of citrus rootstocks in China were used in this study: trifoliate orange (Poncirus trifoliata L. Raf., TO), Ziyang Xiangcheng (C. junos Sieb. ex Tanaka, ZYXC), and Shatian Pomelo (Citrus grandis var. shatinyu Hort, SP). The culture of rootstock seedlings can be divided into three stages. In the first, germination, stage, plump seeds of consistent size were selected and soaked in distilled water for 24 h, and then the seed coats were peeled off. After soaking in 0.1% potassium permanganate solution for 3-5 min, the seeds were repeatedly rinsed with distilled water and placed on a floating plate in a plastic box filled with distilled water. The seeds were cultured in an incubator in the dark for 3 to 5 days. The second stage was the cultivation period of rootstock seedlings. From rooting until 3 to 5 true leaves had grown, the plants were planted in a sandy medium (quartz sand:perlite = 1:1) and then placed in the culture room under 5000 lx light intensity with a 12 h photoperiod and a day/night temperature of 25/18 °C. Distilled water (50 mL) was applied once every 2 to 3 days. The third stage was the nutrient culture stage. After 3 to 5 true leaves had sprouted from the rootstock seedlings, the culture solution was gradually changed from distilled water to modified Hoagland solution [42], and the rest of the culture conditions remained unchanged.
After two months, the seedlings were treated with La(NO₃)₃·6H₂O solution at concentrations of 0 (CK), 0.25 (0.25 La), 0.5 (0.5 La), 1 (1 La), 1.5 (1.5 La), 2 (2 La), and 4 mmol·L⁻¹ (4 La). Three replicates were used, with 10 seedlings per variety and replicate. Each labeled pot was watered with the corresponding concentration of treatment solution (400 mL) every three days. The experiment was terminated when more than 50% of the rootstocks had died in the highest-concentration treatment.
Determination of the Growth Parameters
Fifteen seedlings per treatment were selected and marked. The plant height of the labeled seedlings was measured at the beginning and end of the treatment to determine the height increment ratio. At the end of the experiment, whole seedlings were collected from each treatment, and the biomass of the samples was measured with an electronic balance.
Height increment ratio = (measured value after treatment − measured value before treatment) / measured value before treatment × 100%
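The ratio above can be computed directly. A minimal sketch (function name and sample heights are illustrative, not from the study):

```python
def height_increment_ratio(height_before: float, height_after: float) -> float:
    """Percent height gain over the treatment period: (after - before) / before * 100."""
    if height_before <= 0:
        raise ValueError("pre-treatment height must be positive")
    return (height_after - height_before) / height_before * 100.0

# Illustrative values only: a seedling growing from 12.0 cm to 15.0 cm
print(height_increment_ratio(12.0, 15.0))  # 25.0
```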
Investigation of Phenotypic Symptoms and Toxicity Indexes
Changes in the morphology and color of the rootstock leaves were observed daily, and the date of symptom onset, the corresponding toxicity grade, and the symptoms displayed were recorded. Toxicity grades were assigned according to the salt damage grading standard and the morphological changes in the citrus rootstocks under La stress [43]. Grade 0: no visible difference between the leaves of the treatment group and those of the control group, with no damage symptoms. Grade 1 (mild damage): yellow leaves account for 10-25% of all leaves on the plant. Grade 2 (moderate toxicity): yellow leaves cover 25-50% of the plant. Grade 3 (severe toxicity): burst veins or symptoms of dehydration are observed, and yellow leaves cover 50-75% of the whole plant. Grade 4 (extremely severe poisoning): more than 75% of the leaves have yellowed, the leaves are dry, and the branches are dead.
Toxicity index (D) = Σ(toxicity grade × number of plants in that grade) / (total number of plants investigated × highest toxicity grade) × 100%
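The toxicity index weights each grade by its plant count and normalizes by the worst possible outcome. A minimal sketch (the tally values below are hypothetical):

```python
def toxicity_index(counts_by_grade, highest_grade=4):
    """Toxicity index D (%) from a {grade: number of plants} tally."""
    total = sum(counts_by_grade.values())
    if total == 0:
        raise ValueError("no plants investigated")
    weighted = sum(grade * n for grade, n in counts_by_grade.items())
    return weighted / (total * highest_grade) * 100.0

# Hypothetical tally of 10 plants: 4 at grade 0, 3 at grade 1, 2 at grade 2, 1 at grade 3
print(toxicity_index({0: 4, 1: 3, 2: 2, 3: 1}))  # 25.0
```

D ranges from 0% (all plants undamaged) to 100% (all plants at the highest toxicity grade).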
Determination of Photosynthetic Pigment Content in Leaves of Citrus Rootstock
Our method was based on a measurement method used in a previous study [44]. First, leaves without the main veins were cut into small pieces, accurately weighed (0.1 g), and extracted with 10 mL of 96% ethanol. Second, the mixture was kept in the dark for 24 h, mixed well, and the supernatant was collected. Finally, the OD values at 665, 649, and 470 nm were read using a TU-9101 UV spectrophotometer and used to calculate the contents of chlorophyll a, b, and a + b and carotenoids.
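The pigment calculation from the three absorbances can be sketched as below. The coefficients are the widely used Lichtenthaler equations for 96% ethanol extracts; the exact constants of the cited method [44] may differ, so treat them as an assumption:

```python
def pigments_mg_per_g_fw(a665, a649, a470, extract_ml=10.0, sample_g=0.1):
    """Pigment contents (mg per g fresh weight) from absorbances of a 96% ethanol extract.

    Coefficients follow the commonly used Lichtenthaler equations for 96% ethanol
    (assumed here; the constants in ref. [44] may differ slightly).
    """
    chl_a = 13.95 * a665 - 6.88 * a649                        # mg/L in the extract
    chl_b = 24.96 * a649 - 7.32 * a665                        # mg/L in the extract
    car = (1000 * a470 - 2.05 * chl_a - 114.8 * chl_b) / 245  # carotenoids, mg/L
    per_g = (extract_ml / 1000.0) / sample_g                  # L of extract per g of tissue
    return {
        "chlorophyll_a": chl_a * per_g,
        "chlorophyll_b": chl_b * per_g,
        "chlorophyll_total": (chl_a + chl_b) * per_g,
        "carotenoids": car * per_g,
    }
```

With the study's 10 mL extract and 0.1 g sample, the per-gram conversion factor is 0.1 L/g.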
Determination of Physiological and Biochemical Indicators of Citrus Rootstock Leaves
On the 7th day after treatment, the last 3-5 fully expanded leaves of each treatment were collected, immediately frozen in liquid nitrogen, and stored in a −80 °C freezer until testing. The content of malondialdehyde (MDA) and the activities of superoxide dismutase (SOD) and catalase (CAT) were measured using kits provided by Suzhou Comin Biotechnology Co., Ltd. (Suzhou, China) (MDA: MDA-2-Y; SOD: SOD-2-Y; CAT: CAT-2-W), following the manufacturer's instructions included with each kit.
Determination of La and Ca Contents
At the end of the experiment, the shoots and roots of each treatment were harvested separately, with three replicates. The shoots and roots were digested using the electrothermal plate method [45], and the La content was determined using an Agilent 5110 inductively coupled plasma optical emission spectrometer (ICP-OES; Agilent, Santa Clara, CA, USA). Standard curve: from a 100 µg/mL La stock solution, standards of 1, 2, 5, 8, and 10 µg/mL were prepared by dilution, using HNO₃ as the medium. Instrument settings: RF power 1000 W, carrier gas flow 15 L/min, auxiliary gas flow 0.2 L/min, nebulizer flow 0.8 L/min, peristaltic pump speed 1.5 mL/min, axial viewing, three replicate readings. The Ca content in the plant leaves was determined using an AA-800 atomic absorption spectrometer (PerkinElmer, Waltham, MA, USA) [46].
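The standard series is a straightforward C₁V₁ = C₂V₂ dilution from the 100 µg/mL stock. A minimal sketch (the 50 mL final volume per standard is an assumption, not stated in the text):

```python
def stock_volume_ml(stock_ug_ml, target_ug_ml, final_ml):
    """Stock volume (mL) needed for one standard, via C1*V1 = C2*V2."""
    if not 0 <= target_ug_ml <= stock_ug_ml:
        raise ValueError("target must be between 0 and the stock concentration")
    return target_ug_ml * final_ml / stock_ug_ml

# Assumed 50 mL final volume per standard (illustrative)
for c in (1, 2, 5, 8, 10):
    v = stock_volume_ml(100.0, c, 50.0)
    print(f"{c} ug/mL standard: {v:.1f} mL stock, dilute to 50 mL with HNO3 medium")
```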
La migration coefficient = La content in shoots / (La content in shoots + La content in roots) × 100%
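The migration coefficient is the shoot share of the whole-plant La pool. A minimal sketch (sample values are hypothetical):

```python
def la_migration_coefficient(shoot_la, root_la):
    """Percent of total plant La found in the shoots."""
    total = shoot_la + root_la
    if total <= 0:
        raise ValueError("no La measured in the plant")
    return shoot_la / total * 100.0

# Hypothetical contents (same units for shoots and roots, e.g. ug per plant)
print(la_migration_coefficient(2.0, 8.0))  # 20.0
```

A low coefficient, as observed here, reflects La being retained mostly in the roots.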
Conclusions
(1) La has a hormesis effect on the growth of citrus rootstocks. When the La concentration is less than or equal to 0.5 mmol·L⁻¹, the growth of citrus rootstock seedlings is stimulated. However, when the La concentration is higher than 1 mmol·L⁻¹, citrus rootstock growth is inhibited. (2) TO is the rootstock most sensitive to La, followed by SP. ZYXC is the most resistant rootstock.
Evaluation of the Effectiveness and Implementation of an Adapted Evidence-Based Mammography Intervention for African American Women
Breast cancer mortality disparities continue, particularly for uninsured and minority women. A number of effective evidence-based interventions (EBIs) exist for addressing barriers to mammography screening; however, their uptake and use in the community have been limited. Few cancer-specific studies have evaluated adapted EBIs in new contexts, and fewer still have considered implementation. This study sought to (1) evaluate the effectiveness of an adapted mammography EBI in improving appointment keeping in African American women and (2) describe processes of implementation in a new practice setting. We used a type 1 hybrid design, testing effectiveness with a quasi-experimental design while also gathering information on implementation. Logistic regression and intent-to-treat analysis were used to evaluate mammography appointment attendance. The no-show rate was 44% (comparison) versus 19% (intervention). The adjusted odds ratio for a woman in the intervention group attending her appointment was 3.88 (p < 0.001); in the intent-to-treat analysis, the adjusted odds ratio was 2.31 (p < 0.05). Adapted EBI effectiveness was 3.88 (adjusted OR) versus 2.10 (OR) for the original program, indicating an enhanced program effect. A number of implementation barriers and facilitators were identified. Our findings support previous studies noting that sequentially measuring EBI efficacy and effectiveness, followed by implementation, may miss important contextual information.
Background
Breast cancer is the most common cancer in the United States and is the second leading cause of cancer mortality in women [1,2], with lower incidence in African American women but higher stage at diagnosis and greater mortality as compared to non-Hispanic white women [2,3]. Enhancing guideline-adherent mammography routines among these women may be important for addressing this disparity [3]. While a number of effective evidence-based interventions (EBIs) exist for addressing barriers to mammography screening, like other EBIs, their uptake and use in community settings have been limited [4][5][6][7]. Reasons for the lack of uptake include cancer planners' anticipation of a misfit between interventions tested in controlled efficacy trials and the needs of their settings [8][9][10]. Both the perception of lack of fit and the possibility of real deficits in an EBI's fit for a new community can be addressed by judicious and systematic adaptation of EBIs by research-practice partnerships and consultation with the community to improve fit [8][9][10][11][12][13][14][15].
Planners face the challenge of striking a balance between program fidelity, that is, implementation of an EBI as intended, and adaptation to the needs of the adopting site [15]. Some efforts to promote the use of evidence-based programs suggest that the primary concern should be fidelity rather than adaptation because of the lack of data suggesting that adaptation improves program effectiveness [16]. However, in a review of over 500 studies demonstrating that program implementation affected outcomes of prevention programs, Durlak and DuPre point out that while higher levels of fidelity were closely tied to improved program outcomes, levels of fidelity were well below 100% across interventions [17]. Therefore, some adaptation occurred and might have been seen as necessary for program implementation. Elliott and Mihalic have outlined four ways that programs are typically adapted: adding or deleting program components; changing program components or content; changing the process or intensity of implementation; and making cultural modifications [16]. Barrera Jr. and colleagues found that behavioral interventions adapted for a new cultural group were more effective than usual care and other control conditions, and that most planners agreed that adaptation begins with data collection, to inform the need for adaptation, and ends with testing in the new setting [18].
Best practice is to evaluate any EBI used in a new setting, particularly one that has been adapted, since adaptation may harm an EBI's effective elements (i.e., core elements) [11]. Besides this need for impact evaluation, there is a need to evaluate the feasibility and fidelity of intervention implementation in the new population and setting [10]. However, few cancer-specific studies have evaluated the effectiveness of adapted evidence-based interventions in new contexts, and fewer still have evaluated implementation in real-world contexts specifically [19,20]. Of the few studies that have evaluated implementation of cancer-specific EBIs, facilitators of implementation and fidelity included the use and enthusiasm of program champions, academic detailing, training (a higher degree of control), and team involvement/communication. Barriers included lack of attendance at training sessions, incomplete exposure to EBI tools/components, and competing demands at the practice level [20]. The authors could find no published studies discussing real-world implementation of mammography EBIs in particular.
Therefore, the objectives of this study were to (1) evaluate the effectiveness of an adapted mammography EBI in improving appointment keeping for mammography in African American women and (2) describe the processes of implementing the EBI in a practice setting. The study results test the hypothesis that the effectiveness of the original EBI would be retained after adaptation and provide lessons learned for future intervention implementation in the real-world setting of mammography screening.
Evidence-Based Intervention.
For this study, we adapted the intervention "Breast Cancer Screening among Nonadherent Women," originally developed by Duke University and Kaiser Foundation Health Plan [21]. The intervention is a tailored telephone counseling reminder based on the Transtheoretical Model of Change [22]. The program assessed a woman's stage of readiness to attend her appointment through a series of survey questions and counseled her through barriers to attendance. Following the Transtheoretical Model, the five stages were as follows: precontemplation, no intention to attend appointment; contemplation, intends to attend appointment; preparation, intends to attend appointment and is making preparations for taking action; action, has attended the appointment; maintenance, keeps attending appointments [22]. In the original trial, women who were off schedule with screening were more than twice as likely to get a mammogram if they received the telephone counseling (OR = 2.10).
We adapted the intervention using IM Adapt, a modified version of Intervention Mapping (for full details of the intervention adaptation, please see Highfield et al., also in this issue), in the following ways: (1) performing a needs assessment among local African American women to identify salient barriers and include them in the barrier scripts; (2) developing a foundational communication process based on active listening to make it easier for the patient navigator to hold a real-world rather than research conversation (when not dealing with a specific barrier) and to develop rapport with the patient; (3) changing the assessment of stage of readiness to include only two categories (precontemplation/contemplation or preparation/action) and then matching the script to whether the woman intended to keep her appointment or was unsure; (4) pretesting the changes with local women to assess acceptability and fine-tune scripts; and (5) developing an implementation protocol and training the navigator [14]. The adapted intervention aimed to increase scheduled mobile mammography screening appointment attendance rates among low-income African American women receiving care from a mobile mammography provider, the largest nonprofit breast cancer screening organization in the greater Houston area. The systematic and collaborative adaptation process of the original EBI for use in local practice is reported elsewhere (see Highfield et al., this issue).
Study Design.
We used the type 1 hybrid design to test the intervention's effectiveness and to gather information on implementation [23,24]. This type of design focuses on effectiveness evaluation while also answering questions such as "what are possible facilitators and barriers to real-world implementation of an EBI?" and "what potential modifications could be made to maximize implementation?". We originally planned a randomized controlled trial but found that the navigator could not alternate between usual care and the adapted intervention. Therefore, we changed to a quasi-experimental, sequential recruitment design in which we assigned contacted women to usual care or the adapted intervention in sequential groups of 50 patients (see patient enrollment and study limitations for further detail). The time period for enrollment and collection of patient data was predetermined based on funding and the availability of the clinical partner and ran from February to December 2012. We sought to contact as many patients as possible within this window. This study operated under the approval of the St. Luke's Episcopal Hospital Institutional Review Board.
Study Setting.
A local mobile mammography partner served as the site for implementation of the intervention (including recruitment and data collection). In 2011, the organization provided 33,784 screening and diagnostic procedures for those able to pay; 19,369 screening and diagnostic procedures at no charge to low-income, uninsured women; and 8,857 free patient navigation services to patients without insurance. Mobile screening mammography services are provided to over 7,000 women a year, covering a 15-county region centered on Harris County, TX. Services are provided in a variety of settings, including schools, worksites, federally qualified health centers, churches, and other community settings. The mobile mammography provider in this study serves a diverse population including Caucasians, Hispanics, Asians, African Americans, and immigrant populations. Approximately 20% of the low-income, uninsured patient population at the time of the study was African American (2,200 women). The baseline expected no-show rate for uninsured, low-income African American women was 38% (unpublished data).
Patient Enrollment.
Inclusion criteria were as follows: African American, female, age 35-64, uninsured, income ≤200% of the federal poverty level (FPL), and an upcoming appointment for a mobile screening mammogram at a program partner site. We identified eligible patients from the electronic patient scheduling records. The patient navigator made up to three attempts to reach each eligible patient, calling at different times of day and on weekends for those not reached on the initial attempt. Patients who were reached received one phone call from the patient navigator to deliver the intervention; we expected intervention calls to take 6-10 minutes on average. Reached patients were initially enrolled into each group by randomization (using a randomized controlled trial (RCT) design) from February to April 2012; however, we ran into implementation issues with the patient navigator (see Section 3), so we adjusted to a sequential enrollment procedure from May to December 2012. The navigator called patients in the comparison group and provided them with a standard appointment reminder, which included the date, time, and location of their upcoming appointment. If a patient did not answer the phone, the navigator left a voicemail message containing the reminder. For patients in the intervention group, the navigator read an oral consent over the phone and, after consent, asked the following staging question: "How confident are you that you will keep your upcoming appointment?" The navigator then counseled as needed for any barriers uncovered in the phone call, per the intervention protocol. No blinding was used in this study.
Measures and Data Tracking.

The primary outcome of appointment keeping was ascertained from mobile mammography clinical records (nonattendance = 0, attendance = 1). In addition, we collected information about sites, age of patient, number of days between reminder call and appointment, and study stage (i.e., design, coded 0/1 for randomized controlled trial versus quasi-experimental). Appointments were scheduled at 41 different sites across 8 counties. The sites were divided into 2 categories: community sites (local nonprofit organizations or government agencies, community initiatives, schools, health fairs, or other community organizations) and hospital/clinic sites (local hospital, federally qualified health center, or charity clinic). Age and days to appointment were categorized as follows: 35-39, 40-49, and 50-64 years old, and 0 days, 1 day, 2 days, 3-4 days, and 5 or more days between phone call and appointment.
We evaluated the secondary outcome of implementation fidelity by monitoring intervention phone calls and comparing them to the protocol, making site visits to the mammography site, and meeting with implementation staff (researchers and practitioners). A series of three phone calls made by the patient navigator was recorded at the beginning of implementation. These recordings were evaluated by the research team for compliance in asking the staging question and in using active listening and scripted responses to patient-identified barriers during the phone call. Following review, the navigator received feedback on the staging question and active listening. During implementation, phone calls were periodically monitored on-site by a member of the research team for the same compliance issues. In addition, we made postintervention follow-up phone calls to a randomly selected subset of intervention patients (n = 50) to assess their perception of the EBI calls and any systems barriers encountered (see Topic-List).
Topic-List for Follow-Up Calls and Implementation Evaluation
(1) Is there anything you want to tell me about your mammogram appointment so we can make the experience better?
(2) Do you remember talking to anyone from [mobile mammography program name] before your mammogram appointment? Do you remember who?
(3) What was the conversation about?
(4) What stands out about your experience talking to (navigator name)?

We tracked all data for the pilot either in an Access database or on paper data collection forms. The database included fields for a unique identifier for each patient; the date and time of attempted call(s), with the outcome of each (reached, not reached, left message, bad number); barriers; and systems barriers encountered during the session, such as the patient not being aware they needed a doctor's order to receive a mammogram. We also included an open text field for the patient navigator to record notes during the call. Data available from the mammography partner's data system included age, sponsored status (lack of insurance and ≤200% FPL), site of screening, date and time of appointment, and contact information including phone number. The research assistant cleaned the data by comparing the Access database with the paper forms and existing records from the mammography partner. Any inconsistencies between the database and paper forms were investigated with the site and patient navigator for clarification. Data from both sources were combined into one Access database and exported to Stata for analysis.
Data Analysis.
We used Stata (Stata Corp., College Station, TX, USA) for statistical analysis. We calculated descriptive statistics and then conducted logistic regression analysis to compare attendance in the intervention group with that in the comparison group. Chi-square tests (or Fisher's exact tests when cell sizes were less than five) were used to evaluate group differences in potential confounding variables, including age, days between reminder call and appointment, mammography site (community versus clinical setting), appointment time, and study stage (i.e., design change). Both unadjusted and adjusted logistic regression models were fitted to determine the intervention's effectiveness in improving mammography attendance. Besides group (intervention; control), factors in the adjusted model included mammography site, age of patient, number of days between reminder call and appointment, and the navigator making the reminder calls. Study stage (design) was not included in the model as it was highly collinear with navigator, since only one navigator made calls during each phase. Following the basic analysis, we further evaluated the effectiveness of the EBI using an intent-to-treat analysis [25][26][27][28], which considered the outcome (appointment attendance) for all women based on their group designation at the time of the phone call attempt (intervention or control), not just those who were reached and treated per protocol by the patient navigator. Intent-to-treat analysis ignores deviations in protocol, noncompliance, and anything that may happen after group assignment [25][26][27][28]. We conducted a power analysis using a two-tailed two-sample test of proportions (Fisher's exact test) with α = 0.05, adjusted for unequal sample sizes, to evaluate the ability to detect a difference between the groups.
Figure 1 shows the CONSORT/TREND diagram with the total number of enrolled patients per study stage (randomized and sequential enrollment stages): those assigned, allocated, exposed to the intervention, followed up, and analyzed, both in the basic effectiveness analysis (n = 151) and in the intent-to-treat analysis (n = 198). The intervention and comparison groups were similar with regard to age and number of days between reminder call and appointment, as shown in Table 1. The average and median age for patients in both groups was 51 years (range: 36-64). The average and median number of days between reminder call and patient appointment was 3 days for both groups (range: 0-13 days). No effect was observed for the study stage (design change) (χ² = 0.292). The groups differed in the type of mammography site, with women in the intervention group screened in community settings more frequently than the control group in both the basic and intent-to-treat analyses (see Table 1). The no-show rate was 44% for patients in the comparison group and 19% for patients in the intervention group, meaning that the EBI in this study led to a 57% reduction in the no-show rate in the basic analysis (calculated as percent change).
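The headline effect sizes can be reproduced from the attendance proportions alone. A sketch using only the published no-show rates (the exact group counts, not given here, would change the rounding slightly):

```python
def odds(p):
    """Odds corresponding to a probability p."""
    return p / (1.0 - p)

noshow_comparison, noshow_intervention = 0.44, 0.19
attend_comparison = 1.0 - noshow_comparison      # 0.56
attend_intervention = 1.0 - noshow_intervention  # 0.81

# Relative reduction in no-shows, reported in the text as ~57%
pct_reduction = (noshow_comparison - noshow_intervention) / noshow_comparison * 100

# Unadjusted odds ratio of attendance; the paper reports 3.38 from the raw counts,
# so the value from rounded rates lands close to but not exactly on that figure
or_unadjusted = odds(attend_intervention) / odds(attend_comparison)

print(f"no-show reduction: {pct_reduction:.0f}%")
print(f"unadjusted OR from rounded rates: {or_unadjusted:.2f}")
```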
Effectiveness Results.

The unadjusted and adjusted results are presented in Table 2. In the basic analysis, the unadjusted odds of a woman in the intervention group attending her appointment were 3.38 times higher than for a woman in the control group (p < 0.001). The adjusted odds ratio for a woman in the intervention group attending her appointment was 3.88 as compared to the control group (p < 0.001). No effect was found for the change in study design. In the intent-to-treat analysis, the unadjusted odds ratio for a woman in the intervention group attending her appointment was 1.84 (p < 0.05), and the adjusted odds ratio was 2.31 as compared with the control group (p < 0.05). With the no-show rate of 44% observed in the comparison group, using a two-tailed test and α = 0.05, there was 87% power to detect a change in the no-show rate to 19% in the intervention group in the basic analysis.
Implementation Results.
We encountered a number of systems barriers to implementation. These included the following: confusion about responsibility for implementation of usual care reminder calls; lack of clear communication about the prerequisites of a doctor's order and clinical exam prior to screening; and inconsistent notification about costs associated with screening.
Fourteen out of 96 (15%) patients in the intervention group reported encountering systems barriers, including the fact that they were unaware of their upcoming appointments, unaware of the need for a doctor's order to obtain a mammogram, and unaware of the out-of-pocket cost of the mammogram. Additionally, some sites reported issues with the mobile units going to the wrong location, sites being cancelled with short notice due to mechanical issues (mobile unit broke down or mammography machine needed service), and unclear communication about scheduling procedures, such as how many patients could be seen and at what time for scheduled mobile screening dates.
Of the 50 intervention patients randomly selected for follow-up phone calls to assess patient perception of the EBI, 42 completed the interview (84%). In these calls, we found that 34 patients remembered their reminder phone call from the navigator (81%). The patients who remembered their call and conversation reported positive interactions with her. They reported, for example, "She was warm, friendly, helpful, sweet, supportive and sincere." When asked if there was something about the phone call from the navigator that helped them to keep their appointment, 18 patients reported positive impact, such as "The encouragement from her [the navigator] went beyond a reminder call," "she cared," "put me first," "helped me overcome my misconceptions," and "was nice." When asked how helpful they found the phone call, all patients who remembered the conversation (n = 34) rated it 5 out of 5, except one patient who rated it 4 out of 5. Finally, the patients who attended their appointment were asked to share their thoughts on the reminder phone call program. Patients reported that "It's important. Catch it (breast cancer) early to have a chance," "They (women) need to go and have it done!," "Taking care of yourself is a major point to bring up," "It's a wonderful program," "It's a must," and "I think it's great that we are talking to women to let them know that mammograms are important," and recommended "Use media and marketing to reach women without insurance," "Transportation is a very big deal and would be a help. Maybe find a church with a van that could help out," "Spread the news about breast health-put it in churches and schools. I was telling people at my church about the mammography program and they had never heard about it." Seven of the 42 patients who completed follow-up calls did not attend their appointment.
When asked if there was anything we could have done to have helped them keep their appointment, three reported they were sick on their appointment day, three had last minute transportation issues, and one reported that she did not have the money for the copay. Due to the nature of the mobile program, in many cases our patient navigator was not able to reschedule patients directly if during the intervention call they indicated a desire to change their appointment. Patients had to be routed through the mobile program coordinators or the site coordinators in the community in order to reschedule. This meant a loss of continuity with the patient and in some cases patients reported having difficulty reaching the coordinators to reschedule.
Discussion
This study used a hybrid type 1 design to evaluate both the effectiveness of an adapted EBI in a practice setting and the implementation process. The effectiveness of the adapted EBI was 3.88 (adjusted OR) versus 2.10 (OR) for the original program [21]. This is consistent with the findings of Barrera Jr. et al., who found that systematically adapted EBIs improved program effectiveness compared with control conditions [18]. The adapted EBI in this study reduced appointment no-shows by 57 percent from baseline in the clinical practice and would be suitable for scale-up.
Few published studies provide a detailed description of EBI effectiveness with implementation outcomes in a single study, particularly for cancer-specific EBIs [19,20,23]. The "Communicating Health Options Through Information and Cancer Education" (CHOICE), "Improving Systems for CRC Screening at Harvard Vanguard Medical Associates" (HVMA), and "Improving CRC Screening and Follow-up in the Veterans Health Administration" (VHA) programs all considered implementation context during their evaluations [20]. These studies encountered some of the same implementation barriers and facilitators we found in this study [20]. We found that monitoring of implementation was valuable and that the study approach needed flexibility to deal with evolving implementation issues, such as the lack of consistent standard-care reminder calls and the navigator struggling with the simultaneous implementation of the control and intervention processes. This is consistent with recommendations from the Pool Cool trial, which noted that continuous monitoring of implementation is critical; other authors have also noted the importance of continuous monitoring of implementation [20]. Additionally, the ability to identify and measure all implementation issues at the beginning of a study is limited and has been noted as a barrier in previous studies [20]. For example, in this study, we did not know prior to implementation monitoring whether reminder calls were implemented consistently. Implementation issues like these are likely to arise only once monitoring begins and may also appear over time, requiring subsequent intervention or changes in protocol to address them.
Our findings further highlight themes from previous studies which have noted that the predominant research paradigm of sequentially measuring EBI efficacy and effectiveness, followed by implementation studies, may be missing important contextual information [7,23,29]. We were able to find and correct problems with implementation based on the results of our process evaluation which we monitored continuously throughout the study. The process evaluation also allowed us to find problems with fidelity early in the study and correct them. The major correction was to change the design so that the navigator did not have to conduct two different interventions in the same period of time. We also added plans for an intent-to-treat analysis to increase our confidence in the validity of our results. After an unsuccessful attempt to train the original navigator to adhere to protocol, we replaced the navigator and conducted repeated trainings and monitoring of calls more frequently with the new navigator. This monitoring process may have contributed to the increased effectiveness of the EBI that we observed in this study in addition to the systematic adaptations made to the EBI.
Strengths and Weaknesses of the Study.
This study has a number of strengths and limitations that should be considered. It is one of very few studies to evaluate both EBI effectiveness and implementation in a community context, and it provides critical insights for the future translation of EBIs, particularly mammography interventions. A major strength of this study was the use of a systematic process for adapting an EBI [11]. The process included working in a research–community partnership with an advisory board of researchers and practitioners who worked together to perform a community needs assessment, select an EBI, adapt the EBI based on the needs assessment, pretest it, implement it, and evaluate it [13].
A number of limitations must also be considered for this practice-based evaluation study. First, this study was conducted in a mobile mammography practice, and the findings may not be generalizable to other implementation contexts. Best practice indicates that whenever an intervention is considered for a new population or context, a needs assessment and evaluation are needed (see Introduction and Highfield et al., 2014 [13]). Our findings show that by retaining the core elements of an intervention, such as stage-based telephone counseling on barriers to keeping mammography appointments, effectiveness can be maintained or even improved in a new population. We have no reason to believe this would not also hold when extending our intervention to a broader population context; for instance, many of the barriers to mammography screening we found among African American women overlap with the barriers to screening faced by underserved women generally [30]. The most significant weakness of this study was our initial inability to train the navigator to keep the control and adapted intervention groups separate under the first study design. However, the process evaluation enabled us to correct the navigator's behavior and to redesign the study from an RCT to a quasi-experimental design that we expected would be more feasible in practice. In the quasi-experimental design, we enrolled 47 patients in the intervention. The main threat to validity from this type of enrollment was selection bias: patients reached during the sequential enrollment period may not have been representative of the larger patient population in the clinic. We addressed this threat by comparing patient demographics between the randomized and sequential designs, by including a design-change variable in the regression analysis, and by conducting an intent-to-treat analysis, which is useful for dealing with deviations from protocol.
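The distinction between the intent-to-treat analysis mentioned above and a per-protocol analysis can be sketched with a toy calculation. The records below are hypothetical illustrations, not study data: intent-to-treat analyzes each woman by the group she was assigned to, regardless of protocol deviations, while per-protocol keeps only adherent cases.

```python
# Hypothetical records: (assigned_group, followed_assigned_protocol, attended).
records = [
    ("intervention", True,  True),
    ("intervention", True,  True),
    ("intervention", False, False),  # assigned but never reached by the navigator
    ("control",      True,  True),
    ("control",      False, False),  # deviation: received counseling calls anyway
    ("control",      True,  False),
]

def attendance_rate(rows):
    return sum(1 for r in rows if r[2]) / len(rows)

# Intent-to-treat: group by assignment, ignore deviations.
itt = {g: attendance_rate([r for r in records if r[0] == g])
       for g in ("intervention", "control")}

# Per-protocol: keep only cases that followed their assigned protocol.
per_protocol = {g: attendance_rate([r for r in records if r[0] == g and r[1]])
                for g in ("intervention", "control")}
```

Intent-to-treat preserves the comparability created by the original assignment even when, as here, a navigator deviates from protocol, which is why it is useful for dealing with protocol deviations.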
Also, even though this study lacked a true non-intervened group, the baseline no-show rate of 38% serves as a proxy control, since reminder calls were made only rarely. Additionally, using a control group that received standard reminder calls, as opposed to a true no-contact control group, could have biased the results downward; in other words, it may have made an intervention effect harder to detect in our study. Our control and intervention groups, in both the basic and intent-to-treat analyses, differed in where women received their screening, with women in the intervention group more likely to be screened in a community rather than a clinical setting. However, all women in the study were required to have a doctor's order to obtain screening, and the difference was consistent across both analyses, so we believe its effect on the results was minimal. Further, no significant difference between screening sites was observed in the regression model when controlling for other factors. Finally, we evaluated the EBI using patient data available from the clinical provider, so potentially important factors such as educational level and occupation could not be evaluated due to a lack of data availability. While these factors may be important, they are non-modifiable and have been shown to have limited value when designing and evaluating EBI programs [30]. Lastly, the women enrolled in our study were low-income and uninsured, two factors that generally correlate with education and occupation, so although we did not measure those directly, we believe their effect on the outcomes would have been minimal.
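The use of the 38% baseline no-show rate as a proxy comparison can be illustrated with a simple two-proportion z-test. Only the 38% baseline and the 47 enrolled intervention patients come from the text; the intervention no-show rate (15%) and the baseline sample size (200 historical visits) below are hypothetical placeholders.

```python
from math import sqrt, erf

def two_proportion_z(p1, n1, p2, n2):
    """Pooled two-proportion z-test; returns (z, two-sided p-value)."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF (via math.erf).
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical: 15% no-shows among the 47 intervention patients,
# versus the 38% baseline rate assumed over 200 historical visits.
z, p_value = two_proportion_z(0.15, 47, 0.38, 200)
```

A negative z with a small p-value would indicate a no-show rate below the baseline; with only 47 intervention patients, however, such a comparison is sensitive to the assumed baseline sample size, which is one reason a regression with a design-change variable is a sounder primary analysis.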
Lessons Learned.
Interventions are rarely implemented with complete fidelity, and in this study the navigator struggled to implement either protocol with fidelity, especially for the usual-care group. The navigator stated that she wanted to help all women attend their appointments and seemed unable to adhere to protocol. Even after we hired a second navigator, protocol adherence remained somewhat difficult. We believe the important lesson here concerns staffing in a research study versus staffing in a clinical or other professional setting. In the original evaluation of an EBI in a research setting, research assistants (usually students) do not have a strong professional identity or habitual way of doing tasks, so their work closely approximates the research protocol. In a practice implementation, by contrast, new protocol-driven tasks are given to professional care providers who may be unable to depart from their normal practice. It is important for future researchers to consider these issues of fidelity when adapting, training for, monitoring, and measuring EBIs in community contexts.
Additionally, best practice indicates the need for EBI testing in new contexts (e.g., effectiveness testing in the new setting) [11,31]; however, measuring and addressing context-specific implementation issues remains challenging [32–34]. There is currently a lack of standardized, validated measures that can be used to assess implementation [35]. Studies have also noted the need for multilevel interventions that consider implementation context, yet no packaged approach to implementation is currently available in the published literature [36]. Future studies should consider creating a packaged implementation intervention that could be tested and evaluated in the context of community EBI implementation.
Conclusion
This study provides an example of the real-world implementation of an adapted EBI. It demonstrates best practice for adapting and evaluating an EBI using a hybrid type 1 design and can serve as a model for blending research and practice to increase the uptake of EBIs and to ensure they remain effective in new settings.
Disclaimer
The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Cancer Institute or the National Institutes of Health.
CORONAVIRUS: Third Wave Looms
Vaccination rates remain sluggish with less than two percent of Africans fully vaccinated.
Coronavirus cases have been rising in Africa since the start of the third wave on the continent in May. Sixteen African countries are now seeing a resurgence of the virus, with the more contagious Delta strain detected in 10 of them. Vaccination rates remain sluggish, with only 16m people, 2% of the African population, fully vaccinated. But, said Dr Matshidiso Moeti, the World Health Organisation (WHO)'s regional director for Africa, there was some room for optimism because vaccine deliveries were picking up after grinding to a near halt in May and early June.
In the two weeks up to July 8th, more than 1.6m doses were delivered to Africa through the Covax scheme, which was set up to ensure equitable distribution of vaccines to poorer countries. A US shipment of 20m Johnson & Johnson and Pfizer-BioNTech doses is due to arrive soon, to be distributed to 49 countries. Donations from Norway and Sweden are due to follow. Africa has so far received 66m doses and has administered 50m of them. Moeti urged governments to expand vaccination sites and take other measures to take advantage of the vaccine deliveries when they come. (© AFP 8/7 2021) Strive Masiyiwa, the Zimbabwean telecoms tycoon and one of the lead figures in the African Union (AU)'s effort to speed up deliveries, told Bloomberg News: "Now is the time for Europe to open up its production facilities so we can buy vaccines. . . not a single dose, not one vial has left a European factory for Africa." He added, ". . . when we've gone to talk to their manufacturers they tell us they are completely maxed out meeting the needs of Europe".
However, in what looks like a case of vaccine diplomacy, Rwanda has signed a $3.6m partnership with the European Union (EU) to upgrade its laboratory capacity to attract investors to manufacture Covid-19 vaccines. Rwanda is the first country to get EU funding for vaccine production. The main hurdle now is getting private investors on board, although President Paul Kagame told the Qatar Economic Forum in late June that negotiations with private sector firms to manufacture vaccines have advanced and that the production process would start "in a few months". (Africa Confidential 2/7) The US International Development Finance Corporation (DFC) announced on June 30th a joint financing package of €600m for Aspen Pharmacare Holdings Ltd., headquartered in South Africa, to expand local vaccine manufacturing capacity. A statement by the US State Department said DFC is working together with DEG of Germany, Proparco of France and the International Finance Corporation (IFC), an affiliate of the World Bank, to provide financing support. The vaccines will be primarily distributed to the AU, South African government and Covax, the statement said. (PANA 1/7)
African finance ministers and the World Bank met on June 21st to fast-track vaccine acquisition on the continent. In a boost to the AU's target to vaccinate 60% of the continent's population by 2022, the World Bank and the AU announced that they are partnering to support the Africa Vaccine Acquisition Task Team (AVATT) initiative with resources to allow countries to purchase and deploy vaccines for up to 400m people across Africa. AVATT, which is an initiative of the AU Commission, Africa CDC, Afreximbank, the AU special envoys for Covid-19, and UNECA, has already successfully negotiated 220m doses of Johnson & Johnson's Janssen Covid-19 vaccine for use by African countries, with an option for 180m more based on demand. (PANA 22/6)

Developing African Vaccines

Prof. Godwin Bazuaye, Chief Medical Director at Nigeria's privately-run Igebinedion University Teaching Hospital (IUHT), said the risk of new strains arising from increased human interactions remained a threat until Africa creates its own vaccine. "The environment in Africa which is closer to the Equator has played a major role in the low number of caseload fatalities reported in Africa. We also have the relatively good health enjoyed by the population as well as our genetic make-up which combined have played a key role in these low caseloads," said Prof. Bazuaye. Dr Philip Onyebujoh, a disease surveillance expert, said the development of a vaccine would help generate a crucial data bank on genes. "We have the greatest genetic diversity in Africa. We need to capture this genetic data in order to come up with at least three different vaccine candidates which could make it easier to develop our own African vaccines," said Dr Onyebujoh. (PANA 21/6) Africa currently imports 99% of its vaccine needs.
Egypt has recently launched the production process for manufacturing vaccines. Prime Minister Mostafa Madbouli has said that the country will increase the domestic production of the Chinese Sinovac vaccine to reach 80m doses before the end of 2021. At a press conference on July 5th, Madbouli stressed that such an increase will allow the government to inoculate 40m citizens before the end of 2021, adding that the state will also receive millions of other jabs from several international companies. In order to be able to achieve this significant increase in the production of vaccines, Madbouli said that the manufacturing capacity of the Egyptian Holding Company for Biological Products and Vaccines (Vacsera) would be increased from 300,000 to 600,000 jabs per day. The PM noted that Vacsera has locally produced about 1m doses of the Sinovac vaccine so far.
Moroccan media lauded an agreement signed between the government and Chinese firm Sinopharm which will see the kingdom producing 5m vaccines per month, gradually increasing over time. Privately owned website Hespress called the decision "historic", saying that it put Morocco "at the cross-roads of south-south cooperation", providing vaccines not only for its own citizens but for all of Africa.
Morocco's vaccination campaign has had one of the fastest rollouts in the region, thanks to a steady supply of Sinopharm and AstraZeneca jabs. The Health Ministry said on June 5th that it had fully vaccinated more than 9.8m people, over a quarter of the population.
ETHIOPIA Economic Woes
Covid, conflict and debt hinder reforms.
Shortly after taking office, Prime Minister Abiy Ahmed promised a spectacular overhaul of Ethiopia's tightly controlled economy: reforms to spur growth, unshackle the country's potential, and lift millions out of poverty. But three years on, Abiy's agenda remains largely unrealised, and the country burdened with debt, the economic pain of the coronavirus, and a costly war in Tigray. "Things are worse now. . . The country is broke and on the verge of defaulting," said one European diplomat, who asked not to be named.
One of Africa's fastest-growing economies, Ethiopia took massive loans to fund some of its flashiest infrastructure projects. But paying back its external debt (some US$30bn, mostly to China) has proven difficult. In 2021 alone, Ethiopia owes about $2bn to its creditors and has sought unsuccessfully to defer payment. "We are not now in a position to pay," said Alemayehu Geda, a professor of economics at Addis Ababa University. Alemayehu said the problem is not the amount of borrowing (Ethiopia's external debt-to-GDP ratio has fallen under Abiy) but a dire lack of dollars. The country of 110m people imports far more than it exports, fuelling a structural deficit of much-needed foreign exchange. This currency crisis also hurts businesses, which are often forced to wait months to secure the dollars they need to run their ventures. A nationwide shortage of cement, for example, is because manufacturers cannot import the spare parts needed to run their factories, not due to a lack of raw materials, said Ashenafi Endale, editor of the Ethiopian Business Review magazine. Inflation, described recently by Abiy as "the cancer of the economy", meanwhile remains high at over 13%, and food costs are soaring. Compounding the pain, Alemayehu estimated some 3-4m Ethiopians have been driven into poverty by the Covid-19 pandemic. The International Monetary Fund (IMF) said economic expansion slowed in Ethiopia from 9% in 2019 to 2% in 2021. However, agriculture (the backbone of Ethiopia's economy, contributing one-third of GDP) resisted the downturn, and the IMF forecasts that growth will rebound to 8% in 2022.
Abiy has acknowledged the unexpected cost of the pandemic to his reforms. But he also blames other factors: a record locust invasion, serious floods and, above all, widespread conflict. Nevertheless, Ethiopia's economy has begun opening under Abiy. Less than 10% of the various economic sectors were open to foreign investment when Abiy was appointed prime minister in 2018, said Olivier Poujade, founder of East Africa Gate, a consulting firm. "Now, the opposite is true," he said, praising a "very different mentality" under the new administration. In June, the government started the process to partially privatise Ethio Telecom, and earlier awarded a telecom licence to a consortium led by Kenya's Safaricom, marking the historic end of a state monopoly over the key sector.
Costly War
The human toll of the war has also been devastating, with the United Nations (UN) reporting that 5.2m people in Tigray need urgent food assistance and an emerging famine threatening hundreds of thousands of people. Senior UN officials appealed on July 2nd for immediate and unrestricted humanitarian access to Tigray, and for an end to deadly attacks on aid workers.

New Covid Rules: Egypt has decided to increase the capacity of hotels, cinemas, cafes and restaurants to 70%, up from 50%. The new rules, which were taken in light of the improving Covid-19 situation in Egypt, were to come into effect on July 6th, cabinet spokesman Nader Saad told the privately-owned Sada el-Balad TV on July 4th. (BBC Monitoring 5/7)

Building Collapse: Four women were killed by the collapse of a residential building in the Al-Attarine district of Alexandria, a security official said on June 26th. Civil protection workers also extracted four survivors. The five-storey building was the subject of a renovation order in 2018, according to the Alexandria governorate's Facebook page, and authorities had ordered that the top floor be dismantled. Egypt has suffered many deadly building collapses in recent years, due to poor or non-existent maintenance and lax enforcement of construction standards.
Online Exhibitions and Archives: A Collaborative Project for Teaching and Learning in Design

EVA 2009 London Conference ~ 6-8 July
Jane Devine Mejia
This paper reports on a collaborative project between the University of Brighton, the Royal Institute of British Architects, the Victoria and Albert Museum and the Royal College of Art. The project entailed creating a virtual exhibition drawn from the archives of the partner institutions, with the goal of encouraging students to use archives for practice-based inquiry and in historical/theoretical research. Designing an exhibition that incorporates student involvement through the use of Web 2.0 technologies has presented various challenges: technical, conceptual and pedagogical. The paper is thus a case study of how virtual access to archives can contribute to teaching and learning in the design disciplines.
INTRODUCTION
Since the 1990s, online exhibitions have been a way for museums and archives to promote public awareness of their collections. Many institutions also provide extensive image databases based on their holdings and offer online catalogues for researchers who need access to museum and archival materials. In March 2008 the Victoria and Albert Museum hosted a workshop entitled "Widening Access to the V&A+RIBA Architecture Partnership Collections." The aim of this event was to explore how these important research collections could better support higher education (HE) courses in architecture, using both physical and online resources to broaden access for HE users. During the discussions, attendees from twelve UK HE institutions articulated the lack of clarity that they experienced in trying to find out what was in the archives and how to plan coursework around the collections. The implicit expectation in many museums and archives is that HE researchers should know how to find their 'way in' and what questions to ask of the archivists and curators. In the digital age, this assumption no longer holds true, as many users are further and further removed from hands-on primary research and rely increasingly on digital surrogates which are often only a partial representation of an archive's range and content.
With the establishment of the Centre for Excellence in Teaching and Learning through Design (CETLD) in 2005, four major UK institutions joined together to collaborate on shared research objectives. CETLD is a five-year partnership between the University of Brighton, the Royal College of Art, the Royal Institute of British Architects (RIBA) and the Victoria and Albert Museum (V&A) that aims to enhance learning and teaching in design through research that brings together resources and expertise from higher education and collections-based partners [1]. The archivists across the CETLD partnership saw this collaboration as an opportunity to explore how to engage practice-based design students and tutors with archival resources.
While most of the archivists had been involved in curating and contributing to physical exhibitions and, in the case of the RIBA and the V&A, to the development of online versions of these exhibitions, none of the partners had developed a reusable online exhibition framework that would support teaching and learning in the design disciplines. The Online Exhibitions Project received CETLD funding for two years to support a part-time lead researcher, with archivists, curators, education specialists, tutors and students expected to volunteer their time as needed. Other members of the project team included CETLD ICT specialists Sina Krause and Roland Mathews and University of Brighton postgraduate student Heloisa Candello (through the 2008 CETLD Student Placement Programme), all of them contributing on a part-time basis.
The Online Exhibitions Project was divided into five main phases:
1. a survey of best practice in online exhibition design and a review of the literature on learning and teaching with virtual collections and online exhibitions
2. selection of the exhibition content using the CETLD partner collections, including consideration of learning and teaching potential and copyright issues related to digital images
3. development of the technological infrastructure for the exhibition
4. testing the Online Exhibition with design students and tutors in both practice-based areas and design history to assess its pedagogical value
5. using lessons learned from this experience to make recommendations on a sustainable online exhibition framework for the CETLD partner archives.
INITIAL INVESTIGATION
In conducting the literature review and survey of best practice, we looked at websites that had received the annual Museums and the Web Online Exhibition Award, an international juried prize, from 1997-2008 [2]. We also assessed virtual exhibitions at major museums, libraries and archives such as the Smithsonian, the National Archives (UK) and the Cooper Hewitt National Design Museum. Our research indicated that scant attention is paid to HE as a principal audience for virtual exhibitions. Most of the art, archives and museum sites reviewed by the author and her student research assistant were conceived for the general public, and in some cases offered material for primary and secondary school audiences. While university archives, libraries and museums frequently produce virtual exhibitions relating to their collections, few are explicit about how these might be used in post-secondary teaching and learning and very few allow any form of HE student participation [3]. None of the literature on virtual exhibitions that we reviewed dealt with how HE design students and educators might use them for practice-based learning.
In addition to the kind of professionally curated online exhibitions created for museums and archives, we studied the growing trend towards user-defined exhibitions based on institutional collections, like the Fine Arts Museums of San Francisco My Gallery and the Metropolitan Museum's My MetMuseum [4]. Empowering visitors to select digital images of museum objects and arrange them in online exhibitions without the mediation of museum staff changes the nature of interactions between museums, their collections and their virtual visitors by democratizing curatorship. The concept of user control was central to our investigation of how to develop a participative virtual exhibition for design students and tutors.
SELECTING THE EXHIBITION CONTENT
In choosing content for the Online Exhibition, it was important to find a guiding theme that would represent the diversity of the partner archives and encourage collaboration across the CETLD partnership. After visiting the four partner archives, it was clear that their common strength lay in British design of the twentieth century. Not all partners had digitised their collections to a comparable extent, which limited what existing digital material could be mined for the Online Exhibition. We considered various possible themes for the prototype exhibition, including student graduating exhibitions, institutional anniversaries and physical exhibitions at the partner institutions, but none of these allowed us to represent the richness of more than one collection. Eventually the 'design process' emerged as the most flexible, universal and feasible theme. We understand 'design process' to mean an approach to design creation through the four phases of Discover, Define, Develop and Deliver, outlined by the Design Council [5]. Archival documents related to the design process offered the potential to engage tutors from several design fields in the project, enabling them to use the material in various ways.
The two collections with the best online access to images of archives were the RIBA and the University of Brighton Design Archives, but only the RIBA had a collection of digital images for which they held the copyright and that focussed on the design of one project: the Ernö Goldfinger archive for 2 Willow Road, one of the first Modernist homes to be built in Britain, now a National Trust property. Further advantages to this choice were the presence of RIBA education specialists to help develop the project with HE students and the value of the Goldfinger material as an example of a varied and extensive archive of a lifetime's design work encompassing architecture, furniture, graphic design and writing. By focusing on an architectural design from preliminary sketch to completion, the Willow Road Online Exhibition offers students and tutors various ways of approaching the material, whether from a design history perspective (i.e. situating Goldfinger's work within the context of design during his era) or from a design practitioner's point of view (i.e. examining how the house design evolved, the materials used and drawings and photographs created to document the design).
Factors in favour of this choice included:
- The Perret
- The theme of an architect designing for himself and representing his design philosophy through a small-scale residential project
- The significance of Willow Road in architecture, three-dimensional and interior design, with its custom-designed furniture, innovative storage and room arrangements and collection of objects and artworks.

It also made sense to base our pilot exhibition around this theme because it is a 'way in' to Goldfinger's entire archive, and offers students the opportunity to appreciate the range and depth of information that can be found in such research collections. A sequence of thirty-six images was selected from the RIBA's RIBApix database of architectural drawings and photographs and from the University of Brighton Design Archives to form the core illustrative material for the exhibition.
TECHNICAL CHALLENGES
Project funding did not include a budget for software acquisition or development, on the assumption that we would use the CETLD Elgg Web 2.0 site as the host for the prototype exhibition. Elgg was adopted at the University of Brighton three years ago as the platform for its social network, Community@Brighton, one of the first in the UK to provide a shared space in which staff, students and tutors could interact. Elgg is a versatile system that provides blogging, file sharing, image presentation and social networking capabilities [6].
As the initial work began on the Willow Road Online Exhibition, we found that Elgg Web 2.0's Photo Gallery feature did not offer all the functionality needed to create a visual narrative of Ernö Goldfinger's design process. While it could display images in a slideshow format, the associated metadata could not be shown simultaneously and the images could not easily be sequenced in a specified order. Instead we explored a number of open source web album packages in the hope of finding one that met most of our requirements. In the end, we chose Jalbum and its Fotoplayer template as the best option given time and technical constraints. Fotoplayer allowed the creation of a visually attractive slideshow with large browsable thumbnail images, space for image metadata display, enlarging, zooming and panning functions and a 'guestbook' comment box for each image [7]. Although it was not possible to run Fotoplayer from within our Elgg Web 2.0 site, we designed it to launch from an intermediary html page on the site. This hybrid approach simplified the user experience and allowed us to exploit Elgg features like blogging and file uploading while using Fotoplayer's full functionality.
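The intermediary-page approach described above can be sketched as follows. This is an illustrative reconstruction, not the project's actual page: the file paths, URLs and page text are invented, and the snippet simply generates a static HTML page of the kind that could be hosted inside an Elgg site to embed a Jalbum/Fotoplayer album.

```python
# A minimal launch page: the surrounding HTML keeps the user within the host
# site while the iframe embeds the statically exported Fotoplayer album.
launch_page = """<!DOCTYPE html>
<html>
<head><title>Willow Road Online Exhibition</title></head>
<body>
  <h1>Willow Road Online Exhibition</h1>
  <!-- Fotoplayer album exported from Jalbum, served as static files -->
  <iframe src="/albums/willow-road/index.html" width="960" height="640"></iframe>
  <p><a href="/community/online-exhibitions">Back to the Online Exhibitions community</a></p>
</body>
</html>
"""

# Write the page so it can be uploaded alongside the album files.
with open("launch.html", "w") as fh:
    fh.write(launch_page)
```

Serving the album as static files and linking back into the social-network site is what makes the hybrid possible: the album keeps its full slideshow functionality while blogging and file uploading remain in Elgg.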
Images courtesy of RIBA British Architectural Library and the University of Brighton Design Archives.
ENGAGING WITH TUTORS AND STUDENTS
Finding tutors who were willing to volunteer their time and involve their students in the online exhibition research was one of the major challenges of the project. Having already defined the exhibition theme, we were in a sense working backwards, but we wanted to have something demonstrable to show tutors in order to elicit their interest in the project. We were fortunate to find a collaborator in a design tutor who was eager to involve his undergraduate students in an archives-related 'adventure' as part of the visual research class in 3D Design at the University of Brighton. While the brief for the class was that students take an object of personal significance (e.g. a watch, a purse, a doll) and explore it using a variety of visual media, the adventure gave them a chance to step outside the studio and explore something they would not otherwise encounter (in this case a Modernist house in London and its archive). These students had no background in design history and so were approaching the house purely as a design object.
In working with design students, there are a number of characteristics that distinguish them from other HE groups. In the craft-based disciplines (woodwork, metalwork, ceramics and plastics) they are predominantly female, dyslexia is a significant factor (up to 35% of students), and they are makers, not writers, and so have a strong interest in materials, processes and images, rather than text. Our undergraduate group at the University of Brighton was consistent with this profile. An initial questionnaire revealed that none of the students in the 3D design visual research class had used archives before and none had any experience of online exhibitions.
EVA 2009 London Conference ~ 6-8 July
Jane Devine Mejia

Briefly, the project was structured so that the twelve students who volunteered would visit the house at 2 Willow Road for a tour with the National Trust curator and have free time to photograph, sketch and ask questions. They then received an introduction to the related RIBA Goldfinger archival material at the V&A Museum, this time with an education officer, who facilitated discussion and exploration of the Willow Road archive. This hands-on archive exploration and tour acted as a preparation for the virtual experience, which began a week later when we invited the students to view the Online Exhibition for the first time. They were then invited to join the Elgg Web 2.0 Online Exhibitions Community, where they could upload their own photographs and commentary to a password-protected blog, view the Online Exhibition, write comments in the image guestbooks and share other information with the group.
We held two further meetings to see how the Willow Road experience, both real and virtual, was influencing the visual research project around their personal objects and to view the work they were producing for critiques at the end of term. In accordance with University research ethics guidelines, the students were free to withdraw from the project at any time and were not required to present their work or participate in the final evaluation meeting unless they wished to. This meant that only four attended all the meetings and presented their work at the end of term, while others filled out the evaluation questionnaires and uploaded images to the Online Exhibition blog, and some chose to withdraw from the project after the second meeting. The challenge for us (and them) was how to fit a voluntary research project into the academic calendar without distracting them from course-related work.
To evaluate pedagogical impact, we asked students to complete a set of three questionnaires, self-assessing their learning by evaluating their experience of the Online Exhibition and what they had learned from their encounter in the archives. An interview with their tutors also identified learning outcomes of which the students were not necessarily aware, but that were manifested in their approach to drawing, their awareness of materials and their attention to the use of space in designing their projects. Although we worked with a very small group, the value lay in assessing their learning over a period of eight weeks and in seeing the work they produced during this time. While it would be rash to generalise from this sample to all practice-based design students, our observations and their questionnaire responses elucidated points that are important to consider in future work, such as the importance of images rather than text and the need for greater user control.
Learning from students
From the second evaluative questionnaire, we determined that nine of the twelve students used Facebook, but only one used the University's Elgg Community@Brighton site. Although familiar with social networks, they did not necessarily see the point of using an academic social network that links students, tutors and staff in the same virtual environment. The assumption that students would use a social networking environment simply because they knew how was proven incorrect. Several students posted images to the Online Exhibition blog, but they did not comment on each other's postings nor contribute much text related to their own photographs. They valued the Willow Road Online Exhibition as a reminder of the archival drawings and photographs they had seen at the V&A, as well as a complement to their experience of the house in its current form (the archives date from 1937-49).
At the same time, there was a great appreciation for the real over the virtual as a result of their contact with the National Trust Curator and RIBA/V&A Education Officer, the physical experience of being in the house and the opportunity to examine archival drawings and learn from them. Several students wanted more time to explore the archives and to sketch in the house. Certainly the Online Exhibition did not replace the actual visit, but offered students a chance to re-examine and reflect on the material they had viewed in person. Most of them felt motivated to use archives again for design-related research after this introductory experience.
We had originally hoped to incorporate student work into the Online Exhibition, but given that the class brief did not require them to directly relate Willow Road to the visual research they produced, the link between their work and the Online Exhibition's content was not always apparent. Nevertheless, their tutors saw a benefit in exposing the students to real life design research through their participation in the Online Exhibitions Project, and several in the group felt that the experience had influenced their drawing, the approach to their projects and their awareness of materials. Three students produced work directly influenced by Willow Road and reported that they wanted to continue their exploration of ideas discovered there even after the project had ended.
After completing the work with the BA 3D Design students from the University of Brighton, we undertook a second project, this time with a Royal College of Art (RCA) MA History of Design tutor and students. With this group, we also visited 2 Willow Road and viewed the RIBA archive at the V&A. The students then had the option of choosing themes that interested them and developing PowerPoint mock-ups of their own virtual exhibitions on a very small scale. The goal was to give them the opportunity to think about archives and curatorship in an online environment. In contrast to the undergraduate cohort, this group was trained in contextual research and approached the house from a historical and theoretical standpoint. The RCA students were highly critical of the images they viewed online and were aware of the strengths and weaknesses of the archive, whereas the undergraduate 3D design students accepted the images for their informational and design-related value. The RCA students contributed valuable ideas for ways of further developing the Online Exhibition; however, it was evident that they did not need an introductory archives experience through the Online Exhibition as they were already proficient researchers. This helped us to confirm that the primary audience for the Online Exhibitions Project should be undergraduate practice-based students who were unfamiliar with archives, rather than postgraduate design history students with established archival research knowledge.
CONCLUSION
Online exhibitions offer the potential of demystifying the archive and making it relevant to students as a source of design research. Our project enabled design students to explore and learn from archival material on site and in a virtual environment. The Online Exhibition dissolved some of the physical boundaries that deter students from using archives: opening hours, location, fragility of materials and handling restrictions, the need to know what to request and to accept archivist mediation of research (i.e. the lack of browsability), by enabling them to engage in self-directed exploration of the Willow Road archive. The Online Exhibition and its associated blog also invited student participation and the sharing of creative work, ideas and informal images taken on the tours.
Despite these advantages, the technology employed did present some frustrations for the project leader, the tutors and the students. We could not, for instance, enable users to organise the images and add their own photographs, drawings and narratives, because the Fotoplayer software that we used did not permit re-sequencing of the images or editing and commenting on existing descriptive text. RIBA copyright restrictions meant that we could not link the Willow Road Online Exhibition to our public website. The opportunity for students as curators was also not fully developed, in part due to time restrictions and because of the technological limitations on user participation described above. We wanted the students to have the security and freedom of working in a password-protected Web 2.0 environment rather than a public website; however, wider access after their project was finished would have been desirable, both to encourage other HE groups to interact with the Online Exhibition and to promote general awareness of archives to the HE design community.
From a broader perspective, the Online Exhibition created the expectation among users that online access to the partner archives was more transparent than is in fact the case. Navigating between online catalogues, printed finding aids, virtual image collections and the actual archive is complex for those outside (and even within) the institutions. Virtual collections only represent a fraction of the institutions' physical holdings. For instance, the RIBApix database contains some 30,000 images, whereas the RIBA holds over 2 million drawings, photographs and other archival documents. Similarly, the V&A Archives of Art and Design are only beginning to add their holdings to the museum's 'Search the Collections' public interface. A student researcher must be determined and diligent to get beyond these hurdles and approach the actual archival collections.
Developments like the Flickr Commons, the public domain image space pioneered by the Library of Congress on the popular photo sharing website, are attempting to bring archives to existing user groups, rather than expecting users to come to them. Similarly, the V&A is a partner in the new Creative Spaces National Museums Online Learning Project, which links users with nine UK museum collections in a Flickr-like environment. The Royal College of Art's online ShowGallery of student work will go public later this year, albeit not in a Web 2.0 environment. Similarly, the University of Brighton is using the Archives Hub and the Visual Arts Data Service (VADS) to bring images and descriptions of its collections to researchers worldwide. In the final stage of our project, we will concentrate on using lessons learnt to make recommendations on a sustainable and reusable online exhibition framework for the CETLD partner archives. We anticipate that participative online exhibitions, such as the one we are developing in the CETLD Elgg Web 2.0 environment, will be another medium for bringing archives and users closer together and for opening archives to new audiences in higher education, particularly those in practice-based disciplines.
The 2 Willow Road archive is interesting for a number of reasons: Ernö Goldfinger's role as an advocate for Modernism in Britain, and his connection with European architects such as Le Corbusier and Auguste
Defining artificial intelligence for librarians
The aim of the paper is to define Artificial Intelligence (AI) for librarians by examining general definitions of AI, analysing the umbrella of technologies that make up AI, defining types of use case by area of library operation, and then reflecting on the implications for the profession, including from an equality, diversity and inclusion perspective. The paper is a conceptual piece based on an exploratory literature review, targeting librarians interested in AI from a strategic rather than a technical perspective. Five distinct types of use cases of AI are identified for libraries, each with its own underlying drivers and barriers, and skills demands. They are applications in library back-end processes, in library services, through the creation of communities of data scientists, in data and AI literacy and in user management. Each of the different applications has its own drivers and barriers. It is hard to anticipate the impact on professional work, but as the information environment becomes more complex it is likely that librarians will continue to have a very important role, especially given AI's dependence on data. However, there could be some negative impacts on equality, diversity and inclusion if AI skills are not spread widely.
Introduction
Many technologies pass through a 'hype cycle' of hope and disillusion before they become accepted into common use, but the current wave of excitement (and anxiety) around Artificial Intelligence (AI) is remarkably strong. In the UK, for example, there is a national strategy for AI, but many other institutions, such as the National Health Service, have their own. UKRI, the main research funding body, has an AI strategy, as does JISC, the national body supporting digital solutions for UK higher and further education and research. The same is true for many other countries: at the time of writing, the OECD AI policy observatory lists over 700 policy initiatives from 60 countries (54 from the UK alone) (https://oecd.ai/en/). Equally, globally, there are dozens of statements on the ethics of AI from international bodies, governments, tech companies and civil society groups (Jobin et al., 2019).
The power of these narratives reflects a number of things. AI is not one technology but a bundle of technologies with general applications across many sectors of activity. Significantly, the current wave of AI is also part of a long running story that has entered the popular imagination. Unlike many other technologies, there are rich cultural meanings attached to the idea of AI, such as those projected through science and speculative fiction in books and movies. For example, according to a listing on Wikipedia, there have been at least 150 movies featuring robots and AI since the first, Metropolis, in 1927 (https://en.wikipedia.org/wiki/List_of_artificial_intelligence_films). Different cultures have different versions of such stories (http://lcfi.ac.uk/projects/ai-narratives-and-justice/global-ai-narratives/). These stories are probably more often dystopian than utopian. Reflecting specifically on the emergence of AI based virtual assistants (VA), Moran comments that 'The advent of AI VA is drenched with fantasy' (Moran, 2021: 31). But as she goes on to argue, there are deep biases in these fantasies that reinforce social inequalities. We could sidestep some of this debate by restricting the discussion to a less storied term such as 'machine learning'. Yet arguably we need, even within library work, to acknowledge the hope and fear attached to the idea of AI. The questions AI raises about the nature of humanity in the context of automation relate to library work too.
At the same time, the use of AI in library work is very much in its infancy (Cox et al., 2019; Hervieux and Wheatley, 2021). There is immense potential for it to increase access to knowledge in fundamental ways, for example through improved search and recommendation, through description of digital materials at scale, through transcription, and through automated translation. Equally, the use of AI in libraries poses a number of ethical issues (Cox, 2022) and there is a recurrent fear that AI may in some way replace human librarians' work. There could be impacts on equality, diversity and inclusion (EDI) in the profession because AI is usually represented as white and male (Cave and Dihal, 2020) and as a trend it emphasises the IT aspect of library work, where men are over-represented. In this context, the purpose of this paper is to build up a definition of AI from a librarian's perspective. It draws selectively on the literature to offer an early description of what AI might mean in the library context. It is pitched at an audience of readers who think AI could be a strategic priority, rather than at a technical level. The approach is descriptive and in parts speculative, but this is justified because it is an emergent area of practice. Our approach is fourfold:

○ To analyse formal definitions of AI
○ To define some key technologies and explain how they might relate to library work
○ To identify key AI use cases in libraries
○ To reflect on the potential implications for professional work and particularly equality, diversity and inclusion within it

The paper is based on an exploratory literature review.
Although systematic searches of Scopus, LISA and Google Scholar were undertaken, the emergent nature of the literature prevented following a systematic SLR methodology. Firstly, there are ambiguities in the terminology about what counts as AI, and many of the issues raised by previous trends are relevant, such as those around data, text and data mining and even learning analytics. Secondly, as an area of professional practice, many of the most valuable resources are non-peer-reviewed items that are not visible within bibliographic databases. Thirdly, the wider technical literature is vast and hard to relate to the library context, so can only be referenced selectively.
Formal definitions of AI
An obvious starting point to help clarify what AI is for librarians would be to examine some formal definitions.
Many reports and strategy documents exploring AI have appeared since at least 2018, most containing their own explicit definition of AI (while also acknowledging the disagreement about its definition). Box 1 includes just a few of these.
Box 1. Definitions of artificial intelligence.

a. 'Technologies with the ability to perform tasks that would otherwise require human intelligence, such as visual perception, speech recognition, and language translation' (quoted by House of Lords Select Committee on Artificial Intelligence, 2018: 14)
b. 'Machines that perform tasks normally requiring human intelligence, especially when the machines learn from data how to do those tasks'. (UK Government, 2021: 16)
c. 'AI is the ability of a computer system to solve problems and perform tasks that would otherwise require human intelligence'. (US National Security Commission on AI, 2021)
d. Artificial intelligence (AI) is 'a machine-based system that can, for a given set of human defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy'. (OECD, 2020)
e. 'Simply put, AI is a collection of technologies that combine data, algorithms and computing power'. (European Commission, 2020: 2)
f. 'A suite of technologies and tools that aim to reproduce or surpass abilities in computational systems that would require "intelligence" if humans were to perform them. This could include the ability to learn and adapt; to sense, understand and interact; to reason and plan; to act autonomously; or even create. It enables us to use and make sense of data'. (UKRI, 2021: 4)
g. 'Theories and techniques developed to allow computer systems to perform tasks normally requiring human or biological intelligence' (JISC, 2022: 3)
h. 'Machines that imitate some features of human intelligence, such as perception, learning, reasoning, problem-solving, language interaction and creative work' (UNESCO, 2022: 9)
i. 'A system's ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation' (JISC AI explore)

Examining these definitions, some common patterns emerge. One is that many are rooted in the idea that AI has the 'ability to perform tasks' that humans normally do (A, B, C, F, G). The exact list of sensory or cognitive processes varies (emotion is not mentioned) but these tend to imply quite high order activities. Definition A seems to link to specific classes of technologies. Definition B stresses the idea of computers learning, and I gives more detail on how this might work. H, in contrast, uses the word 'imitate' to stress the difference between actual human intelligence and AI and imply the latter's inferiority. D stresses that humans control the whole process. Definition E is a bit different, placing stress on an infrastructure of data, algorithms and computing power, usefully emphasising the importance of data to AI. But certainly an implicit concern in nearly all the definitions is to relate AI to human capabilities. The list in F (from a UK research funding body) is perhaps the most expansive, acknowledging ways it might surpass human capabilities or perform tasks, such as creativity, often seen as beyond computers. However, there is relatively less consideration across the definitions of the way AI might do things humans do not or could not do, or doing things in usefully non-human ways (bias is very human!). What is apparent is that the definitions are quite abstract and open ended. They tend not to specify technologies, reinforcing that AI is an idea, and one that is evolving and also suffused with cultural meaning and significance that even in its most professional applications cannot be ignored. Evidently, AI is something to do with technologies that either perform or at least imitate human sensory or cognitive processes. Just as importantly, it is an evolving idea with rich
cultural meanings attached to it. Yet such definitions only take us a small way towards understanding their relevance to libraries. If abstract definitions of AI only offer a broad picture of what it is, perhaps we can turn to an analysis of underlying technologies to understand how library work might be affected by them.
AI technologies
AI is a broad term that encapsulates a range of approaches, machine learning being one of the most important of them (Hu et al., 2019). Machine learning is the use of statistical techniques to derive models from data without the need to program the parameters of the model (Valiant, 1984). In contrast to machine learning, traditional computer programmes are developed by programmers who define the rules and parameters for the models, drawing upon their experience, understanding and analysis of the data. This involves the process of discovering relationships between predictor and response variables using statistical methods (Hastie et al., 2009; Witten and Frank, 2002). A generic machine learning approach requires completion of this set of tasks: collection and preparation of data, selection of appropriate features (e.g. which variables are relevant), choice of machine learning algorithm, selection of models and parameters, training and performance evaluation (Alzubi et al., 2018). In machine learning, this process is automated to enable the machine to discover patterns and set the rules and parameters of models by studying the data. Machine learning thus involves developing models that have been shaped by the data fed in, and therefore the role of data is critical.
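The generic workflow just described (collect and prepare data, select features, choose and train a model, then use it for prediction) can be illustrated with a minimal sketch in Python. This is a toy, not library production code: the single feature (temperature) and the daily visitor counts are invented for illustration, and a real system would use far richer data and a proper ML library.

```python
# Minimal sketch of the generic machine-learning workflow: prepare data,
# choose a model, fit its parameters to historical examples, then predict.

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b (one feature)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# 1. Collect and prepare data: invented (temperature C, visitors) pairs
history = [(5, 120), (10, 150), (15, 180), (20, 210)]
temps = [t for t, _ in history]
visits = [v for _, v in history]

# 2-4. Select the feature, choose the model, train (fit parameters from data)
a, b = fit_line(temps, visits)

# 5. Evaluate/use: predict visitors for an unseen temperature of 12 C
predicted = a * 12 + b  # → 162.0 for this toy dataset
```

The key point the sketch makes is that the programmer never writes the rule "visitors rise by six per degree"; the parameters `a` and `b` are discovered from the historical data, which is what distinguishes machine learning from traditional rule-based programming.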
Critical to machine learning is allowing the computer to discover patterns in the data: 'training' it to develop a model. The process of training involves the machine using historical data as inputs and learning patterns from them to fine-tune model parameters in an iterative manner, to ensure a reasonable (pre-defined) level of accuracy in estimating an output for an unknown input. For example, a machine learning model might use historical data on visitors to a library together with weather and events information as an input dataset in order to develop a model to predict how many visitors could be expected on a given day. This is an example of a supervised machine learning task, where a training dataset is used to train the model, which is then used to predict outputs for an unknown dataset. Where the machine learning model does not require previous examples to learn from, unsupervised machine learning methods are used. An example of this could be a clustering task where a large number of visitors regularly attend a library, seeking different services, and there is a need to automatically segment the visitors into different categories. Semi-supervised learning is another type of machine learning that is used when sufficient training examples do not exist or are difficult to create. In cases such as these, a few manually labelled examples (from a larger collection) can be used to train a model, which is then used to predict outputs for a large number of unknown examples. The original labelled dataset can then be combined with the outputs that have the highest confidence to create a larger training dataset to improve the model. For example, in a use case of categorising millions of large documents into different genres or categories, it is difficult to create a large enough sample of examples to learn from. In this scenario, a smaller number of hand-created examples could be sufficient to develop a larger training dataset. Another type of machine learning involves reinforcement learning, which does not
require training data; instead, the model learns through a system of rewards for positive responses and corrections where responses suggest a mismatch. For example, a real-time recommendation system could train itself based on user interactions to identify which types of content they are likely to engage with and which ones they would reject.
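The reward-driven recommendation idea can be sketched in a few lines. This is a deliberately simplified illustration, not a real recommender: the content categories, the feedback signals and the learning rate are all invented, and production systems use much more sophisticated reinforcement learning algorithms.

```python
# Toy sketch of learning from rewards: keep a running value estimate per
# content category and nudge it towards the observed reward (1 = user
# engaged, 0 = user rejected) after each interaction.

def update(value, reward, rate=0.1):
    """Move the estimate a fraction of the way towards the observed reward."""
    return value + rate * (reward - value)

estimates = {"architecture": 0.0, "ceramics": 0.0}

# Simulated user interactions (invented feedback)
feedback = [("architecture", 1), ("architecture", 1), ("ceramics", 0)]
for category, reward in feedback:
    estimates[category] = update(estimates[category], reward)

# Recommend the category with the highest learned estimate
best = max(estimates, key=estimates.get)  # → "architecture"
```

No training dataset exists in advance; the estimates are shaped entirely by the stream of rewards, which is the defining feature of the reinforcement learning approach described above.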
While we have explored the different AI approaches in terms of how they function or are designed, it is also worth exploring the possible application contexts based on the data itself. For example, employing Natural Language Processing (Olsson, 2009) to quickly analyse large volumes of unstructured text from unknown datasets could offer an insight into the content, sentiment and genre of the text. Topic modelling (where a model uses statistically significant keywords) can help explain how the different topics in a collection have evolved over time. Using speech recognition, oral history recordings could be converted to digital text, which can then be indexed for future retrieval, analysed to understand topics of discussion, or mined to identify specific mentions of names, places or events (using named entity recognition). Handwritten manuscripts, through a process of optical character recognition, could be digitised into text, which could be further indexed or analysed to determine topics of interest or entities. Images (photographs or collections) could be analysed to identify specific objects and entities to support more accurate retrieval. This could be done by using existing manually annotated images as training data for deep neural networks to identify known objects in large image collections. Such automated processes can sift through a large number of documents and offer recommendations on the most appropriate descriptors or keywords for cataloguing. The applications of AI are wide-ranging, for example, classification of books (using Natural Language Processing), managing repositories and library resource usage analysis, metadata services and citation analytics (text data mining), and preservation and archival of imagery and video library databases (image processing) (Ali et al., 2020).
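The keyword-suggestion idea, proposing candidate descriptors for cataloguing from the text itself, can be sketched with a simple term-frequency count. This is only an illustrative toy assuming an invented sample document and a tiny stopword list; real NLP pipelines (topic models, named entity recognition) are far more sophisticated.

```python
# Count terms in a document, filter common stopwords, and propose the most
# frequent remaining words as candidate descriptors for cataloguing.
from collections import Counter

STOPWORDS = {"the", "of", "and", "a", "in", "is", "for"}

def suggest_keywords(text, n=3):
    words = [w.strip(".,").lower() for w in text.split()]
    counts = Counter(w for w in words if w and w not in STOPWORDS)
    return [word for word, _ in counts.most_common(n)]

doc = ("The archive holds drawings of the house, photographs of the house, "
       "and letters about the house and its drawings.")
keywords = suggest_keywords(doc)  # "house" and "drawings" rank first
```

Even this crude frequency count surfaces plausible descriptors ('house', 'drawings'); statistical topic modelling generalises the same intuition across whole collections and over time.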
A slightly more holistic application of AI systems is the example of digital assistants in consumer devices such as Amazon Alexa or Google Assistant, where a system interacts with a user to simulate a conversation, or at least respond to questions and offer guidance on a collection (Aghav-Palwe and Gunjal, 2021; Seeger et al., 2021). Such systems combine a range of AI technologies, such as speech recognition, recommender systems and natural language processing, to deliver an enriched experience for users. Other systems that combine multiple AI technologies could also focus on a specific aspect of the library system such as the search engine: for example, the CiteSeerX system uses AI for extracting metadata, de-duplicating documents, author disambiguation and table extraction (Wu et al., 2015). The possibilities for libraries engaging with AI technologies, although yet to become mainstream, are therefore immense. Furthermore, most of the examples given so far refer to applications of AI to data from libraries themselves. In reality, the library role may be more general. Services to discover and license non-library data may be more likely services that libraries will provide in the context of AI. Here it is the data stewardship and governance functions of libraries that will be central. As well as coming to see library collections as themselves data, what is in the collection may be expanded to include data, with library notions of collecting therefore shifting. Libraries can also manage data through the whole lifecycle, from creation through to preservation.
Robots are physical machines that are programmed to carry out a series of actions in an autonomous way. Often these are simply repeated without learning, but AI can be combined with robotics. For example, a robot picking items in a warehouse might use an algorithm to select the best path to navigate around the warehouse, or learn to place items in different places based on their attributes. In this paper, because we are trying to think inclusively about the AI field, we have included some examples of robots in a library context.
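The path-selection idea mentioned above can be sketched as a breadth-first search over a small grid: the robot finds a shortest route from its position to a target shelf while avoiding blocked cells. The grid layout and coordinates are invented for illustration; real navigation systems handle continuous space, sensors and uncertainty.

```python
# Breadth-first search over a grid: 0 = free aisle, 1 = shelving the robot
# cannot cross. Returns the number of steps in a shortest path, or -1 if
# the goal is unreachable.
from collections import deque

def shortest_path_length(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        (r, c), dist = queue.popleft()
        if (r, c) == goal:
            return dist
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), dist + 1))
    return -1

layout = [
    [0, 0, 0],
    [1, 1, 0],  # a run of shelving forces a detour
    [0, 0, 0],
]
steps = shortest_path_length(layout, (0, 0), (2, 0))  # → 6
```

Simple search like this is classical AI rather than machine learning; combining it with learned behaviour (e.g. where to place items) is where robotics and the AI techniques discussed earlier meet.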
This section has established in a general way how AI techniques may touch library work. But it only begins to hint at how libraries and library work might change in practice. The next section examines practical examples of application, organised by the area of library operation.
Library applications
This section examines more closely the range of specific AI applications that seem to be emerging as relevant to libraries today. It builds on and extends a few other attempts to define the range of AI applications in libraries, such as those in Cox (2021), Hervieux and Wheatley (2022) and Huang (2022). The approach taken here is to adopt an expansive and inclusive definition, as a way to help the reader navigate the breadth of what is a rather complex scene. In particular library sectors, some applications may feel much more relevant, for example knowledge discovery applications in research libraries and archives. Public libraries are arguably more likely to be centrally concerned with the links between AI and information literacy. Our purpose is to paint the widest picture so we can gain an understanding of change across the sectors.
We suggest here that there could be five conceptually different use cases of AI in libraries. Table 1 summarises these, with some indication of the skills and knowledge required (column C), key drivers (column D) and barriers (column E). Our taxonomy is organised not by differing technologies but by which aspect of library work is impacted (column B).
Use case 1: Application of AI to backend library operations
This is where AI is applied to routine clerical and manual tasks. Two contrasting examples can be identified. One is the use of Robotic Process Automation (RPA) to automate frequently repeated clerical tasks (Lin et al., 2022; Milholland and Maddalena, 2022). RPA allows one to get a computer to perform tasks usually performed manually which involve a set of repetitive steps where limited human judgement is required to manipulate text or other data, such as taking data from a number of sources, processing it in some way and recording the output. Such applications are highly relevant to all libraries in automating workflows such as processing inputs from forms, migrating data from one system to another or reconciling inputs from a number of sources (Lin et al., 2022). Lin et al. (2022) offer a case study of such uses. Skills required include knowledge of how to analyse workflows and technical knowledge to use RPA tools.
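The kind of repetitive reconciliation task described above can be sketched in plain Python: take records from two sources, merge them on a shared identifier, and produce one consolidated output. The field names and records are invented for illustration; real RPA tools drive existing applications and forms rather than working on in-memory lists.

```python
# Reconcile records from two sources by a shared identifier, as an RPA
# workflow might: fields from the primary source win on conflicts.

def reconcile(primary, secondary, key="id"):
    """Merge two record lists into one, keyed on `key`."""
    merged = {rec[key]: dict(rec) for rec in secondary}
    for rec in primary:
        merged.setdefault(rec[key], {}).update(rec)
    return sorted(merged.values(), key=lambda r: r[key])

catalogue = [{"id": 1, "title": "Willow Road plans"},
             {"id": 2, "title": "Site photographs"}]
loans_system = [{"id": 2, "on_loan": True},
                {"id": 3, "title": "Correspondence", "on_loan": False}]

records = reconcile(catalogue, loans_system)
# Three consolidated records; id 2 combines title and loan status
```

The point of RPA is that exactly this sort of deterministic, judgement-free merging, which staff might otherwise do by copy-and-paste between systems, is handed to the machine.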
The second example is the use of robotics in the manual tasks of sorting returned books or checking shelves for out-of-sequence books (Tella, 2020; Vlachos et al., 2020). We could also see the spread of generic robots, such as robot cleaners. The most striking use of robot power in a library context is Automated Storage and Retrieval Systems (AS/RS) or 'bookbots', where books are kept in mass storage and retrieved on demand for use (McCaffrey, 2021; Sproles and Kuehn, 2014). Pioneer libraries began using such systems around 2005. A key driver is to free up space from bookshelves for other uses. But introducing such systems is only likely to be undertaken as part of a major rebuilding project and has fundamental impacts on systems and the organisation, as the McCaffrey (2021) case study makes clear. Free-standing robotics are likely to be more widely used. Both these types of use are driven by efficiency and appear to be less controversial because they generally seem to relieve humans of mundane tasks.
Use case 2: Application of AI to library services to users
This is where AI is applied directly to library services for users. Two main examples are the application to knowledge discovery and chatbots.
The use of AI in knowledge discovery involves applying AI techniques to describe library collections, usually special collections of unique archival material such as those found in research libraries (Cordell, 2020; EuropeanaTech, 2021). But equally it could be applied to legal texts and knowhow in a law library. This would often be driven by the scale involved, where the amount of material defies human creation of metadata for discovery. Automatically created descriptive data could also add a new dimension to discovery where it creates new ways to navigate content (Coleman et al., 2022). AI techniques could be applied to a wide range of types of collections, including texts, handwritten manuscripts, sound files or images. This type of work has a fairly long tradition, though it has not always been called AI. Here AI relates to the central library role of collection management.
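To make the idea of automatically created descriptive data concrete, here is a deliberately naive sketch of subject tagging; the vocabulary and subject headings are invented for illustration, whereas production systems use trained models and established controlled vocabularies:

```python
# Illustrative controlled vocabulary; real systems learn these associations
# from training data rather than hand-picked term sets.
SUBJECT_TERMS = {
    "maritime": {"ship", "voyage", "harbour", "sailor"},
    "agriculture": {"harvest", "crop", "plough", "farm"},
}

def suggest_subjects(description):
    """Suggest subject headings whose vocabulary overlaps an item's description."""
    words = set(description.lower().split())
    return sorted(subject for subject, terms in SUBJECT_TERMS.items() if words & terms)
```

Even this toy version shows the barrier discussed below: the term sets stand in for training data, and a historic collection whose vocabulary differs from the lists would be tagged poorly.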
One of the main barriers is that algorithms trained on modern handwriting or images will be less effective with historic script or images. This implies the need to create costly training data for unique collections, and there may be limited ability to reuse such training data. Currently there are few off-the-shelf solutions, so there are significant technical development costs, and many libraries would probably not have the resources to undertake them. Further, even big institutions with significant resources and a long history of developing such solutions acknowledge the challenge of turning projects into services at scale (EuropeanaTech, 2021). Hence much of the relevant literature cites projects working on particular collections, often in digital humanities contexts.
There are a number of ethical concerns here too (Padilla, 2019). Such developments of AI arise at a time when the provenance of collections is being increasingly closely questioned: particularly those gathered about indigenous peoples during the colonial era, which reflect collecting practices far removed from contemporary notions of consent and represent indigenous people in ways to which they have not agreed. AI may in some ways entrench or exacerbate these problems. There is also an issue around how to make outputs intelligible to scholars and other users who are not expert in the technology (Terras, 2022). It is also important to mention that there are sustainability issues around some machine learning (Brevini, 2020).
What could be considered a variation on this application is the use of AI techniques to create living systematic literature reviews (SLRs) (Grbin et al., 2022; Jonnalagadda et al., 2015). Given the scale and velocity of publication, the ability of researchers to keep up with the literature by manual methods is under threat. A living systematic review would employ automated techniques in part of the SLR process. This is particularly critical in the health context, where SLRs are relied on as the basis for advice on evidence-based healthcare interventions. For example, new publications could be sifted for material relevant to an SLR, with some filtering of non-relevant material. This is typically a librarian role because of their knowledge of bibliographic databases and precise search techniques. Here AI is being applied to texts, but published texts rather than rare or unique collections as in the previous application. Some elements can be automated, but librarians are likely to retain roles such as choosing databases, training tools and interpreting licence agreements (Grbin et al., 2022). Grbin et al. (2022) offer a case study of library involvement in building such an SLR system.
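A hedged sketch of the sifting step described above: a first-pass screen that auto-keeps records matching enough inclusion terms and refers the remainder to a human reviewer. The records, terms and threshold are invented for illustration; real living-SLR pipelines use trained classifiers and validated screening protocols:

```python
def screen_records(records, include_terms, threshold=2):
    """First-pass screening for a living systematic review: auto-keep records whose
    title/abstract mention enough inclusion terms; refer the rest to a human."""
    keep, refer = [], []
    for record in records:
        text = (record["title"] + " " + record.get("abstract", "")).lower()
        hits = sum(1 for term in include_terms if term in text)
        # Records below the threshold are not discarded, only routed for review,
        # preserving the human-in-the-loop role described in the text.
        (keep if hits >= threshold else refer).append(record["title"])
    return keep, refer
```

Note that the librarian's expertise still enters twice: in choosing the inclusion terms (analogous to designing a precise database search) and in reviewing the referred pile.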
A rather different application of AI within library services for users is chatbots and digital assistants. The potential merits of chatbots have been put forward for some time, based on their 24/7 availability to respond to user enquiries and their ability to deal with scale (McNeal and Newyear, 2013; Vincze, 2017). The benefits would appear to apply across the library sectors and relate to library roles in interacting with users. Not all chatbots are based on AI, but AI can help produce more adaptive responses and move away from very programmed interactions. There does seem to be some evidence that the technology is maturing so that chatbots can be created without programming skills, although there are still few off-the-shelf solutions for libraries. Chatbots have a wide range of potential uses in libraries, the most obvious being to respond to routine information requests or even handle the early stages of complex reference enquiries. Digital assistants, such as Alexa, can also be customised to answer user queries, explain collections or offer guided tours (Williams, 2019). Chatbots and digital assistants could also be used to gather routine information from users, such as students, or support performance of certain tasks (such as requesting an interlibrary loan). Chat interfaces to search may come. Chatbots that are buddies or offer emotional support are also being trialled in educational contexts. It seems that people can develop quite rich, complex relations with chatbots (Skjuve et al., 2021). Such applications do raise a number of ethical issues, such as how to ensure an unbiased response to all users and, if user data is collected to make the service adaptive, how consent is obtained and privacy protected.
To date the take-up of chatbots and voice agents seems limited (though the authors are not aware of a systematic study of the spread of chatbots in libraries), perhaps because the cost of development is seen to outweigh the benefits given the complexity of user enquiries. Reference interview theory emphasises that a presenting question is not the 'real' question the user has, and that the interview is a complex interaction to elicit the true information need. If that is the case, chatbots are unlikely to be fully effective. But it seems that they could address many of the routine information requests that libraries receive and refer the others to a human.
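The division of labour just described, a bot for routine requests with referral of everything else to a human, can be sketched even without AI, as a pattern-matched responder. The questions and answers here are hypothetical; AI-based chatbots replace the hand-written patterns with learned intent recognition:

```python
import re

# Hypothetical routine enquiries; a real deployment would cover the library's own FAQ.
ROUTINE_ANSWERS = [
    (re.compile(r"opening hours|when.*open"), "We are open 9am-9pm, Monday to Saturday."),
    (re.compile(r"renew"), "You can renew loans under 'My loans' in your online account."),
    (re.compile(r"interlibrary|ill request"), "Interlibrary loan requests can be placed via the request form."),
]

def answer(enquiry):
    """Answer routine enquiries by pattern matching; refer anything else to a human,
    reflecting the limits chatbots face with complex reference interviews."""
    text = enquiry.lower()
    for pattern, reply in ROUTINE_ANSWERS:
        if pattern.search(text):
            return reply
    # Fallback: the 'real question' problem from reference interview theory
    # means anything unmatched goes to a librarian rather than a guess.
    return "I'll refer this to a librarian who can help."
```

The explicit fallback is the design choice that matters: a bot that guesses at complex enquiries erodes trust, while one that refers them preserves the librarian's role.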
These types of applications of AI are central in many cases to how AI might change the way libraries are experienced, at least in the long run. As such they are all likely to be met with some resistance in terms of how they seem to change professional work and replace professional skill with automation. Change management will be critical to success.
Use case 3: Supporting communities of data scientists
Our third major application of AI in libraries is where the library supports data scientist communities, drawing on its expertise in stewardship of information. Libraries have a number of capabilities that could be highly relevant to data scientists in an organisation, such as:
• Data search, as an extension of support for searching for literature
• Data licensing, as an extension of their licensing of other types of content
• Copyright advice, as an extension of their expertise in IPR
• Data management, as an extension of collecting activities
• Data preservation, for example providing a repository for derived data and for code, as an extension of their preservation function within collection management
• Open methods, as an extension of the wider commitment of many libraries to openness, for example open science
In this context the library is one among a number of service providers which could support data science communities. The library's role is particularly plausible where a research collection based in the library has numbers of scholars working around it. This would most typically be researchers in the digital humanities, where library collections are often a primary source. Libraries might well play a leadership role in creating such communities, though the focus here might be outward facing, to scholars from any institution interested in a particular collection. But many institutions, especially universities, also contain growing communities of data scientists, or researchers using some data science techniques, across all the disciplines, to whom the library could offer expertise. These types of community are perhaps more likely to be led by computer scientists or even platform vendors, but one can see an important role for the library, especially in trying to expand interest in data science beyond engineers. It could act as a neutral space for interdisciplinary working. AI labs or collaboratories hosted in libraries have begun to emerge to do this and offer case studies of what might be involved (Dekker et al., 2022; Wang et al., 2022). The activities in such units could include organising training programmes and reading groups. Libraries might support data science teaching programmes with data management teaching, which tends to get neglected in curricula (Shao et al., 2021).
Use case 4: Data and AI literacy as a dimension of information literacy
The previous examples mostly relate to using AI in some way directly in library work. But with the pervasive use of AI in wider society, adding a dimension of data and AI literacy to citizen literacies becomes a key issue. Citizens need some data and AI literacy because AI is being used in decision making in many sectors of life, both by the state and by commercial companies (Ridley and Pawlick-Potts, 2021). More directly in the realm of information seeking and use, the search and recommendation tools that most of us encounter every day, such as Google and Amazon, are based on AI. Furthermore, other applications of AI are encountered increasingly in daily knowledge work, such as transcription of online meetings, translation of texts and a widening array of writing tools, from auto-suggest and auto-correct, grammar and style checking to content writers that will compose text based on a short prompt. The availability of these tools is exciting for access to and creation of knowledge, but they need to be used appropriately, based on some understanding of how they work. AI is also used in creating false information such as deepfakes.
Citizens need some understanding of this too. These issues point to the need for information literacy training to encompass some basic understanding of data and AI (Long and Magerko, 2020). In developing this role, an important driver is the need to protect freedom of expression and of search, and so the core ethical values of librarianship (IFLA, 2019). Hence Toane et al. (2022) describe a case study of a library using a Finnish-created Open Educational Resource on AI to educate its user base about what AI is. More specifically, libraries may be involved in initiatives to inform users about particular classes of AI relevant to their activities, such as teaching international students how to use translation tools appropriately (Bowker et al., 2022). Incorporating data and AI literacy into information literacy and other user training implies some basic knowledge of AI combined with librarians' already increasing understanding of pedagogy. Ridley and Pawlick-Potts (2021) identify a further role in helping to make AI explainable in general.
Use case 5: Use of data to analyse, predict and influence user behaviour
AI is about using data to identify patterns. Libraries have collections that can be treated as data, but they also hold many forms of data about users. AI techniques are highly relevant to analysing user data (Litsey and Mauldin, 2018). This could be to predict, or perhaps even influence, user behaviour. A simple example is the use of sentiment analysis to investigate positive or negative feelings towards the organisation. Predictive modelling would seek to anticipate levels of use of the library at certain times of year or to predict book circulation: Iqbal et al. (2020) offer a case study of doing this. If the driver for use case 4 is ethical, the main challenges of use case 5 are themselves ethical. Clearly it is a legitimate activity, indeed a key strategy and planning requirement, for libraries to study users in order to design better services. Applying AI to data about users is simply an extension of this. It is generally accepted that data science is a combination of domain knowledge, statistical analysis and computational methods (Shao et al., 2021). Librarians have the first; they would need the other two types of skills. But there are risks in this type of use.
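The sentiment analysis example mentioned above can be sketched in its simplest lexicon-based form. The word lists and comments are invented for illustration; production systems use trained sentiment models rather than hand-built lexicons:

```python
# Tiny illustrative lexicons; real sentiment analysis uses trained models.
POSITIVE = {"helpful", "friendly", "great", "excellent", "quiet"}
NEGATIVE = {"rude", "slow", "noisy", "broken", "confusing"}

def sentiment(comment):
    """Score a feedback comment as positive-word count minus negative-word count."""
    words = comment.lower().replace(".", " ").replace(",", " ").split()
    return sum(1 for w in words if w in POSITIVE) - sum(1 for w in words if w in NEGATIVE)
```

Even this toy scorer illustrates the ethical point that follows: the input is user-generated data, so questions of consent and purpose arise before any analysis is run.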
Given that this is all about using data, it is useful to reflect on some of the failings in how libraries have used data in another, related context: learning analytics. This is the movement to use all sorts of data about student learners to inform them and their teachers about their learning. What critics of library use of learning analytics have uncovered is a number of ethical failings (at least in the US) in how this data has typically been used (Jones et al., 2020). Students often had not given consent to these uses and had no awareness that their data was being used in such ways. Projects had not undergone ethical review, and few libraries had responsible data use statements. There were clearly privacy issues in how such data had been employed, and a potential chilling effect on free thought and expression. Ultimately it often seemed quite unclear whether students benefited from the use of their data or whether institutions were the main beneficiaries. There is also evidence that the statistical analyses applied to learning analytics by libraries are often flawed (Robertshaw and Asher, 2019). Applying AI to collections as data is clearly problematic ethically at some level, but this is even more true of personal data about users and their behaviour. In Europe, GDPR may also place legal limits on how data about users can be analysed.
Discussion: How will AI impact employment and equality, diversity and inclusion in libraries?
Because the picture is complex, concerns about the impact on the work and jobs of librarians cannot easily be answered. The Global Partnership on Artificial Intelligence (2020) elaborates the wide range of potential impacts of AI on employment, whereby jobs can be replaced or reduced (automation destroys jobs), divided (some benefit, others are deskilled, so workforce divisions deepen), complemented and supplemented (where AI adds to what the professional is able to do) or even rehumanised (where jobs are enriched by having routine elements removed). All of these are likely to happen to some degree, and we do not yet know how the balance of effects will play out. The skills required for the different types of application of AI seem different (Table 1, column C). At least we can be fairly sure librarians will still be needed, because we are entering a more complex information landscape than ever before, so information mediating roles will shift but not disappear. Given the reliance of AI on good quality, well managed data, the fact that librarians have good quality data in their collections will become important. Librarians' understanding of things like data structures puts them in a good position to play future roles. They probably need to do more work to translate their existing skills in finding, managing and preserving information to apply specifically to data, and to talk more about data.
But we should also think about the potential impact of these uses of AI on equality, diversity and inclusion within a profession which has a female majority but still has a persistent gender pay gap (Hall et al., 2016; Howard et al., 2020). AI should not be seen simplistically as a neutral technology. AI symbolically privileges white male identity, not because of some essentialist link between IT and gender and race, but because there is a deep-seated link between cultural notions of (information) technology as rational and neutral, and cultural constructions of masculinity. It has been argued that AI itself, as currently conceived and represented in Western thought, is masculine (Adam, 2006) and white (Cave and Dihal, 2020). More directly, IT is stereotypically a male profession, and specifically in AI workforces, especially in technical roles, white males are over-represented (Gehlhaus and Mutis, 2021). Furthermore, AI is often being developed today by very powerful global tech companies embedded within a capitalist system with historical links to patriarchy, colonialism and neo-liberalism (Crawford, 2021; Jimenez et al., 2022). The manifestations of AI produced within these technological systems are likely to perpetuate sexist and racist assumptions. This is relevant to wider developments of AI in society, but also touches on how AI might manifest itself in the library world.
Thus there is a widely recognised problem of under-representation of women in the design of AI systems. This applies not just to roles in IT, but to groups such as librarians and other information professionals, who have a role both in the direct creation and development of AI and in maintaining and managing AI systems (Collett et al., 2022). Unchallenged, AI might well reinforce inequalities within the information profession, because men are over-represented in technical roles, which are seen as driving automation, and in more senior roles, which are less vulnerable to automation.
At a symbolic level, the treatment of technology in LIS/librarianship has often been solutionist, portraying technology as an almost magical solution to complex societal problems (Morozov, 2013). This reinforces aspects of LIS which also portray libraries as neutral, objective, rational spaces (Mirza and Seale, 2017). For example, when the US information professional body represents future trends (including AI) it emphasises individualistic entrepreneurialism, while 'Emotion and care work, reproductive labor, service, maintenance work, and manual labor are disproportionately seen as feminised labor and "non-skilled" service labor' (Mirza and Seale, 2017: 178). Future trends are described by the professional body without acknowledgement of the precarious ghost work on which they are actually based and without acknowledging their environmental impact. Such discourses around future technologies like AI play an important role in defining what is important in the work of a profession, but do so in ways that entrench inequality by emphasising forms of work in which white males are over-represented (Mirza and Seale, 2017). Fortunately, this does not mean that inequalities are predetermined. They can be challenged, especially by initiatives to expand access to learning the relevant skills (Collett et al., 2022).
Conclusion
This paper has sought to help librarians navigate the potentially widening impact of AI on libraries. The general definitions in part one assist the reader in understanding the scope of AI and point to the underlying concern about the relation between machines and humans, but also reveal it as an evolving idea. Part two offers some insights into the technology itself, with library-based examples. Part three zooms in to identify how AI can be applied in different areas of library operation. This reveals the potential of AI to raise the efficiency of library operations; improve user services, such as through enhanced knowledge discovery, dynamic SLRs and user interaction through chatbots; create new roles around supporting communities; add a new dimension to information literacy; and support understanding of, and influence over, user behaviour. There remain significant barriers, such as ethical and legal issues, lack of off-the-shelf solutions, cost and implementation challenges, skill gaps, collaboration challenges, and simply the pull of other priorities and innovations. The discussion then considers the potential impact of AI on library work, and particularly the implications for equality, diversity and inclusion within the profession.
Having been through this process of analysis, it is easier to understand the wider picture of the pervasive but uneven impact of AI. AI developments are quite advanced in some library sectors, at least in some institutions within them. Equally, there are some applications that have been proposed on good grounds for many years but have seemingly not yet materialised at scale. Some of these potential applications may not, in the end, have a significant impact on services at all. So there is a sense here of possibility, not inevitability. We feel it is an important message that everything we have written about comprises possibilities that we can shape, not an inevitable 'wave of the future'.
We may be able to say something about which use cases of AI are most likely in most contexts. It is reasonable to expect the applications closest to what libraries already do, and with the lowest resource implications, to be most quickly and widely adopted. This would make it likely that work around AI literacy will develop faster than the other areas. Applications to the collection for knowledge discovery are developing in libraries with rich unique collections and large resources. Yet, as with most technologies, it seems likely that libraries will turn to third-party commercial vendors to supply pre-packaged solutions, for example easy-to-customise chatbots. So it remains an open question how AI will start being applied by library system vendors, bibliographic service providers and other tech companies in the library space.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Table 1. Use cases of AI in libraries.
Reducing Internet Gambling Harms Using Behavioral Science: A Stakeholder Framework
Internet gambling provides a unique environment with design mechanics and data-driven opportunities that can impact gambling-related harms. Some elements of Internet gambling including isolation, lack of interruption, and constant, easy access have been argued to pose specific risks. However, identifiable player accounts enable identification of behavioral risk markers and personalized private interfaces to push customized messages and interventions. The structural design of the Internet gambling environment (website or app) can have a strong influence on individual behavior. However, unlike land-based venues, Internet gambling has few specific policies outlining acceptable and unacceptable design practices. Harm minimization including responsible gambling frameworks typically include roles and responsibilities for multiple stakeholders including individual users, industry operators, government regulators, and community organizations. This paper presents a framework for how behavioral science principles can inform appropriate stakeholder actions to minimize Internet gambling-related harms. A customer journey through Internet gambling demonstrates how a multidisciplinary nexus of collaborative effort may facilitate a reduction in harms associated with Internet gambling for consumers at all stages of risk. Collaborative efforts between stakeholders could result in the implementation of appropriate design strategies to assist individuals to make decisions and engage in healthy, sustainable behaviors.
INTRODUCTION
Gambling is a relatively common activity; however, for a minority of people gambling can lead to the development of gambling disorder, a mental disorder categorized as a behavioral addiction. Gambling disorder is highly co-morbid with other mental disorders and is characterized by a preoccupation with gambling and persistence and lack of control despite wide-spread negative consequences (1). Gambling problems may include sub-clinical but serious harms, which are experienced by 0.4-2.0% of adults internationally (2). Of those who experience gambling problems, only a minority (7-29%) will seek treatment for these problems (3). The global online gambling market is expected to grow 13.2% between 2019 and 2020, from USD$58.9 billion to USD$66.7 billion (4). This growth appears to be due to COVID-19, which is limiting access to land-based gambling opportunities and resulting in more people gambling online.
Internet gambling occurs in a unique environment containing design mechanics and data-driven opportunities, with the potential to impact gambling-related harms. Just as the layout of land-based venues has been shown to influence gambling behavior (5)(6)(7), the design of websites has been shown to influence general e-commerce behavior (8). However, there has been minimal research investigating the impact of the design of Internet gambling websites. Some elements of Internet gambling, including isolation, lack of interruption, and constant, easy access, have been argued to pose specific risks (9). There is minimal research to guide evidence-based policies to design a sustainable online gambling environment in which individuals gamble at a level that is affordable for them and free from coercion or undue influence. We present here a framework for the role each key stakeholder can play in reducing harms from Internet gambling.
Persuasive design combines the theory of behavioral design with computer technology (10) and has been popularized by nudge theory (11). Nudge theory uses choice architecture and choice framing to ask questions in a way that nudges individuals' behavior in certain directions without restricting the available options, such as through opt-out default retirement funds. Systems of rewards and punishments in online gambling products are designed to encourage continued use and attention, additional payments, or other behaviors that are not always beneficial to the user, or consistent with their own plans and values. Examples include push notifications of time-limited promotional offers or matched deposits with complicated terms and conditions and limited benefits for users; excessive friction creating difficulty in withdrawing deposited funds; targeted push messages promoting betting or spending options matching the user's profile ("people like you bet on..."); and encouraging continuous use by eliminating natural breaks in play or the ability to pause (e.g., infinity scrolling). Most of these features are effective because they exploit natural human weaknesses in exercising self-control (12). In the heat of the moment, people often make decisions that favor immediate pleasure over later costs, in a way that is not consistent with their initial plans. Online gambling providers exploit this universal feature of human behavior to encourage more time and money spent on gambling.
On the positive side, behavioral science can identify nudges that steer users toward healthier levels of engagement with online gambling (which, for some people, may not include any gambling). Technological nudges are adaptable across settings with varying political and societal preferences around autonomy and paternalism, as the strength of the nudges can be adjusted accordingly. Software has been developed to monitor gambling and user activity, identify risk indicators, and enable well-timed interventions, including personalized, normative feedback and encouragement to moderate play through pre-commitment devices (13)(14)(15). Dynamic messages can create a break in play and encourage self-appraisal (16,17). Electronic gaming machines have been developed with customizable alarm clocks and ring-fenced winnings to prevent re-gambling (18). Digital wallets can limit gambling expenditure and provide personal feedback on gambling spend (19). Design options may include "plain packaging" for gambling sites (minimizing color and graphics); increasing friction by requiring users to click through different pages to access different betting/game options; creating pauses to slow the betting speed; reducing default bets and requiring users to confirm bets and manually enter the amount; using default automated withdrawals of winnings; and default opt-out of notifications and marketing.
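As a minimal sketch of the risk-indicator monitoring described above, the following flags a few simple behavioral markers in a user's recent sessions. The specific markers and thresholds are illustrative assumptions only, not clinically validated; deployed systems derive risk markers from full account data and published research:

```python
def risk_flags(sessions, weekly_deposit_limit=200.0):
    """Screen recent sessions for simple behavioral risk markers.
    Thresholds here are illustrative, not clinically validated."""
    flags = set()
    deposits = [s["deposit"] for s in sessions]
    if sum(deposits) > weekly_deposit_limit:
        flags.add("spend_over_limit")
    if any(2 <= s["hour"] < 6 for s in sessions):  # play in the small hours
        flags.add("late_night_play")
    if len(deposits) >= 3 and deposits[-1] > 2 * deposits[0]:
        flags.add("escalating_deposits")           # possible loss-chasing
    return sorted(flags)
```

In a real system, raised flags would trigger the well-timed interventions the text describes, such as a personalized normative-feedback message, rather than any punitive action.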
Policies based on behavioral science principles have been shown to be effective in influencing consumer behavior, including where personal risks are possible (20), although these have only recently been considered for gambling policies (21)(22)(23). This paper aims to present a framework for how behavioral science principles can inform appropriate stakeholder actions to minimize Internet gambling-related harm, with a focus on how technology can impact harms.
FRAMEWORK
There is a web of interacting factors that influence gambling-related harms, including individual cognitive and personality characteristics of gambling users; various enticements and subtle influences used by gambling providers; cultural and social factors; availability of alcohol; and, of course, individual choice. Opinions differ on who among those involved in the gambling experience ought to be responsible for reducing those harms. However, all those involved can, if they wish to, implement measures to do so.
Customer journey maps visually represent user experiences in using services such as gambling websites (24). We use this method in Figure 1 to illustrate (1) a hypothetical journey of escalating harms from online gambling that a customer, "Joshua," could take, and (2) the roles different stakeholders could play at each step of the journey in order to alter its course toward a lower level of harm. We intend this map to highlight pivotal points from a user perspective and provide tangible calls to action for all stakeholders.
Individual Users
There is a range of actions that individuals can take to decrease the chance that their online gambling behaviors cause harms for themselves, their families, and their communities. Individuals should inform themselves about the risks and persuasion associated with website features. With such knowledge, individuals will be better placed to select regulated websites that employ responsible design, to turn off any default persuasive design elements, and to select the settings they prefer. This could include disabling features that nudge users toward continued gambling. At the same time, some individuals will find it difficult to make informed decisions about gambling due to factors such as comorbid conditions, addiction, or impulsivity that make it more difficult to exercise self-control. This speaks to the necessity of this broader framework that identifies roles for multiple stakeholders.

FIGURE 1 | Hypothetical customer pathway illustrating appropriate stakeholder interventions according to level of gambling behavior and harm. For context, on average, 60-70% of individuals fall into some category of gambling behavior per year, with 1-2% falling in the "problem" category; however, this varies between jurisdictions (25).
Similarly, individuals should inform themselves about tools available to reduce harms. These include consumer protection tools such as self-exclusions and limits (26), but may also include more general self-regulation tools that can be implemented in any behavioral domain to reduce the need to exercise self-control in the moment (27). Apps and software can be used to limit and restrict access to specific apps or websites, and limits can be placed on payments and access to credit. Users may avoid features that minimize friction in order to give themselves greater opportunities for self-reflection: for example, by declining options to remain signed in to betting accounts and avoiding saved passwords, so that passwords must be entered manually. At the beginning of a gambling session, an individual may set a timer on their device with an alarm to signal the planned end of the session. Such strategies are only likely to be adopted by individuals who are motivated to regulate or reduce their gambling (27,28). Other individuals will likely view these strategies as a hindrance toward their goal of gambling, which might be meeting needs for relatedness, competency, or mood modulation (29)(30)(31). Knowledge of available tools, combined with a desire or willingness to use them, might help to minimize the intention-behavior gap and self-control issues (27,28). There are many tools available to assist individuals in enforcing their planned behaviors if they have the knowledge and motivation to use them and the autonomy to make informed choices.
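As a concrete illustration of the kind of self-regulation tooling described above, here is a minimal sketch (ours, not from the cited literature; the class name, cap, and timer values are illustrative assumptions) of a self-set weekly deposit cap combined with a session timer:

```python
from datetime import datetime, timedelta

class SelfSetLimits:
    """User-configured guardrails: a weekly deposit cap plus a session timer.
    All names and numbers are illustrative, not from any real product."""

    def __init__(self, weekly_deposit_cap, session_minutes):
        self.weekly_deposit_cap = weekly_deposit_cap
        self.session_minutes = session_minutes
        self.deposits = []          # list of (timestamp, amount)
        self.session_start = None

    def start_session(self, now):
        self.session_start = now

    def session_expired(self, now):
        # True once the planned end of the gambling session has passed
        return now - self.session_start >= timedelta(minutes=self.session_minutes)

    def try_deposit(self, now, amount):
        # Reject any deposit that would push the rolling 7-day total over the cap
        week_ago = now - timedelta(days=7)
        spent = sum(a for t, a in self.deposits if t >= week_ago)
        if spent + amount > self.weekly_deposit_cap:
            return False
        self.deposits.append((now, amount))
        return True
```

The point of such a design is that both limits are chosen by the user in advance, when self-control is easiest to exercise, rather than in the moment.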
Community Groups
Community groups are typically non-profit organizations (large or small, focused on broad or specific issues or target groups) that are established and operated independently from governments and are typically funded by a range of stakeholders, commonly governments or charitable donations. These groups have the capacity to provide education and outreach to communities, mobilize resources, advocate for citizens, challenge policy, and conduct projects that benefit communities. Community groups can collaborate with other stakeholders to reach shared goals, such as working with researchers to create and disseminate up-to-date communication materials about risks and protective strategies in formats that are accessible to individuals. In collaboration with researchers, community groups might also provide tools to help individuals understand their own personal risk of gambling harms, such as self-assessment quizzes with personalized feedback. These strategies might help to shift individual attitudes toward gambling (28). Community groups can also work in an advocacy role to convey the needs and concerns of individuals to regulators. Efforts are needed to ensure that funding received from stakeholders is provided independently, without restrictions or involvement by the funding body, so as to minimize conflicts of interest; funding should not be reliant on gambling expenditure.
Gambling Industry
The gambling industry is responsible for ensuring that websites, apps, products, offers, marketing, and communication are designed to support the customer's need for autonomy (31), encourage gambling only at personally affordable levels, and reduce the risk of foreseeable harms. Operators should avoid overly persuasive design elements, as these violate the principles of autonomy and informed choice. Features to avoid include those that create a sense of urgency (e.g., countdown timers on bets and promotions); that distort attitudes by creating overly optimistic perceptions of the chance of winning or reducing the perceived likelihood of losing (e.g., dynamic leader boards of recent winners, money-back-guarantee bets) (28); that provide irrelevant information perpetuating erroneous beliefs (e.g., details of previous wins in independent events such as winning lottery or roulette numbers, time since the last jackpot, locations where winning lottery tickets were sold) (28); that promote irrelevant information to perpetuate social norms (e.g., most popular bets, number of active users) (28); or that reduce the opportunity to reflect on the decision to place a bet or make a deposit (e.g., prompted bet sizes, frictionless betting).
Gambling industry operators have a responsibility to "know their customer," to verify a customer's identity prior to accepting any bets, and to avoid exacerbating any harms experienced by customers who are identified as at-risk or already experiencing gambling-related problems. Verified player accounts enable identification of behavioral risk markers and personalized private interfaces that push customized messages and interventions (32,33). For example, operators could delay sending promotional offers until they have a good understanding of their customers, and use continuous monitoring programs and algorithms to identify customers with risk indicators and respond appropriately with messages encouraging use of consumer protection tools, phone calls to check in with customers, or automatic blocking of promotions and marketing materials (26).
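To make "continuous monitoring programs and algorithms" concrete, the following toy rule set (our illustration; the two markers and their thresholds are assumptions, not validated indicators from the cited studies) flags simple behavioral risk markers from account data:

```python
def risk_flags(weekly_deposits, night_session_share):
    """Two illustrative behavioral risk markers. The thresholds are
    placeholder assumptions, not validated clinical cut-offs."""
    flags = []
    # Marker 1: deposits strictly escalating over each of the last four weeks
    last = weekly_deposits[-4:]
    if len(last) == 4 and all(b > a for a, b in zip(last, last[1:])):
        flags.append("escalating_deposits")
    # Marker 2: more than half of sessions happening overnight
    if night_session_share > 0.5:
        flags.append("frequent_night_play")
    return flags
```

A real system would combine many more markers and calibrate them against outcomes; the point is only that verified accounts make such rules computable, so that a flagged account can trigger the graduated responses described above.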
In addition to avoiding harm (the principle of non-maleficence), website operators also have the opportunity to do good for their customers (the principle of beneficence) (34). The gambling industry could implement consumer protection tools as the (modifiable) default option. For example, a time "limit" could be placed on all users, whereby a message alerts users when they have gambled for the limited time and requires them to change the default settings if they wish to gamble for longer. Users could be shown pop-up displays summarizing their behavior in comparison to that of other users (personalized, normative feedback), thereby potentially shifting their attitudes and social norms (28); directing the user to information about consumer protection tools that are available to them (e.g., spending limits and self-exclusion); and creating friction through pop-up messages and breaks in play that prompt the user to pause and reflect (e.g., "please confirm that you want to place your xth bet for this week") (35,36). To preserve autonomy (31), customers should be able to turn on (opt in to) notifications and marketing and turn off (opt out of) restrictions such as deposit limits; by making these active choices, operators are prompting sustainable gambling, that is, gambling within one's financial means and without associated harms. To ensure they are effective and well-received, the exact content and delivery of interventions should be negotiated in collaboration with other stakeholders, particularly users and researchers.
Government and Regulators
Like industry operators, governments and regulators have a responsibility to ensure that all legalized products and activities contribute to the public good and do no harm. Governments should consider approving non-exploitative forms of gambling, as well as consumer protections. Regulators and policy makers have a responsibility to commission research to guide the development of policy options, review the evidence that informs them, and seek consultation from other stakeholders and the public to ensure that industry standards conform to social expectations. As technology continues to evolve, commissioned research will likely be needed to analyse the impacts of individual website features and assess those impacts for harm. Experience from venue-based gambling regulation, which includes regulation of ambient and other factors that create unacceptable risks for gambling users, can also inform online gambling regulation. Regulatory and policy attention is increasingly focusing on online gambling as it grows as a proportion of gambling activity. As with all technology regulation, the challenge will be to create policies that are specific enough to be effective but also future-proof. As the gambling environment is affected by multiple layers of regulation across jurisdictions, intergovernmental coordination on the relevant issues will also be critically important.
Financial Institutions
Financial institutions, including banks and credit providers, can contribute to reducing harms from online gambling by providing consumer tools that assist individuals in managing their online gambling spending and by using algorithms to identify indicators of risky gambling (37). Financial institutions could provide individuals with comprehensive activity and expenditure statements collating all gambling spending in one place and expressing it as a proportion of income and discretionary expenditure. Statements could be an easily accessible way to communicate to customers evidence of risk indicators, such as increased gambling spend or frequency relative to previous time periods and relative to income and other expenses. Financial institutions could offer products with voluntary or default gambling spend limits or blocks and notify customers as they approach their limits. Non-gambling products could be developed and marketed to those who wish to opt out of gambling completely, such as adolescents and those who identify themselves as at-risk due to their personal situations. It is difficult for financial institutions to limit how customers spend their own money; however, there may be a duty-of-care implication in offering credit to customers for the purposes of gambling, given the demonstrated relationship between consumer debt and gambling problems (38,39).
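A statement of the kind described could be assembled from transaction data roughly as follows (an illustrative sketch; the field names and the 80% warning threshold are our assumptions):

```python
def gambling_statement(income, discretionary, gambling_txns, limit=None):
    """Collate gambling spend in one place and express it relative to income
    and discretionary expenditure; warn when a spend limit is being approached.
    Field names and the 0.8 warning threshold are illustrative assumptions."""
    total = sum(gambling_txns)
    report = {
        "gambling_total": total,
        "share_of_income": total / income,
        "share_of_discretionary": total / discretionary,
    }
    if limit is not None:
        report["limit_used"] = total / limit
        # Warn once 80% of the voluntary or default limit has been used
        report["approaching_limit"] = total >= 0.8 * limit
    return report
```

Such a report is cheap to compute from data the institution already holds, which is why statements of this kind are a natural first consumer tool.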
Researchers
There is a role for researchers across academic disciplines in working together to ensure that the evidence supporting each element involved in reducing harms from online gambling is robust. Research should focus both on the elements of the online environment (and their interactions with user characteristics) that can cause harm and on mechanisms for harnessing technology to prevent harm. Research should also investigate mental health issues specifically associated with online gambling; these can contribute to functional impairment and include depression, suicidal behavior, and proneness to psychoactive substance misuse, among other issues. Cross-disciplinary researchers can use behavioral economics theory and apply a variety of methods to identify the existing persuasive elements of online gambling design, identifying nudges that will help maintain healthy levels of gambling without restricting the autonomy of players (31), as well as quantifying the degree of impact of persuasive design features on gambling behavior and harms. Reliable indicators of the size of the effects of different features are needed to inform good policy about their use and to identify priority areas for policy development. Specific attention could be paid to features already in use, such as financial incentives (40), time-sensitive promotions (41), targeted advertising, default site settings, and displays of "latest winners" (41).
Researchers can use existing data to create models that identify at-risk individuals from their usage patterns before life-changing harm occurs. In collaboration with industry operators and financial institutions, this research could inform algorithms that identify at-risk individuals in practice and deliver automated, personalized intervention or prevention strategies. Research should also address the multiple harms related to online gambling: the intersection between online gambling, fraud, theft, and violence-related offenses, for example, could usefully be explored by criminologists. Such insights will strengthen arguments about the policy and regulatory responses required to minimize harms.
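In its simplest form, the kind of usage-pattern model described above might look like the following sketch (a hand-weighted logistic score; the features, weights, and threshold are placeholders, since a real system would estimate them from labelled data and validate them carefully):

```python
import math

def at_risk_score(features, weights, bias=-3.0):
    """Logistic risk score from usage-pattern features. The bias and weights
    here are placeholders, not estimates from any real dataset."""
    z = bias + sum(weights[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

# Illustrative feature weights (assumptions, for demonstration only)
WEIGHTS = {
    "deposits_per_week": 0.15,
    "loss_chasing_episodes": 0.8,
    "night_share": 1.2,
}

def should_intervene(features, threshold=0.5):
    # Trigger a personalized intervention once the score crosses the threshold
    return at_risk_score(features, WEIGHTS) >= threshold
```

The research questions flagged in the text are precisely what such a sketch leaves open: which features carry signal, how to fit and validate the weights, and where to set the intervention threshold so that help arrives before life-changing harm occurs.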
For maximal real-world impact, researchers across disciplines must be responsive to the needs and opinions of the other stakeholders with respect to priority research areas. This could involve proactive involvement of stakeholders into research design and dissemination and implementation of findings, as well as reactive design of research to address issues identified by other stakeholders. This will ensure that the research being conducted continues to address evolving real-world problems. All stakeholders should work with researchers to develop, test, and evaluate policies and strategies designed to minimize harms and to check for any unintended negative consequences.
CONCLUSION
This paper aimed to describe a framework of opportunities by which different stakeholder groups can contribute to the shared goal of reducing harm associated with online gambling. The value of this framework is that it makes explicit the roles and responsibilities of each stakeholder. In addition to the roles listed above, we propose open and transparent collaborative communication between stakeholder groups as a role for all stakeholders. This is particularly important in the field of (Internet) gambling, where stakeholders can hold competing interests. For example, operators' commercial imperatives compete with their need for corporate social responsibility and duty of care. Taxation revenue benefits must be balanced against governments' need to minimize harm caused by legal activities. Users face a conflict between possible long-term harms and short-term enjoyment. Community groups need to balance the needs of a minority who experience significant gambling-related harms with those of people who enjoy gambling and want to make autonomous choices. We intend this framework to be a step toward acknowledging and mediating these competing interests. The framework is intended to be preliminary and to facilitate discussion. As such, we welcome comments on further roles, not described here, that any of these stakeholder groups could play, as well as suggestions of other stakeholder groups who could contribute to reducing online gambling harms. We also hope that it will serve as a structured outline of the types of harm-reduction strategies that warrant further investigation to determine their effectiveness, as this empirical evidence is somewhat limited with respect to Internet gambling.
Practical steps can be taken to achieve collaboration between stakeholders to reduce Internet-gambling-related harms. Actions that facilitate communication between stakeholders could include conferences and roundtables dedicated to this purpose. Such events will increase each stakeholder's knowledge of the others' roles, values, and motivations, which will ultimately lead to more effective communication. Co-funding, co-design, and co-evaluation of projects are further ways in which stakeholders could make tangible strides toward the shared goal. Behavioral science principles respect individual autonomy while allowing modifiable restrictions to be used to protect the at-risk minority. Such restrictions may be imposed by regulators or implemented by operators as a form of self-regulation and corporate social responsibility, or even as a marketing strategy to attract customers. In any case, design strategies can assist individuals to make decisions and act in ways that contribute to a healthy and sustainable lifestyle and overall wellbeing.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author.
AUTHOR CONTRIBUTIONS
SG and AB conceived of the paper idea. SG and NB drafted the manuscript. All authors contributed to and revised the manuscript critically and approved the final version for submission.
Complete monotonicity for inverse powers of some combinatorially defined polynomials
We prove the complete monotonicity on $(0,\infty)^n$ for suitable inverse powers of the spanning-tree polynomials of graphs and, more generally, of the basis generating polynomials of certain classes of matroids. This generalizes a result of Szegő and answers, among other things, a long-standing question of Lewy and Askey concerning the positivity of Taylor coefficients for certain rational functions. Our proofs are based on two _ab initio_ methods for proving that $P^{-\beta}$ is completely monotone on a convex cone $C$: the determinantal method and the quadratic-form method. These methods are closely connected with harmonic analysis on Euclidean Jordan algebras (or equivalently on symmetric cones). We furthermore have a variety of constructions that, given such polynomials, can create other ones with the same property: among these are algebraic analogues of the matroid operations of deletion, contraction, direct sum, parallel connection, series connection and 2-sum. The complete monotonicity of $P^{-\beta}$ for some $\beta>0$ can be viewed as a strong quantitative version of the half-plane property (Hurwitz stability) for $P$, and is also related to the Rayleigh property for matroids.
Introduction
If P is a univariate or multivariate polynomial with real coefficients and strictly positive constant term, and β is a positive real number, it is sometimes of interest to know whether P^{−β} has all nonnegative (or even strictly positive) Taylor coefficients. A problem of this type arose in the late 1920s in Friedrichs and Lewy's study of the discretized time-dependent wave equation in two space dimensions: they needed the answer for the case P(y_1, y_2, y_3) = (1 − y_1)(1 − y_2) + (1 − y_1)(1 − y_3) + (1 − y_2)(1 − y_3) and β = 1. Lewy contacted Gabor Szegő, who proceeded to solve a generalization of this problem: Szegő [116] showed that for any n ≥ 1, the polynomial

P_n(y_1, . . . , y_n) = \sum_{i=1}^{n} \prod_{j \neq i} (1 − y_j)    (1.1)

has the property that P_n^{−β} has nonnegative Taylor coefficients for all β ≥ 1/2. (The cases n = 1, 2 are of course trivial; the interesting problem is for n ≥ 3.) Szegő's proof was surprisingly indirect, and exploited the Gegenbauer-Sonine addition theorem for Bessel functions together with Weber's first exponential integral.^1 Shortly thereafter, Kaluza [74] provided an elementary (albeit rather intricate) proof, but only for n = 3 and β = 1. In the early 1970s, Askey and Gasper [8] gave a partially alternate proof, using Jacobi polynomials in place of Bessel functions. Finally, Straub [115] has very recently produced simple and elegant proofs for the cases n = 3, 4 and β = 1, based on applying a positivity-preserving operator to another rational function whose Taylor coefficients are known (by a different elementary argument) to be nonnegative (indeed strictly positive).
Askey and Gasper, in discussing both Szegő's problem and a related unsolved problem of Lewy and Askey, expressed the hope that "there should be a combinatorial interpretation of these results" and observed that "this might suggest new methods" [8, p. 340]. The purpose of the present paper is to provide such a combinatorial interpretation, together with new and elementary (but we think powerful) methods of proof. As a consequence we are able to prove a far-reaching generalization of Szegő's original result, which includes as a special case an affirmative solution to the problem of Lewy and Askey. Indeed, we give two different proofs for the Lewy-Askey problem, based on viewing it as a member of two different families of generalizations of the n = 3 Szegő problem. Our methods turn out to be closely connected with harmonic analysis on Euclidean Jordan algebras (or equivalently on symmetric cones) [53].
Spanning-tree polynomials and series-parallel graphs
From a combinatorial point of view, one can see that Szegő's polynomial (1.1) is simply the spanning-tree generating polynomial T_G(x) for the n-cycle G = C_n, after the change of variables x_i = 1 − y_i. This suggests that an analogous result might hold for the spanning-tree polynomials of some wider class of graphs.^2 This conjecture is indeed true, as we shall show. Moreover (and this will turn out to be quite important in what follows), the change of variables x_i = 1 − y_i can be generalized to x_i = c_i − y_i for constants c_i > 0 that are not necessarily equal. We shall prove:

Theorem 1.1  Let G = (V, E) be a connected series-parallel graph, and let T_G(x) be its spanning-tree polynomial. Then, for every β ≥ 1/2 and every c > 0, the function T_G(c − y)^{−β} has nonnegative Taylor coefficients in the variables y.

Conversely, if G is a connected graph and there exists β ∈ (0, 1) \ {1/2} such that T_G(c − y)^{−β} has nonnegative Taylor coefficients (in the variables y) for all c > 0, then G is series-parallel.
The proof of the direct half of Theorem 1.1 is completely elementary (and indeed quite simple). The converse relies on a deep result from harmonic analysis on Euclidean Jordan algebras [60,72] [53, Chapter VII], for which, however, there now exist two different elementary proofs [109,31] [113].
Let us recall that a C^∞ function f(x_1, . . . , x_n) defined on (0, ∞)^n is termed completely monotone if its partial derivatives of all orders alternate in sign, i.e.

(−1)^k \frac{\partial^k f}{\partial x_{i_1} \cdots \partial x_{i_k}} ≥ 0    (1.2)

everywhere on (0, ∞)^n, for all k ≥ 0 and all choices of indices i_1, . . . , i_k. Theorem 1.1 can then be rephrased as follows:

Theorem 1.1′  Let G = (V, E) be a connected series-parallel graph, and let T_G(x) be its spanning-tree polynomial. Then T_G^{−β} is completely monotone on (0, ∞)^E for all β ≥ 1/2.
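As a quick numerical sanity check of this definition (ours, for illustration; it is a spot check at one point, not a proof), one can verify the first few alternating-sign conditions for the 3-cycle C_3, where T_{C_3}(x) = x_1 x_2 + x_1 x_3 + x_2 x_3 and β = 1/2, by central finite differences:

```python
def f(x1, x2, x3):
    # T_{C_3}(x)^(-1/2): inverse square root of the 3-cycle spanning-tree polynomial
    return (x1*x2 + x1*x3 + x2*x3) ** -0.5

h = 1e-4
a, b, c = 1.0, 2.0, 3.0

# k = 1 in the definition: a first partial derivative (central difference)
d1 = (f(a + h, b, c) - f(a - h, b, c)) / (2 * h)

# k = 2: a pure and a mixed second partial derivative
d11 = (f(a + h, b, c) - 2 * f(a, b, c) + f(a - h, b, c)) / h**2
d12 = (f(a + h, b + h, c) - f(a + h, b - h, c)
       - f(a - h, b + h, c) + f(a - h, b - h, c)) / (4 * h**2)
```

The definition requires d1 ≤ 0 while d11 and d12 are ≥ 0, and indeed the computed values have these signs.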
Allowing arbitrary constants c > 0 thus allows the result to be formulated in terms of complete monotonicity, and leads to a characterization that is both necessary and sufficient. Szegő's result (or rather, its generalization to arbitrary c) extends to series-parallel graphs and no farther.
Determinants
But this is not the end of the matter: we can go far beyond series-parallel graphs if we relax our demands about the set of β for which T_G^{−β} is asserted to be completely monotone. The key here is Kirchhoff's matrix-tree theorem [76,28,88,85,35,34,33,129,86,1], which shows how spanning-tree polynomials can be written as determinants. This line of thought suggests that complete monotonicity of P^{−β} might hold more generally for the homogeneous multiaffine polynomials arising from determinants of the type studied in [37, Section 8.1]. This too is true; in fact, such a result holds for a slightly more general class of polynomials that need not be multiaffine. We shall prove, once again by elementary methods:

Theorem 1.2  Let A_1, . . . , A_n (n ≥ 1) be m × m real symmetric, complex hermitian, or quaternionic hermitian positive-semidefinite matrices, and let us form the polynomial

P(x_1, . . . , x_n) = det( \sum_{i=1}^{n} x_i A_i )    (1.4)

in the variables x = (x_1, . . . , x_n). [In the quaternionic case, det denotes the Moore determinant: see [108, Appendix A].] Assume further that there exists a linear combination of A_1, . . . , A_n that has rank m (so that P ≢ 0). Then:
(a) In the real case, P^{−β} is completely monotone on (0, ∞)^n for β = 0, 1/2, 1, 3/2, . . . and for all real β ≥ (m − 1)/2.
(b) In the complex case, P^{−β} is completely monotone on (0, ∞)^n for β = 0, 1, 2, . . . and for all real β ≥ m − 1.
(c) In the quaternionic case, P^{−β} is completely monotone on (0, ∞)^n for β = 0, 2, 4, . . . and for all real β ≥ 2(m − 1).
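Kirchhoff's matrix-tree identity is easy to check by machine on a small example. The following self-contained sketch (ours, for illustration) verifies, for the 4-cycle C_4 and its directed incidence matrix B with one row deleted, that det(BXB^T) reproduces the spanning-tree polynomial, whose spanning trees are exactly the four 3-edge subsets:

```python
from fractions import Fraction

# Directed incidence matrix of C_4 (edges 1->2, 2->3, 3->4, 4->1),
# with the row of vertex 4 deleted.
B = [[1, 0, 0, -1],
     [-1, 1, 0, 0],
     [0, -1, 1, 0]]

def det3(m):
    # Determinant of a 3x3 matrix by cofactor expansion
    return (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
            - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
            + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))

def weighted_laplacian(x):
    # M = B X B^T with X = diag(x): the reduced weighted Laplacian of C_4
    return [[sum(B[a][i] * x[i] * B[b][i] for i in range(4)) for b in range(3)]
            for a in range(3)]

def tree_poly(x):
    # T_{C_4}(x): each spanning tree of the 4-cycle drops exactly one edge
    total = Fraction(0)
    for drop in range(4):
        p = Fraction(1)
        for i in range(4):
            if i != drop:
                p *= x[i]
        total += p
    return total
```

Exact rational arithmetic (`Fraction`) makes the comparison an identity check rather than a floating-point approximation.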
These curious conditions on β are not just an artifact of our method of proof; they really are best possible. They can be better understood if we take a slightly more general perspective, and define complete monotonicity for functions on an arbitrary open convex cone C in a finite-dimensional real vector space V (see Section 2). We then have the following result (Theorem 1.3), which "explains" Theorem 1.2. In particular, if the matrices A_1, . . . , A_n together span Sym(m, R), Herm(m, C) or Herm(m, H) [so that the convex cone they generate has nonempty interior], then the determinantal polynomial (1.4) has P^{−β} completely monotone on (0, ∞)^n if and only if β belongs to the set enumerated in Theorem 1.2.^3

The proof of the "if" part of Theorem 1.3 is completely elementary, but the "only if" part again relies on a deep result from harmonic analysis on Euclidean Jordan algebras, namely, the characterization of parameters for which the Riesz distribution is a positive measure (Theorem 4.8 below; but see [109,31] [113] and [108, Appendix B] for elementary proofs). In fact, when Theorem 1.3 is rephrased in this latter context it takes on a unified form (Theorem 1.4). We shall see that Theorem 1.4 is essentially equivalent to the characterization of parameters for which the Riesz distribution is a positive measure. The set of values of β described in Theorem 1.4 is known as the Gindikin-Wallach set and arises in a number of contexts in representation theory [16,60,104,124,79,51,52,53].
A special case of the construction (1.4) arises [37, Section 8.1] when B is an m × n real or complex matrix of rank m, and we set P(x) = det(BXB*), where X = diag(x_1, . . . , x_n) and * denotes hermitian conjugate. Then the matrix A_i in (1.4) is simply the outer product of the ith column of B with its complex conjugate, and so is of rank at most 1; as a consequence, the polynomial P is multiaffine (i.e., of degree at most 1 in each variable separately).^4

In particular, let G = (V, E) be a connected graph, and define its spanning-tree polynomial T_G(x) by

T_G(x) = \sum_{T \in \mathcal{T}(G)} \prod_{e \in T} x_e ,    (1.5)

where x = {x_e}_{e∈E} is a family of indeterminates indexed by the edges of G, and \mathcal{T}(G) denotes the family of edge sets of spanning trees in G. Now let B be the directed vertex-edge incidence matrix for an arbitrarily chosen orientation of G, with one row (corresponding to an arbitrarily chosen vertex of G) deleted; then the matrix-tree theorem [76,28,88,85,35,34,33,129,86,1,37] tells us that T_G(x) = det(BXB^T). Applying Theorem 1.2(a), we obtain:

Corollary 1.5  Let G = (V, E) be a connected graph with p vertices, and let T_G(x) be its spanning-tree polynomial. Then T_G^{−β} is completely monotone on (0, ∞)^E for β = 0, 1/2, 1, 3/2, . . . and for all real β ≥ (p − 2)/2.

Likewise, we can apply Theorem 1.2(b) to the elementary symmetric polynomial of degree 2 in four variables, E_{2,4}(x) = \sum_{1 ≤ i < j ≤ 4} x_i x_j, which can be represented in the form (1.4), or equivalently as E_{2,4}(x) = det(BXB*) with

B = ( 1  0  1  1 ;  0  1  1  e^{iπ/3} ) .

We obtain:

Corollary 1.6  The function E_{2,4}^{−β} is completely monotone on (0, ∞)^4 if and only if β = 0 or β ≥ 1. In particular, the function

E_{2,4}(1 − y_1, 1 − y_2, 1 − y_3, 1 − y_4)^{−β}    (1.8)

has nonnegative Taylor coefficients for all β ≥ 1.
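The representation E_{2,4}(x) = det(BXB*) with the 2 × 4 matrix B given above can be verified numerically (an illustrative check, ours, not part of the proof):

```python
import cmath
from itertools import combinations

w = cmath.exp(1j * cmath.pi / 3)   # e^{i pi/3}
B = [[1, 0, 1, 1],
     [0, 1, 1, w]]

def det_BXBstar(x):
    # M = B X B^* is a 2x2 hermitian matrix, so its determinant is real
    M = [[sum(B[a][i] * x[i] * complex(B[b][i]).conjugate() for i in range(4))
          for b in range(2)]
         for a in range(2)]
    return (M[0][0] * M[1][1] - M[0][1] * M[1][0]).real

def e24(x):
    # Elementary symmetric polynomial of degree 2 in 4 variables
    return sum(x[i] * x[j] for i, j in combinations(range(4), 2))
```

The check exploits that |1|^2 = |e^{iπ/3}|^2 = 1 and the off-diagonal entry of BXB* is x_3 + e^{−iπ/3} x_4, whose squared modulus is x_3^2 + x_3 x_4 + x_4^2; subtracting it from the product of the diagonal entries leaves exactly E_{2,4}(x).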
The second sentence of Corollary 1.6 answers in the affirmative a question posed long ago by Lewy [8, p. 340], of which Askey remarks that it "has caused me many hours of frustration" [7, p. 56].^5 (See also the recent discussion in [75].) Indeed, Lewy's question concerned only β = 1, and made the weaker conjecture that the function (1.8) multiplied by (4 − y_1 − y_2 − y_3 − y_4)^{−1} has nonnegative Taylor coefficients. This latter factor is now seen to be unnecessary.^6

Similarly, Theorem 1.2(c) applied to the quaternionic determinant

det ( a  q ; \bar{q}  b ) = ab − q\bar{q}    for a, b ∈ R and q ∈ H,

with A_1, . . . , A_4 as above, yields an analogous result for the elementary symmetric polynomial of degree 2 in six variables^7:

Corollary 1.7  The function E_{2,6}^{−β} is completely monotone on (0, ∞)^6 if and only if β = 0 or β ≥ 2.

Corollaries 1.5 and 1.6 are in fact special cases of a much more general result concerning the basis generating polynomials B_M(x) of certain classes of matroids. (We stress that no knowledge of matroid theory is needed to understand the main arguments of this paper; readers allergic to matroids, or simply unfamiliar with them, can skip all references to them without loss of logical continuity. Still, we think that the matroidal perspective is fruitful and we would like to make some modest propaganda for it.)

Now let B be an arbitrary m × n real or complex matrix of rank m, and define P(x) = det(BXB*).

^5 Askey [7, p. 56] comments that, in his view, "So far the most powerful method of treating problems of this type is to translate them into another problem involving special functions and then use the results and methods which have been developed for the last two hundred years to solve the special function problem. So far I have been unable to make a reduction in [Lewy's problem] and so have no place to start." But he immediately adds, wisely, that "it is possible to solve some problems without using special functions, so others should not give up on [Lewy's problem]."

^6 Ismail and Tamhankar [73, p. 483] mistakenly asserted that "the early coefficients in the power series expansion of [this function] are positive but the later coefficients do change sign", arguing that this is "because Huygen's [sic] principle holds in three-space." Huygens' principle indeed suggests that the coefficients approach zero, as Askey and Gasper [8, p. 340] observed; but this in no way contradicts the nonnegativity of those coefficients.

^7 If we define q_3 = 1 and q_4 = e^{−iπ/3} [cf. (1.7)], then q_3, q_4, q_5, q_6 are quaternions satisfying |q_i|^2 = 1 and |q_i − q_j|^2 = 1 for all i ≠ j. From this it easily follows that det( \sum_{i=1}^{6} x_i A_i ) = E_{2,6}(x).
Then, as discussed previously, Theorem 1.2(a or b) applies to P and gives a sufficient condition for P^{−β} to be completely monotone. On the other hand, the Cauchy-Binet formula gives

P(x) = \sum_{S ⊆ [n], |S| = m} |det B_{⋆S}|^2 \prod_{i∈S} x_i ,

where B_{⋆S} denotes the submatrix of B with columns S. Since det B_{⋆S} ≠ 0 if and only if the columns S of B are linearly independent, we see that P is a weighted version of the basis generating polynomial for the matroid M = M[B] that is represented by B (this matroid has rank m). In particular, a matroid is said to be real-unimodular (resp. complex-unimodular) if it has a real (resp. complex) representing matrix B, with a number of rows equal to its rank, such that |det B_{⋆S}|^2 ∈ {0, 1} for all S.^9 In this case the basis generating polynomial is precisely B_M(x) = det(BXB*). We thereby obtain from Theorem 1.2(a,b) the following result:

Corollary 1.8  Let M be a matroid of rank r on the ground set E, and let B_M(x) be its basis generating polynomial.
(a) If M is real-unimodular, then B_M^{−β} is completely monotone on (0, ∞)^E for β = 0, 1/2, 1, 3/2, . . . and for all real β ≥ (r − 1)/2.
(b) If M is complex-unimodular, then B_M^{−β} is completely monotone on (0, ∞)^E for β = 0, 1, 2, . . . and for all real β ≥ r − 1.
In particular, by specializing (a) to a graphic matroid M(G) we recover Corollary 1.5, and by specializing (b) to the uniform matroid U 2,4 we recover Corollary 1.6.
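The β = 1 case of Corollary 1.6 (Lewy's original question) lends itself to machine verification of low-order Taylor coefficients: the series of E_{2,4}(1 − y_1, . . . , 1 − y_4)^{−1} can be computed by exact series inversion and its coefficients checked for nonnegativity up to a chosen total degree (here 4). This is a spot check of low-order coefficients only, ours for illustration, not a proof:

```python
from fractions import Fraction
from itertools import combinations, product

N, D = 4, 4  # four variables, truncate at total degree D

# Q(y) = E_{2,4}(1-y_1,...,1-y_4) as a polynomial {exponent tuple: coefficient}
Q = {}
def bump(e, c):
    Q[e] = Q.get(e, Fraction(0)) + c

for i, j in combinations(range(N), 2):
    # (1 - y_i)(1 - y_j) = 1 - y_i - y_j + y_i y_j
    for exps, c in [((), 1), ((i,), -1), ((j,), -1), ((i, j), 1)]:
        e = [0] * N
        for k in exps:
            e[k] = 1
        bump(tuple(e), Fraction(c))

# Series inversion: f = 1/Q with f_0 = 1/Q_0 and, for a != 0,
# f_a = -(1/Q_0) * sum over 0 < g <= a of Q_g * f_{a-g}
alphas = sorted((a for a in product(range(D + 1), repeat=N) if sum(a) <= D),
                key=sum)
f = {}
for a in alphas:
    if sum(a) == 0:
        f[a] = 1 / Q[a]
        continue
    s = Fraction(0)
    for g in Q:
        if sum(g) == 0 or any(g[k] > a[k] for k in range(N)):
            continue
        s += Q[g] * f[tuple(a[k] - g[k] for k in range(N))]
    f[a] = -s / Q[(0,) * N]
```

Every coefficient computed this way comes out nonnegative, as Corollary 1.6 asserts for all degrees.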
We have also proven a (very) partial converse to Corollary 1.8, which concerns the cases of rank-r n-element simple matroids in which the matrices A 1 , . . . , A n together span Sym(r, R) or Herm(r, C): see Proposition 7.9 below.
A deeper study of quaternionic-unimodular matroids would be of interest. For instance, is the class of quaternionic-unimodular matroids closed under duality? Or even under contraction? (The class is obviously closed under deletion.) Which uniform matroids U r,n are quaternionic-unimodular?
A different notion of "quaternionic-unimodular matroid" has been introduced recently by Pendavingh and van Zwam [98]. It is not clear to us what is the relation between their notion and ours.
Quadratic forms
Of course, E 2,4 and E 2,6 are quadratic forms in the variables x, as is the polynomial E 2,3 arising in the n = 3 Szegő problem. This suggests that it might be fruitful to study more general quadratic forms. We shall prove, by elementary methods: Theorem 1.9 Let V be a finite-dimensional real vector space, let B be a symmetric bilinear form on V having inertia (n + , n − , n 0 ), and define the quadratic form Q(x) = B(x, x). Let C ⊂ V be a nonempty open convex cone with the property that Q(x) > 0 for all x ∈ C. Then n + ≥ 1, and moreover: (a) If n + = 1 and n − = 0, then Q −β is completely monotone on C for all β ≥ 0.
(b) If n_+ = 1 and n_− ≥ 1, then Q^{−β} is completely monotone on C for β = 0 and for all real β ≥ (n_− − 1)/2. For all other values of β, Q^{−β} is not completely monotone on any nonempty open convex subcone C′ ⊆ C.
(c) If n_+ > 1, then Q^{−β} is not completely monotone on any nonempty open convex subcone C′ ⊆ C for any β ≠ 0.

Theorem 1.9 follows fairly easily from the classic work of Marcel Riesz [102] (see also [48] and [53, Chapter VII]) treating the case in which B is the Lorentz form on R^n,

B(x, y) = x_1 y_1 − x_2 y_2 − . . . − x_n y_n ,    (1.12)

and C is the Lorentz cone (= forward light cone) {x ∈ R^n : x_1 > \sqrt{x_2^2 + . . . + x_n^2}}. We are able to give a completely elementary proof of both the sufficiency and the necessity; and we are able to give in case (b) an explicit Laplace-transform formula for Q^{−β} (see Proposition 5.6).
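Theorem 1.9 can be illustrated on the quadratic form E_{2,4} (a small exact computation, ours, not from the paper): its matrix is A = (J − I)/2, where J is the all-ones matrix, and exhibiting an explicit eigenbasis shows the inertia is (n_+, n_−, n_0) = (1, 3, 0); the threshold (n_− − 1)/2 = 1 then matches the bound β ≥ (n − 2)/2 for E_{2,n} at n = 4:

```python
from fractions import Fraction

n = 4
half = Fraction(1, 2)
# Matrix of the quadratic form E_{2,4}: zero diagonal, 1/2 off the diagonal
A = [[Fraction(0) if i == j else half for j in range(n)] for i in range(n)]

def matvec(v):
    return [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]

# Eigenbasis: the all-ones vector (eigenvalue 3/2) together with the
# difference vectors e_1 - e_k (eigenvalue -1/2, multiplicity 3)
ones = [Fraction(1)] * n
```

Since these four vectors span R^4, the pivot-free eigenvalue check below establishes inertia (1, 3, 0) exactly, with no floating-point error.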
Specializing Theorem 1.9 with V = R^n and C = (0, ∞)^n to the degree-2 elementary symmetric polynomials we obtain:

Corollary 1.10  The function E_{2,n}^{−β} is completely monotone on (0, ∞)^n if and only if β = 0 or β ≥ (n − 2)/2.

By this method we obtain an alternate proof of Corollaries 1.6 and 1.7, hence in particular a second solution to the Lewy-Askey problem, as well as of Szegő's [116] original result in the case n = 3.^10 We also obtain an explicit Laplace-transform formula for E_{2,n}^{−β} (see Corollary 5.8).
Remark. It is easy to see that E_{2,n} is the spanning-tree polynomial of a graph only if n = 2 or 3: a connected graph G whose spanning-tree polynomial is of degree 2 must have precisely three vertices; if G has multiple edges, then T_G ≠ E_{2,n} because monomials corresponding to pairs of parallel edges are absent from T_G; so G must be either the 3-vertex path or the 3-cycle, corresponding to E_{2,2} or E_{2,3}, respectively. But this fact can also be seen from our results: Corollary 1.5 says that T_G^{−1/2} is completely monotone for all graphs G, while Corollary 1.10 says that E_{2,n}^{−1/2} is not completely monotone when n > 3.

Corollaries 1.6, 1.7 and 1.10 lead naturally to the following question: If we write E_{r,n} for the elementary symmetric polynomial of degree r in n variables,

E_{r,n}(x_1, . . . , x_n) = Σ_{1 ≤ i_1 < i_2 < · · · < i_r ≤ n} x_{i_1} x_{i_2} · · · x_{i_r}   (1.14)

(where we set E_{0,n} ≡ 1), then for which β > 0 is E_{r,n}^{−β} completely monotone on (0, ∞)^n ? The cases r = 0, 1 and n are trivial: we have complete monotonicity for all β ≥ 0. Our results for the cases r = n − 1 (Theorem 1.1′ specialized to cycles C_n) and r = 2 (Corollary 1.10), as well as numerical experiments for (r, n) = (3, 5), (3, 6) and (4, 6), suggest the following conjecture:

Conjecture 1.11 Let 2 ≤ r ≤ n. Then E_{r,n}^{−β} is completely monotone on (0, ∞)^n if and only if β = 0 or β ≥ (n − r)/2.

However, we have been unable to find a proof of either the necessity or the sufficiency.

10 In fancy language - which is, however, completely unnecessary for understanding our proofs - our "determinantal" proof of Corollary 1.6 is based on harmonic analysis on the cone of positive-definite m × m complex hermitian matrices specialized to m = 2, while our "quadratic form" proof is based on harmonic analysis on the Lorentz cone in R^n specialized to n = 4. The point here is that the Jordan algebra Herm(2, C) ≃ R × R³ can be viewed as a member of two different families of Jordan algebras: Herm(m, C) and R × R^{n−1} [53, p. 98]. Likewise, our "determinantal" proof of Corollary 1.7 is based on harmonic analysis on the cone of positive-definite m × m quaternionic hermitian matrices specialized to m = 2, while our "quadratic form" proof is based on harmonic analysis on the Lorentz cone in R^n specialized to n = 6; and we have the isomorphism of Jordan algebras Herm(2, H) ≃ R × R⁵ [53, p. 98]. And finally, our "determinantal" proof of the n = 3 Szegő result is based on harmonic analysis on the cone of positive-definite m × m real symmetric matrices specialized to m = 2, while our "quadratic form" proof is based on harmonic analysis on the Lorentz cone in R^n specialized to n = 3; and we have Sym(2, R) ≃ R × R² [53, p. 98].
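Numerical experiments of the kind mentioned above can be sketched as follows. Since a function that is completely monotone on (0, ∞)^n is the Laplace transform of a positive measure supported on [0, ∞)^n, the function F(1 − z) must have all nonnegative Taylor coefficients at z = 0; a negative coefficient therefore certifies failure of complete monotonicity. The Python/sympy helper below is our own illustration (not the authors' code), shown here on the Lewy-Askey case E_{2,4}^{−1}:

```python
import sympy as sp

def taylor_coeffs_neg_power(P, zs, beta, N):
    """Taylor coefficients, up to total degree N around z = 0, of P(z)**(-beta),
    assuming P(0) > 0.  Uses the binomial series (1 + w)^(-beta) with
    w = P/P(0) - 1; since w has no constant term, truncating at m = N is exact
    for all coefficients of total degree <= N."""
    P0 = P.subs({z: 0 for z in zs})
    w = sp.expand(P / P0 - 1)
    series = sum(sp.binomial(-beta, m) * w**m for m in range(N + 1))
    poly = sp.Poly(sp.expand(series), *zs)
    return {m: sp.simplify(c * P0**(-beta))
            for m, c in poly.terms() if sum(m) <= N}

# E_{2,4} evaluated at 1 - z, i.e. the sum over pairs (1 - z_i)(1 - z_j):
z = sp.symbols('z1:5')
E24 = sum((1 - z[i]) * (1 - z[j]) for i in range(4) for j in range(i + 1, 4))
coeffs = taylor_coeffs_neg_power(E24, z, sp.Rational(1), 3)
# beta = 1 = (n-2)/2 is the Lewy-Askey case: all coefficients should be >= 0.
assert all(c >= 0 for c in coeffs.values())
```

For β below the conjectured threshold one expects negative coefficients to appear, but possibly only at high order, so a finite search of this kind is evidence rather than proof.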
We remark that the elementary symmetric polynomial E r,n is the basis generating polynomial of the uniform matroid U r,n . So Corollary 1.10 and Conjecture 1.11 concern the same general subject as Corollary 1.8, namely, complete monotonicity for inverse powers of the basis generating polynomials of matroids.
Discussion
In summary, we have two ab initio methods for proving, given a polynomial P and a positive real number β, that P^{−β} is completely monotone on (0, ∞)^n [or more generally on a convex cone C]: the determinantal construction of Section 4, and the quadratic-form construction of Section 5. Interestingly, these two methods can be viewed as versions of the same construction, involving the determinant on a Euclidean Jordan algebra and the Laplace-transform representation of its inverse powers [53, Chapters II-VII]. We discuss this connection in Sections 4.3 and 5.2.
In addition to these two ab initio methods, we have a variety of constructions that, given such polynomials, can create other ones with the same property (see Section 3). Among these are algebraic analogues of the graph (or matroid) operations of deletion, contraction, direct sum 11, parallel connection, series connection and 2-sum (but not duality). By combining these operations with our ab initio proofs, we are able to prove the complete monotonicity of T_G^{−β} for some values of β beyond those covered by Corollary 1.5:

Proposition 1.12 Fix p ≥ 2, and let G = (V, E) be any graph that can be obtained from copies of the complete graph K_p by parallel connection, series connection, direct sum, deletion and contraction. Then T_G^{−β} is completely monotone on (0, ∞)^E for β = 0, 1/2, 1, 3/2, . . . and for all real β ≥ (p − 2)/2.
In particular, the case p = 3 covers series-parallel graphs; this is essentially our proof of the direct half of Theorem 1.1′. We also have versions of this proposition for matroids: see Propositions 7.14 and 7.15 below. Finally, in Propositions 7.16 and 7.17 we give excluded-minors characterizations of the class of graphs/matroids handled by Propositions 1.12 and 7.14, respectively. But even in the graphic case, we are still far from having a complete answer to the following fundamental problem:

Problem 1.13 For which pairs (G, β) is T_G^{−β} completely monotone on (0, ∞)^E ?

This question can be rephrased usefully as follows:

Problem 1.13′ For each β > 0, determine the class G_β of graphs G for which T_G^{−β} is completely monotone on (0, ∞)^E.

We will show in Section 7.1 that the class G_β is closed under minors - so that it can be characterized by listing the excluded minors - and under parallel connection. Furthermore, it is closed under series connection when (but only when) β ≥ 1/2. In this paper we have solved Problem 1.13′ in a few cases:

• For β ∈ {1/2, 1, 3/2, . . .}, G_β = all graphs. See Corollary 1.8(a).
• For β ∈ (0, 1/2), G_β = graphs obtained from forests by parallel extension of edges (= graphs with no K_3 minor). See Theorem 7.10.
• For β ∈ (1/2, 1), G_β = graphs with no K_4 minor (= series-parallel graphs). See Theorem 1.1′.
So the first unsolved cases are β ∈ (1, 3/2): Might it be true that G_β = all graphs with no K_5 minor? Or might there exist, alternatively, other excluded minors? We have been thus far unable to determine the complete monotonicity of T_G^{−β} for the cases G = W_4 (the wheel with four spokes) and G = K_5 − e (the complete graph K_5 with one edge deleted). Indeed, for k ≥ 2 we do not even know the answer to the following question:

Let us mention, finally, an alternative approach to "converse" results that we have not pursued, for lack of competence. When P^{−β} does not have all nonnegative Taylor coefficients, this fact should in most cases be provable either by explicit computation of low-order coefficients or by asymptotic computation of suitable families of high-order coefficients (computer experiments can usually suggest which families to focus on). This type of multivariate asymptotic calculation has been pioneered recently by Pemantle and collaborators [95,96,97,94,15] and involves some rather nontrivial algebraic geometry/topology. In fact, Baryshnikov and Pemantle [15, Section 4.4] have recently used their method to study the asymptotics of the Taylor coefficients of P_n^{−β} for the Szegő polynomial (1.1) with n = 3, but thus far only for β > 1/2. 12 It would be interesting to know whether this analysis can be extended to the case β < 1/2, thereby providing an explicit proof that some of the Taylor coefficients are asymptotically negative. More generally, one might try to study the elementary symmetric polynomials E_{r,n}: after the n = 3 Szegő case E_{2,3}, the next simplest would probably be the Lewy-Askey case E_{2,4} [i.e., (1.8)].
Some further remarks
The half-plane property. Let us recall that a polynomial P with complex coefficients is said to have the half-plane property [37,36,120,24,23,123,26,122] if either P ≡ 0 or else P(x_1, . . . , x_n) ≠ 0 whenever x_1, . . . , x_n are complex numbers with strictly positive real part. 13 We shall show (Corollary 2.3 below) that if P is a polynomial with real coefficients that is strictly positive on (0, ∞)^n and such that P^{−β} is completely monotone on (0, ∞)^n for at least one β > 0, then P necessarily has the half-plane property (but not conversely). The complete monotonicity of P^{−β} can therefore be thought of as a strong quantitative form of the half-plane property. In particular, it follows that the determinantal polynomials considered in Theorem 1.2 have the half-plane property - a fact that can easily be proven directly (Corollary 4.5 below). The same is true for the quadratic polynomials considered in Theorem 1.9: see [37, Theorem 5.3] and Theorem 5.4 below.
The Rayleigh property. Complete monotonicity is also connected with the Rayleigh property [38] for matroids and, more generally, for multiaffine polynomials. Let us say that a function f is completely monotone of order K if the inequalities (1.3) hold for 0 ≤ k ≤ K. Thus, a function is completely monotone of order 0 (resp. 1) if and only if it is nonnegative (resp. nonnegative and decreasing). A function is completely monotone of order 2 if, in addition, ∂²f/∂x_i∂x_j ≥ 0 for all i, j. Specializing this to f = P^{−β} where P is a polynomial, we obtain

P(x) ∂²P/∂x_i∂x_j (x) ≤ (β + 1) (∂P/∂x_i)(x) (∂P/∂x_j)(x)   (1.15)

for x ∈ (0, ∞)^n. If P is multiaffine, then ∂²P/∂x_i² = 0, so it suffices to consider the cases i ≠ j. The inequality (1.15) is then a generalization of the Rayleigh (or negative-correlation) inequality in which an extra constant C = β + 1 is inserted on the right-hand side. (The ordinary Rayleigh property corresponds to β ↓ 0, hence to taking f = − log P and omitting the k = 0 condition.) It would be interesting to know whether the combinatorial consequences of the Rayleigh property - such as the matroidal-support property [121] - extend to the C-Rayleigh property for arbitrary C < ∞. It would also be interesting to extend the results of the present paper, which address complete monotonicity of order ∞, to complete monotonicity of finite orders K. In what way do the conditions on β become K-dependent?

12 The formula in their Theorem 4.4 has a misprint: the power −1/2 should be β − 3/2.
13 A polynomial P ≢ 0 with the half-plane property is also termed Hurwitz stable.
Connected-spanning-subgraph polynomials. Let us remark that the literature contains some other examples of multivariate polynomials P for which P^{−β} has all nonnegative Taylor coefficients, for some specified set of numbers β. For instance, Askey and Gasper [9] showed that this is the case for the trivariate polynomial (1.16); Gillis, Reznick and Zeilberger [58] later gave an elementary proof. Likewise, Koornwinder [77] proved this for the quadrivariate polynomial (1.17) whenever β ≥ 1; an elementary proof later emerged from the combined work of Ismail and Tamhankar [73] and Gillis-Reznick-Zeilberger [58]. It turns out that these two examples also have a combinatorial interpretation: not in terms of the spanning-tree polynomial T_G(x), but rather in terms of the connected-spanning-subgraph polynomial [105,112]

C_G(v) = Σ_{S ⊆ E : (V, S) connected and spanning} Π_{e ∈ S} v_e ,   (1.18)

which has T_G(x) as a limiting case:

T_G(x) = lim_{λ→0} λ^{−(|V|−1)} C_G(λx) .   (1.19)

If we specialize to G = C_n and make the change of variables v_i = −λ(1 − z_i) with 0 < λ < n - thus defining P_{G,λ}(z) = C_G(−λ(1 − z))/C_G(−λ1) - it then turns out that the Askey-Gasper polynomial (1.16) corresponds to the case n = 3, λ = 1, while the Koornwinder polynomial (1.17) corresponds to the case n = 4, λ = 2. On the other hand, in the limit λ → 0 we recover a multiple of the Szegő polynomial (1.1); this is simply a special case of (1.19).
In the same way that the complete monotonicity of T −β G is a strong quantitative form of the half-plane property, it turns out that the nonnegativity of Taylor coefficients of P −β G,λ in these examples is a strong quantitative form of the multivariate Brown-Colbourn property (or more precisely, the multivariate property BC λ ) discussed in [105,112]. But it seems to be a difficult problem to determine the set of pairs (λ, β) for which P −β G,λ has nonnegative Taylor coefficients, even in the simplest case G = C 3 . We have some partial results on this problem, but we leave these for a future paper.
Plan of this paper
The plan of this paper is as follows: In Section 2 we define complete monotonicity on cones and recall the Bernstein-Hausdorff-Widder-Choquet theorem; we also prove a general result showing that complete monotonicity of P −β on a cone C ⊂ V implies the nonvanishing of P in the complex tube C + iV . In Section 3 we discuss some general constructions by which new polynomials P with P −β completely monotone can be obtained from old ones. In Section 4 we present the determinantal construction and prove Theorems 1.2, 1.3 and 1.4. In Section 5 we present the quadratic-form construction and prove Theorem 1.9. In Section 6 we present briefly the theory of positive-definite functions (in the semigroup sense) on convex cones -which is a close relative of the theory of completely monotone functions -and its application to the class of cones treated here. Finally, in Section 7 we apply the results of Sections 2-5 to the spanning-tree polynomials of graphs and the basis generating polynomials of matroids; in particular we analyze the series-parallel case and prove Theorem 1.1.
In the arXiv version of the paper we include two appendices that are being omitted from this journal version due to space constraints: Appendix A reviewing the definition and main properties of the Moore determinant for hermitian quaternionic matrices, and Appendix B explaining an elementary proof of Gindikin's characterization of parameters for which the Riesz distribution is a positive measure.
We have tried hard to make this paper comprehensible to the union (not the intersection!) of combinatorialists and analysts. We apologize in advance to experts in each of these fields for boring them every now and then with overly detailed explanations of elementary facts.
Complete monotonicity on cones
In the Introduction we defined complete monotonicity for functions on (0, ∞)^n. For our later needs (see Sections 4 and 5), it turns out to be natural to consider complete monotonicity on more general open convex cones C ⊂ R^n. This is a genuine generalization, because for n ≥ 3, an open convex cone is not necessarily the affine image of a (possibly higher-dimensional) orthant, i.e. it need not have "flat sides": an example is the Lorentz cone {x ∈ R^n : x_1 > √(x_2² + · · · + x_n²)}.

Definition 2.1 Let V be a finite-dimensional real vector space, and let C be an open convex cone in V. Then a C^∞ function f : C → R is termed completely monotone if for all k ≥ 0, all choices of vectors u_1, . . . , u_k ∈ C, and all x ∈ C, we have

(−1)^k (D_{u_1} D_{u_2} · · · D_{u_k} f)(x) ≥ 0 ,   (2.1)

where D_u denotes a directional derivative. A function f is termed conditionally completely monotone if the inequality (2.1) holds for all k ≥ 1 but not necessarily for k = 0. 14

Of course, if the inequality (2.1) holds for all u_1, . . . , u_k in some set S, then by linearity and continuity it holds also for all u_1, . . . , u_k in the closed convex cone generated by S. This observation also shows the equivalence of Definition 2.1, specialized to the case V = R^n and C = (0, ∞)^n, with the definition given in the Introduction.
Note also that if f is completely monotone on C and T : R^n → V is a linear map with T[(0, ∞)^n] ⊆ C, then f ∘ T is completely monotone on (0, ∞)^n. Conversely, if f ∘ T is completely monotone on (0, ∞)^n for all such maps T, for arbitrarily large n, then f is completely monotone: for if (2.1) fails for some k, then we can take n = k and T e_i = u_i (where e_i is the ith coordinate unit vector in R^n), and f ∘ T will fail one of the kth-order complete-monotonicity inequalities.
Let us next recall some elementary facts. If f is completely monotone, then f (0 + ) = lim x→0,x∈C f (x) exists and equals sup x∈C f (x), but it might be +∞. The product of two completely monotone functions is completely monotone. If f is completely monotone and Φ: [0, ∞) → [0, ∞) is absolutely monotone (i.e. its derivatives of all orders are everywhere nonnegative), then Φ • f is completely monotone. If f is conditionally completely monotone and Φ: (−∞, ∞) → [0, ∞) is absolutely monotone, then Φ • f is completely monotone. (In particular, this occurs when Φ is the exponential function.) Finally, a locally uniform limit of a sequence of completely monotone functions is completely monotone.
The fundamental fact in the theory of completely monotone functions on (0, ∞) is the Bernstein-Hausdorff-Widder theorem [127]: A function f defined on (0, ∞) is completely monotone if and only if it can be written in the form

f(x) = ∫_{[0,∞)} e^{−tx} dµ(t) ,   (2.2)

where µ is a nonnegative Borel measure on [0, ∞). We shall need a multidimensional version of the Bernstein-Hausdorff-Widder theorem, valid for arbitrary cones. Such a result was proven by Choquet [39] 15 :

Theorem 2.2 (Bernstein-Hausdorff-Widder-Choquet theorem) Let V be a finite-dimensional real vector space, let C be an open convex cone in V, and let C* = {ℓ ∈ V* : ⟨ℓ, x⟩ ≥ 0 for all x ∈ C} be the closed dual cone. Then a function f : C → R is completely monotone if and only if there exists a positive measure µ on C* satisfying

f(x) = ∫_{C*} e^{−⟨ℓ,x⟩} dµ(ℓ) .   (2.3)

In this case, µ(C*) = f(0+); in particular, µ is finite if and only if f is bounded.

14 Chosen by analogy with "conditionally positive definite matrix" [18,14], with which this concept is in fact closely related [68,69,70,17,18]. (Warning: The book [18] uses the term "negative definite" for what we would call "conditionally negative definite".) Please note that if f is bounded below, then f is conditionally completely monotone if and only if there exists a constant c such that f + c is completely monotone. But there also exist conditionally completely monotone functions that are unbounded below (and hence for which such a constant c cannot exist). Of course, it follows immediately from the definition that f is conditionally completely monotone if and only if −D_u f is completely monotone for all vectors u ∈ C. In the multidimensional case this seems rather difficult to work with; but in the one-dimensional case C = (0, ∞) it says that f is conditionally completely monotone if and only if −f′ is completely monotone.
In particular, if f is completely monotone on C, then it is extendible [using (2.3)] to an analytic function on the complex tube C + iV satisfying

|(D_{u_1} · · · D_{u_k} f)(x + iy)| ≤ (−1)^k (D_{u_1} · · · D_{u_k} f)(x)   (2.4)

for all k ≥ 0, x ∈ C, y ∈ V and u_1, . . . , u_k ∈ C.
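In the one-dimensional case the Bernstein-Hausdorff-Widder representation can be made concrete: for f(x) = x^{−β} one has the familiar Gamma-function formula x^{−β} = Γ(β)^{−1} ∫_0^∞ t^{β−1} e^{−tx} dt, so the representing measure has density t^{β−1}/Γ(β) on [0, ∞). A small numerical sketch (the sample values and the crude trapezoid rule are our own choices):

```python
import math

def f_via_laplace(x, beta, n=300000, hi=30.0):
    # trapezoid approximation of Gamma(beta)^(-1) * integral over t in [0, hi]
    # of t^(beta-1) * exp(-t*x); take beta > 1 so the integrand is continuous at 0
    h = hi / n
    total = 0.0
    for i in range(n + 1):
        t = i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * t ** (beta - 1) * math.exp(-t * x)
    return total * h / math.gamma(beta)

assert abs(f_via_laplace(2.0, 1.5) - 2.0 ** -1.5) < 1e-5
```

The density is nonnegative and supported on the dual cone [0, ∞), exactly as Theorem 2.2 requires.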
Remarks. 1. Since C is nonempty and open, it is not hard to see that

(−1)^k (D_{u_1} · · · D_{u_k} f)(x) = ∫_{C*} ⟨ℓ, u_1⟩ · · · ⟨ℓ, u_k⟩ e^{−⟨ℓ,x⟩} dµ(ℓ)   (2.5)
for all k ≥ 0, all u 1 , . . . , u k ∈ C, and all x ∈ C.
Furthermore, for u_1, . . . , u_k in the closure of C, the left-hand side of (2.5) is either strictly positive for all x ∈ C or else identically zero on C. Of course, the latter case can occur: e.g. for f ≡ constant and k ≥ 1.
2. In our definition of complete monotonicity, the cone C plays two distinct roles: it is the domain on which f is defined, and it provides the direction vectors u i for which the inequalities (2.1) hold. Choquet [39] elegantly separates these roles, and considers functions on an arbitrary open set Ω ⊆ V that are completely monotone with respect to the cone C. He then proves the integral representation (2.3) under the hypothesis Ω + C ⊆ Ω. This is a beautiful generalization, but we shall not need it.
By virtue of Theorem 2.2, one way to test a function f for complete monotonicity is to compute its inverse Laplace transform and ask whether it is nonnegative and supported on C*. Of course, this procedure is not necessarily well-defined, because the inverse Laplace transform need not exist; moreover, if it does exist, it may need to be understood as a distribution in the sense of Schwartz [107] rather than as a pointwise-defined function. But we can say this: If f : C → R is the Laplace transform of a distribution T on V*, then f is completely monotone if and only if T is positive (hence a positive measure) and supported on C*. This follows from the injectivity of the Laplace transform on the space D′(R^n) of distributions [107, p. 306]. Note that the complete monotonicity of f can fail either because T fails to be positive or because T fails to be supported on C*. In the former case, we can conclude that f is not completely monotone on any nonempty open convex subcone C′ ⊆ C. In the latter case, T might possibly be supported on some larger proper cone; but if it isn't (e.g. if the smallest convex cone containing the support of T is all of V*), then once again we can conclude that f is not completely monotone on any nonempty open convex subcone C′ ⊆ C. And finally, if f is not the Laplace transform of any distribution on V*, then it is certainly not the Laplace transform of a positive measure, and hence is not completely monotone on any nonempty open convex subcone C′ ⊆ C.
In the applications to be made in this paper, the functions f will typically be of the form F^{−β}, where F is a function (usually a polynomial) that is strictly positive on the cone C and has an analytic continuation to the tube C + iV (for polynomials this latter condition of course holds trivially). The following corollary of Theorem 2.2 shows that the complete monotonicity of F^{−β} on the real cone C implies the absence of zeros of F in the complex tube C + iV:

Corollary 2.3 Let V be a finite-dimensional real vector space and let C be an open convex cone in V. Let F be an analytic function on the tube C + iV that is real and strictly positive on C. If F^{−β} is completely monotone on C for at least one β > 0, then F is nonvanishing on C + iV. [In particular, when V = R^n and C = (0, ∞)^n, the function F has the half-plane property.]

Proof. Suppose that G = F^{−β} is completely monotone on C; then by Theorem 2.2 it has an analytic continuation to C + iV (call it also G). Now suppose that S = {z ∈ C + iV : F(z) = 0} is nonempty. Choose a simply connected domain D ⊂ (C + iV) \ S such that D ∩ C ≠ ∅ and D̄ ∩ S ≠ ∅. 16 Then H = F^{−β} is a well-defined analytic function on D (we take the branch that is real and positive on D ∩ C). On the other hand, H coincides with G on the real environment D ∩ C, so it must coincide with G everywhere in D. But lim_{z→z_0} |H(z)| = +∞ for all z_0 ∈ D̄ ∩ S, which contradicts the analyticity of G on C + iV.
Remarks. 1. It also follows that the analytic function G = F −β defined on C + iV is nonvanishing there.
2. The hypothesis that F have an analytic continuation to C + iV is essential; it cannot be derived as a consequence of the complete monotonicity of F^{−β} on C. To see this, take V = R and C = (0, ∞) and consider F(x) = (1 + (1/2) e^{−x})^{−1/β} with any β > 0.

16 For instance, let Ω be a simply connected open subset of C whose closure is a compact subset of C and which satisfies (Ω + iV) ∩ S ≠ ∅; fix a norm ‖·‖ on V; and let R = sup{r ≥ 0 : (Ω̄ + i{y ∈ V : ‖y‖ ≤ r}) ∩ S = ∅}. By compactness we must have R > 0 (for otherwise we would have Ω̄ ∩ S ≠ ∅, contrary to the hypothesis that F > 0 on C). Now take D = Ω + iB_R, where B_R = {y ∈ V : ‖y‖ < R}.
3. The converse of Corollary 2.3 is easily seen to be false: for instance, the univariate polynomial P(x) = 1 + x² has the half-plane property (i.e. is nonvanishing for Re x > 0), but P^{−β} is not completely monotone on (0, ∞) for any β > 0. The same holds for the bivariate multiaffine polynomial P(x_1, x_2) = 1 + x_1x_2. So the complete monotonicity of P^{−β} for some β > 0 is strictly stronger than the half-plane property. It would be interesting to know whether similar counterexamples can be found if P is required to be homogeneous, or homogeneous and multiaffine.
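The failure of complete monotonicity for P(x) = 1 + x² is easy to exhibit: the second derivative of (1 + x²)^{−1} is already negative near x = 0. The sympy sketch below (our own illustration; the sample points are arbitrary) tests the sign conditions (−1)^k D^α f ≥ 0 up to a finite order:

```python
import itertools
import sympy as sp

def cm_violations(f, xs, point, K):
    """Multi-indices (as variable tuples, up to total order K) at which the
    completely-monotone sign condition (-1)^k * D^alpha f >= 0 fails at `point`."""
    bad = []
    for k in range(K + 1):
        for combo in itertools.combinations_with_replacement(xs, k):
            expr = f if k == 0 else sp.diff(f, *combo)
            if (-1)**k * expr.subs(point).evalf() < 0:
                bad.append(combo)
    return bad

x, y, z = sp.symbols('x y z', positive=True)

# P(x) = 1 + x^2 has the half-plane property, yet P^(-1) already fails the
# second-order condition: the second derivative of (1+x^2)^(-1) is negative
# near x = 0.
assert cm_violations((1 + x**2)**-1, (x,), {x: sp.Rational(1, 10)}, 2) == [(x, x)]

# By contrast, E_{2,3}^(-1/2) is completely monotone, so no violation appears.
E23 = x*y + y*z + z*x
assert cm_violations(E23**sp.Rational(-1, 2), (x, y, z), {x: 1, y: 2, z: 3}, 3) == []
```

Of course a finite check of this kind can only refute complete monotonicity, never prove it.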
In this paper we will typically consider a polynomial P that is strictly positive on an open convex cone C, and we will ask for which values of β the function P −β is completely monotone. We begin with a trivial observation: If P is a nonconstant polynomial, then P −β cannot be completely monotone on any nonempty open convex cone for any β < 0 (because P grows at infinity in all directions except at most a variety of codimension 1); and P −β is trivially completely monotone for β = 0. So we can restrict attention to β > 0.
Given a function F : C → (0, ∞) - for instance, a polynomial - we can ask about the set

B_F = {β > 0 : F^{−β} is completely monotone on C} .

Clearly B_F is a closed additive subset of (0, ∞). In particular, we either have B_F ⊆ [ε, ∞) for some ε > 0 or else B_F = (0, ∞). The following easy lemma [68,17] characterizes the latter case:

Lemma 2.4 Let V be a finite-dimensional real vector space, let C be an open convex cone in V, and let F : C → (0, ∞). Then the following are equivalent:

(a) F^{−β} is completely monotone on C for all β > 0.
(b) F^{−β} is completely monotone on C for a sequence of values β_k ↓ 0.
(c) − log F is conditionally completely monotone on C.

Proof. (a) =⇒ (b) is trivial. To see (b) =⇒ (c), note that − log F = lim_{k→∞} β_k^{−1} (F^{−β_k} − 1), where the convergence holds locally uniformly for this expression
and its derivatives with respect to x. Finally, (c) =⇒ (a) follows from F^{−β} = exp(−β log F) and the fact that exp is absolutely monotone (i.e. has all derivatives nonnegative) on (−∞, ∞).
Already for C = (0, ∞) it seems to be a difficult problem to characterize in a useful way the functions F described in Lemma 2.4, or even the subclass consisting of polynomials P. 17 For polynomials P(x) = Π_i (1 + x/x_i), a necessary condition from Corollary 2.3 is that P have the half-plane property, i.e. Re x_i ≥ 0 for all i. A sufficient condition is that all x_i be real and positive; and for quadratic polynomials this condition is necessary as well. But already for quartic polynomials the situation becomes more complicated: for instance, we can take x_1 = a + bi, x_2 = a − bi, x_3 = x_4 = c with 0 < c ≤ a and b ∈ R, and it is not hard to see that − log P is conditionally completely monotone on (0, ∞). 18 It also seems to be a difficult problem to characterize the closed additive subsets S ⊆ (0, ∞) that can arise as S = B_F.
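The quartic example can be made concrete: by footnote 18, with x_1 = a + bi, x_2 = a − bi, x_3 = x_4 = c the relevant inverse Laplace transform is g(t) = 2e^{−at} cos(bt) + 2e^{−ct}, which is nonnegative for 0 < c ≤ a since 2e^{−ct} ≥ 2e^{−at} ≥ |2e^{−at} cos(bt)|. A quick numerical sanity check (the sample parameters are our own choice):

```python
import math

def g(t, a, b, c):
    # inverse Laplace transform of (log P)' = sum_i (x + x_i)^(-1)
    # for roots x_1 = a + bi, x_2 = a - bi, x_3 = x_4 = c
    return 2 * math.exp(-a * t) * math.cos(b * t) + 2 * math.exp(-c * t)

a, b, c = 1.0, 5.0, 0.5  # 0 < c <= a
assert all(g(k * 0.01, a, b, c) >= 0 for k in range(5001))  # t in [0, 50]
```

By Lemma 2.4, nonnegativity of g on [0, ∞) is exactly what makes − log P conditionally completely monotone, i.e. B_P = (0, ∞).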
On the other hand, for a > 1 the function F −β has singularities in the right half-plane at x = log a ± iπ whenever β / ∈ {0, 1, 2, 3, . . .}, so it is not the Laplace transform of any distribution.
Example 2.6
It is an interesting problem [6,10,55,57,83,128] to determine the pairs (µ, λ) ∈ R² for which the function F_{µ,λ} is completely monotone on (0, ∞). It is easy to show that there is a function µ⋆(λ) such that F_{µ,λ} is completely monotone if and only if µ ≥ µ⋆(λ); furthermore, the function µ⋆ is subadditive. The state of current knowledge about µ⋆ is fragmentary; it seems to be an open problem even to prove that µ⋆ is continuous.

17 Functions f = F^{−1} for which f^β is completely monotone for all β > 0 are sometimes called logarithmically completely monotone [17].
18 The function − log P is conditionally completely monotone on (0, ∞) if and only if (log P)′ = Σ_i (x + x_i)^{−1} is completely monotone on (0, ∞); and this happens if and only if its inverse Laplace transform, which is g(t) = Σ_i e^{−t x_i}, is nonnegative on [0, ∞).
Constructions
In this section we discuss some general constructions by which new polynomials P with P −β completely monotone can be obtained from old ones. In the situations we have in mind, the vector space V decomposes as a direct sum V = V 1 ⊕ V 2 and the cone C is a product cone C = C 1 × C 2 (with C 1 ⊂ V 1 and C 2 ⊂ V 2 ). Since we shall be using the letters A, B, C, D in this section to denote functions, we shall write our cones as C.
Let us begin with a trivial fact: a function f(x, y) that is completely monotone on C_1 × C_2 can be specialized by fixing y to a specific point in C_2, and the resulting function will be completely monotone on C_1. In particular, this fixed value can then be taken to zero or infinity, and if the limit exists - possibly with some rescaling - then the limiting function is also completely monotone on C_1. Rather than stating a general theorem of this kind, let us just give the special case that we will need, which concerns functions of the form f(x, y) = [A(x) + y B(x)]^{−β}:

Lemma 3.1 Let V be a finite-dimensional real vector space, let C be an open convex cone in V, let A, B : C → (0, ∞), and let β ≥ 0. If the function f(x, y) = [A(x) + y B(x)]^{−β} is completely monotone on C × (0, ∞), then A^{−β} and B^{−β} are completely monotone on C.

Proof. Restrict to fixed y ∈ (0, ∞) and then take y ↓ 0; this proves that A^{−β} is completely monotone. Restrict to fixed y ∈ (0, ∞), multiply by y^β and then take y ↑ ∞; this proves that B^{−β} is completely monotone.
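The two limits in the proof can be watched numerically for the decomposition E_{2,3} = A + yB with A = x_1x_2 and B = x_1 + x_2 (a minimal sketch; the evaluation point and tolerances are our own choices):

```python
A = 2.0 * 3.0          # A(x) = x1*x2 at x = (2, 3)
B = 2.0 + 3.0          # B(x) = x1 + x2
beta = 0.5

f = lambda y: (A + y * B) ** -beta

# y -> 0 recovers A^(-beta):
assert abs(f(1e-12) - A ** -beta) < 1e-9
# y^beta * f(y) as y -> infinity recovers B^(-beta):
assert abs(1e12 ** beta * f(1e12) - B ** -beta) < 1e-9
```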
As we shall see later, this trivial lemma is an analytic version of deletion (y → 0) or contraction (y → ∞) for graphs or matroids. Let us also observe a simple but important fact about complete monotonicity for functions defined on a product cone C_1 × C_2:

Lemma 3.2 Let V_1 and V_2 be finite-dimensional real vector spaces, let C_1 ⊂ V_1 and C_2 ⊂ V_2 be open convex cones, and let f : C_1 × C_2 → R be a C^∞ function. Then the following are equivalent:

(a) f is completely monotone on C_1 × C_2.
(b) For all k ≥ 0, all y ∈ C_2, and all choices of vectors u_1, . . . , u_k ∈ C_2, the function

x ↦ (−1)^k (D_{u_1} · · · D_{u_k} f)(x, y)

(with the directional derivatives acting on the second argument) is completely monotone on C_1.

(c) For all k ≥ 0, all y ∈ C_2, and all choices of vectors u_1, . . . , u_k ∈ C_2, there exists a positive measure µ_{k,y,u_1,...,u_k} on C_1* such that

(−1)^k (D_{u_1} · · · D_{u_k} f)(x, y) = ∫_{C_1*} e^{−⟨ℓ,x⟩} dµ_{k,y,u_1,...,u_k}(ℓ) .

In particular, when V_2 = R and C_2 = (0, ∞), (b) reduces to the statement that the functions F_{k,y}(x) = (−1)^k ∂^k f/∂y^k are completely monotone on C_1 for all k ≥ 0 and all y > 0, and (c) reduces analogously.

Lemma 3.3 Let V be a finite-dimensional real vector space, let C be an open convex cone in V, let A : C → [0, ∞) and B : C → (0, ∞), and let β > 0. Then the following are equivalent:

(a) [A(x) + y B(x)]^{−β} is completely monotone on C × (0, ∞).
(b) B^{−β} exp(−tA/B) is completely monotone on C for all t ≥ 0.
(c) B^{p−β} (A + zB)^{−p} is completely monotone on C for all p ≥ 0 and all z ≥ 0.

Proof. We have the Laplace-transform formula

[A(x) + y B(x)]^{−β} = Γ(β)^{−1} ∫_0^∞ e^{−yt} t^{β−1} B(x)^{−β} e^{−tA(x)/B(x)} dt .

Therefore, Lemma 3.2(a) ⇐⇒ (c) with C_1 = (0, ∞) and C_2 = C proves the equivalence of (a) and (b). Now assume that B^{−β} exp(−tA/B) is completely monotone on C for all t ≥ 0. Then we can multiply by e^{−zt} t^{p−1}/Γ(p) for any p > 0 and integrate over t ∈ (0, ∞), and the result will be completely monotone. This (together with a trivial evaluation at t = 0 to handle p = 0) shows that (b) =⇒ (c).
Corollary 3.4 We have B_{A+By} + B_B ⊆ B_{A+By}. In particular, if B_B = (0, ∞), then B_{A+By} is either the empty set, or all of (0, ∞), or a closed half-line [β⋆, ∞) with β⋆ > 0.

Proof. This follows immediately from Lemma 3.3(a) ⇐⇒ (b): for if β ∈ B_{A+By} and λ ∈ B_B, then B^{−β} exp(−tA/B) and B^{−λ} are both completely monotone on C, hence so is their product, hence β + λ ∈ B_{A+By}.

Lemma 3.3 leads to the following extremely important result (Proposition 3.5), which (as we shall see later) is an analytic version of parallel connection for graphs or matroids. We also have an analytic version of series connection for graphs or matroids (Proposition 3.6), but only for β ≥ 1/2. To prove Proposition 3.6, we begin with a lemma that we think is of independent interest; both the sufficiency and the necessity will play important roles for us.

Lemma 3.7 For β ∈ R and λ > 0, the function

F_{β,λ}(u, v) = (u + v)^{−β} exp(−λuv/(u + v))   (3.7)

is completely monotone on (0, ∞)² if and only if β ≥ 1/2. In particular, for β ≥ 1/2 there exists a positive measure µ_{β,λ} on [0, ∞)² such that

F_{β,λ}(u, v) = ∫_{[0,∞)²} e^{−su−tv} dµ_{β,λ}(s, t) .   (3.8)

Proof. "If": Since F_{β,λ} = (u + v)^{−(β−1/2)} F_{1/2,λ} and the prefactor (u + v)^{−(β−1/2)} is completely monotone when β ≥ 1/2, it suffices to prove the complete monotonicity for β = 1/2. But this follows immediately from the identity

F_{1/2,λ}(u, v) = π^{−1/2} ∫_{−∞}^{∞} e^{−s²u − (s−√λ)²v} ds ,   (3.9)

which is easily verified by completing the square in the Gaussian integral. The statement about the measure µ_{β,λ} then follows from the Bernstein-Hausdorff-Widder-Choquet theorem (Theorem 2.2).
"Only if": If F β,λ is completely monotone, then so is F β ′ ,λ for all β ′ > β; so it suffices to prove the failure of complete monotonicity for 0 < β < 1/2. Now, by Lemma 3.2, F β,λ is completely monotone on (0, ∞) 2 if and only if the functions are completely monotone on (0, ∞) for all k ≥ 0 and all v > 0, or equivalently if their inverse Laplace transforms with respect to u, (see [49, p. 245, eq. 5.6(35)]), are nonnegative for all k ≥ 0 and all t, v > 0 (here I β−1 is the modified Bessel function). For k = 0 this manifestly holds for all β ≥ 0; but let us now show that for k = 1 it holds only for β ≥ 1/2. We have (3.12) and we need the term in square brackets to be nonnegative for all t, v > 0. Write x = 2v √ λt and eliminate v in favor of x; we need for all t, x > 0. This quadratic in √ t is nonnegative for all t > 0 if and only if (3.14) But using the large-x asymptotic expansion we see that which is < −1 for all sufficiently large x whenever β < 1/2.
Remarks. 1. It is obvious by rescaling of u and v that, for any given β, the functions F_{β,λ} are either completely monotone for all λ > 0 or for none.

2. The appeal to the Bernstein-Hausdorff-Widder-Choquet theorem can be avoided: for β = 1/2, the integral representation (3.9) already provides the desired measure µ_{1/2,λ}; and for β > 1/2, (3.9) together with the representation

(u + v)^{−(β−1/2)} = Γ(β − 1/2)^{−1} ∫_0^∞ t^{β−3/2} e^{−t(u+v)} dt   (3.17)

represents µ_{β,λ} as the convolution of two positive measures. Indeed, multiplying (3.9) by (3.17), one obtains after a straightforward change of variables the explicit formula (3.18), with (3.19) and (3.20). The constraint χ(t_1, t_2, λ) = 0 states simply that √t_1, √t_2, √λ form the sides of a triangle; and P(t_1, t_2, λ) is precisely 16 times the square of the area of this triangle (Heron's formula [42, Section 3.2]). 19 In view of these explicit formulae, the proof of Lemma 3.7 is in fact completely elementary.
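The β = 1/2 Gaussian-integral identity invoked in the proof of Lemma 3.7 can also be checked numerically. The specific form tested below, F_{1/2,λ}(u, v) = (u + v)^{−1/2} e^{−λuv/(u+v)} = π^{−1/2} ∫ e^{−s²u − (s−√λ)²v} ds, is our reconstruction of (3.9); it follows by completing the square, us² + v(s − √λ)² = (u + v)(s − v√λ/(u + v))² + λuv/(u + v):

```python
import math

def lhs(u, v, lam):
    return (u + v) ** -0.5 * math.exp(-lam * u * v / (u + v))

def rhs(u, v, lam, n=200000, lo=-15.0, hi=15.0):
    # trapezoid approximation of pi^(-1/2) * integral over s of
    # exp(-u s^2 - v (s - sqrt(lam))^2); the integrand is a Gaussian,
    # so the truncated trapezoid rule is extremely accurate here
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        s = lo + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * math.exp(-u * s * s - v * (s - math.sqrt(lam)) ** 2)
    return total * h / math.sqrt(math.pi)

assert abs(lhs(2.0, 3.0, 1.5) - rhs(2.0, 3.0, 1.5)) < 1e-9
```

Note that the representing measure is the image of (π^{−1/2} ds) under s ↦ (s², (s − √λ)²), which is indeed a positive measure on [0, ∞)².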
3. It would be interesting to know whether Lemma 3.7 can be generalized to other ratios of elementary symmetric polynomials, e.g. E_{r,n}^{−β} exp(−λ E_{r+1,n}/E_{r,n}).

Using Lemma 3.7 we can also show that the spanning-tree polynomial of the 3-cycle - or equivalently, the elementary symmetric polynomial of degree 2 in three variables - has the property that P^{−β} is completely monotone on (0, ∞)³ if and only if β ≥ 1/2:

Proposition 3.8 Let F = E_{2,3}^{−β} = (x_1x_2 + x_1x_3 + x_2x_3)^{−β}. Then F is completely monotone on (0, ∞)³ if and only if β = 0 or β ≥ 1/2.

Proof. As always, F is completely monotone for β = 0 and not completely monotone for β < 0, so it suffices to consider β > 0. Using Lemma 3.3 with A = x_1x_2, B = x_1 + x_2, y = x_3, we see that F is completely monotone on (0, ∞)³ if and only if the function F_{β,λ} defined in (3.7) is completely monotone for all λ ≥ 0. But by Lemma 3.7, this occurs if and only if β ≥ 1/2.
Remarks. 1. We shall later give two further independent proofs of Proposition 3.8: one based on harmonic analysis on the cone of positive-definite m × m real symmetric matrices specialized to m = 2 [Corollary 1.5, which follows from results to be proved in Section 4, together with Proposition 7.9(a)], and one based on harmonic analysis on the Lorentz cone in R^n specialized to n = 3 [Corollary 1.10, which follows from results to be proved in Section 5]. The point here is that the Jordan algebra Sym(2, R) ≃ R × R² can be viewed as a member of two different families of Jordan algebras: Sym(m, R) and R × R^{n−1} [53, p. 98].
3. Proposition 3.8 is, of course, just Theorem 1.1 ′ restricted to the 3-cycle G = K 3 . In particular it implies Szegő's [116] result (except the strict positivity) for the polynomial (1.1) in the special case n = 3.
4. Combining (3.4) with (3.8)/(3.18), we obtain for β > 1/2 the formula (3.21), which provides an explicit elementary proof of the direct ("if") half of Proposition 3.8. See also Remark 1 in Section 4.3 for an alternate derivation of (3.21), and see Corollary 5.8 for a generalization from E_{2,3} to E_{2,n}.
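The Heron's-formula interpretation in Remark 2 above is easy to spot-check numerically. The explicit polynomial P is not reproduced above, so the sketch below uses the standard expansion 16·Area² = 2(t_1 t_2 + t_1 λ + t_2 λ) − (t_1² + t_2² + λ²) in the squared side lengths as the candidate, and compares it with the classical form of Heron's formula:

```python
import math

def sixteen_area_sq(t1, t2, lam):
    """16 * (area of the triangle with sides sqrt(t1), sqrt(t2), sqrt(lam))^2,
    computed via the classical Heron formula."""
    a, b, c = math.sqrt(t1), math.sqrt(t2), math.sqrt(lam)
    s = (a + b + c) / 2
    return 16 * s * (s - a) * (s - b) * (s - c)

def P_candidate(t1, t2, lam):
    # Standard expansion of 16*Area^2 as a polynomial in the squared side lengths;
    # presumably the polynomial P of the remark (an assumption, not from the text).
    return 2 * (t1 * t2 + t1 * lam + t2 * lam) - (t1 ** 2 + t2 ** 2 + lam ** 2)
```

For the 3-4-5 triangle (t_1, t_2, λ) = (9, 16, 25), both expressions equal 16·6² = 576.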
The following generalization of Lemma 3.7 is an analytic version of series extension of a single edge: Lemma 3.9 Let V be a finite-dimensional real vector space and let C be an open convex cone in V . Let f be completely monotone on C × (0, ∞), and let β ≥ 1/2.
Proof. By the Bernstein-Hausdorff-Widder-Choquet theorem (Theorem 2.2) and linearity, it suffices to prove the lemma for f(x, y) = exp(−⟨ℓ, x⟩ − λy) with ℓ ∈ C* and λ ≥ 0. The variable x now simply goes for the ride, so that the claim follows immediately from Lemma 3.7.
Determinantal polynomials
In this section we consider polynomials defined by determinants as in (1.4). We begin with some preliminary algebraic facts about such determinantal polynomials. After this, we turn to the analytic results that are our principal concern. We first prove a simple abstract version of the half-plane property for determinantal polynomials. Then we turn to the main topic of this section, namely, the proof of Theorems 1.2, 1.3 and 1.4.

Algebraic preliminaries

We begin with a simple formula for the determinant of the sum of two matrices, which ought to be found in every textbook of matrix theory but seems to be surprisingly little known (it can be found in [80]). We remark that, by the same method, one can prove a formula analogous to (4.1) in which all three occurrences of determinant are replaced by permanent and the factor ǫ(I, J) is omitted.

Proof. Using the definition of determinant and expanding the products, we have

where the outermost sum runs over all permutations π of [m]. Define now J = π[I].
Then we can interchange the order of summation: Let π′ ∈ S_k and π′′ ∈ S_{m−k} be the permutations induced by π on I and on its complement, respectively. It is easy to see that sgn(π) = sgn(π′) sgn(π′′) ǫ(I, J). The formula (4.1) then follows by using twice again the definition of determinant.
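The expansion (4.1) can be verified by brute force on small matrices. The display itself is not reproduced above, so the sketch below assumes the standard form of the identity: det(A+B) is a sum over pairs of equal-size index sets (I, J), with sign ǫ(I, J) = (−1)^{Σ I + Σ J} (indices counted from 1), of det A[I|J] · det B[Iᶜ|Jᶜ].

```python
from itertools import combinations

def det(M):
    # Laplace expansion along the first row (fine for tiny matrices).
    n = len(M)
    if n == 0:
        return 1
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def det_sum_expansion(A, B):
    """Expand det(A+B) as sum over I, J with |I| = |J| of
    eps(I, J) * det A[I|J] * det B[Ic|Jc], where A[I|J] is the submatrix
    of A with rows I and columns J.  With 0-based index sets the sign
    (-1)**(sum(I)+sum(J)) has the same parity as the 1-based convention."""
    m = len(A)
    idx = list(range(m))
    total = 0
    for k in range(m + 1):
        for I in combinations(idx, k):
            for J in combinations(idx, k):
                Ic = [i for i in idx if i not in I]
                Jc = [j for j in idx if j not in J]
                sub_a = [[A[i][j] for j in J] for i in I]
                sub_b = [[B[i][j] for j in Jc] for i in Ic]
                total += (-1) ** (sum(I) + sum(J)) * det(sub_a) * det(sub_b)
    return total
```

For 2 × 2 integer matrices one can check all six cross terms by hand; the test below also confirms a 3 × 3 example against the direct determinant.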
The following special case is frequently useful: where the sum runs over ordered partitions I = (I 1 , . . . , I n ) and J = (J 1 , . . . , J n ) of [m] into n possibly empty blocks satisfying |I k | = |J k | for all k; here ǫ(I, J) is the sign of the permutation taking I 1 I 2 · · · I n into J 1 J 2 · · · J n . We can now say something about determinantal polynomials of the type (1.4). Recall that if A is a (not-necessarily-square) matrix with elements in a commutative ring R, then the (determinantal) rank of A is defined to be the largest integer r such that A has a nonzero r × r minor; if no such minor exists (i.e., A = 0), we say that A has rank 0.
is a homogeneous polynomial of degree m with coefficients in R. Furthermore, the degree of P in the variable x i is ≤ rank(A i ).
[In particular, if each A i is of rank at most 1, then P is multiaffine.] Proof. Both assertions about P are immediate consequences of (4.6).
We are grateful to Andrea Sportiello for drawing our attention to Lemma 4.1 and its proof, and for showing us this elegant proof of Proposition 4.3.
An analogue of Proposition 4.3 holds also for hermitian quaternionic matrices, albeit with a different proof: here the determinant is the Moore determinant, and "rank" means left row rank (= right column rank); moreover, the polynomial P (x 1 , . . . , x n ) is defined initially by letting x 1 , . . . , x n be real numbers. See [108, Proposition A.8].
The half-plane property
Now we take an analytic point of view, so that the commutative ring R will be either R or C.
In this subsection we make a slight digression from our main theme, by showing that if the A i are complex hermitian positive-semidefinite matrices (this of course includes the special case of real symmetric positive-semidefinite matrices), then the determinantal polynomial (4.7) has the half-plane property. This turns out to be an easy extension of the proof of [37, Theorem 8.1(a)]. Indeed, we can go farther, by first stating the result in a clean abstract way, and then deducing the half-plane property for (4.7) as an immediate corollary.
has the half-plane property, that is, either P ≡ 0 or else P(x_1, . . . , x_n) ≠ 0 whenever Re x_i > 0 for all i.
First Proof of Proposition 4.4.
Let A ∈ C + iV, and let ψ be a nonzero vector in C^m. Then the Hermitian form ψ*Aψ = Σ_{i,j=1}^m ψ̄_i A_{ij} ψ_j has strictly positive real part, and in particular is nonzero; it follows that Aψ ≠ 0. Since this is true for every nonzero ψ ∈ C^m, we conclude that ker A = {0}, i.e. A is nonsingular; and this implies that (and is in fact equivalent to) det A ≠ 0. This is nonzero because all the eigenvalues of I + iP^{−1/2}QP^{−1/2} have real part equal to 1.
Proof of Corollary 4.5. If at least one of the matrices A_1, . . . , A_n is strictly positive definite, then Σ_{i=1}^n x_i A_i lies in C + iV whenever Re x_i > 0 for all i. Proposition 4.4 then implies that P(x_1, . . . , x_n) ≠ 0.
The general case can be reduced to this one by replacing A_i → A_i + ǫI (ǫ > 0) and taking ǫ ↓ 0: by Hurwitz's theorem, the limiting function is either nonvanishing on the product of open right half-planes or else is identically zero.

In order to make the elementary nature of the proof as transparent as possible, we proceed as follows: First we prove the direct ("if") half of Theorem 1.3(a,b) by completely elementary methods, without reference to Euclidean Jordan algebras. Then we prove Theorem 1.4: we will see that the proof of the direct half of this theorem is a straightforward abstraction of the preceding elementary proof in the concrete cases; only the converse ("only if") half is really deep.
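Corollary 4.5 is easy to illustrate numerically for m = 2 with rank-one positive-semidefinite matrices A_i = v_i v_iᵀ (a spot-check at a few sample points, not a proof; the vectors below are arbitrary choices):

```python
def cdet2(M):
    # Determinant of a 2x2 (possibly complex) matrix.
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def P(x, vs):
    """P(x_1, ..., x_n) = det(sum_i x_i v_i v_i^T) for 2-dimensional real
    vectors v_i; each v_i v_i^T is a rank-one positive-semidefinite matrix,
    so P should be nonvanishing when every Re x_i > 0."""
    A = [[sum(xi * v[a] * v[b] for xi, v in zip(x, vs)) for b in range(2)]
         for a in range(2)]
    return cdet2(A)

# With these vectors, P(x) = x1*x2 + x1*x3 + x2*x3, the polynomial of Proposition 3.8.
vs = [(1, 0), (0, 1), (1, 1)]
```

The test evaluates P at sample points with strictly positive real parts and confirms it stays away from zero there.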
Proof of the direct half of Theorem 1.3(a,b). Let us begin with the real case and β = 1/2. We use the Gaussian integral representation (4.10), where A is a real symmetric positive-definite m × m matrix and we have written x = (x_1, . . . , x_m) ∈ R^m. This exhibits (det A)^{−1/2} as the Laplace transform of a positive measure on Π_m(R)* = Π_m(R), namely, the push-forward of Lebesgue measure dx/π^{m/2} on R^m by the map x → xx^T. [We remark that this measure is supported on positive-semidefinite matrices of rank 1.] Alternatively, one can see directly, by differentiating under the integral sign in (4.10), that the k-fold directional derivative of (det A)^{−1/2} in directions B_1, . . . , B_k ∈ Π_m(R) has sign (−1)^k, because each derivative brings down a factor −x^T B_i x ≤ 0. Since a product of completely monotone functions is completely monotone, it follows immediately that (det A)^{−N/2} is completely monotone for all positive integers N. We remark that this can alternatively be seen from the Gaussian integral representation (4.11), where we have introduced vectors x^{(α)} ∈ R^m for α = 1, . . . , N. If we assemble these vectors into an m × N matrix X, then we have exhibited (det A)^{−N/2} as the Laplace transform of a positive measure on Π_m(R), namely, the push-forward of Lebesgue measure dX/π^{mN/2} on R^{m×N} by the map X → XX^T. This measure is supported on positive-semidefinite matrices of rank min(N, m). Finally, for real values of β > (m − 1)/2, we use the integral representation (4.12).

The proof in the complex case is completely analogous. For β = 1 we use the Gaussian integral representation (4.13), where A is a complex hermitian positive-definite matrix, z = (z_1, . . . , z_m) ∈ C^m, and the bar denotes complex conjugation. For real values of β > m − 1, we use the integral representation (4.14), where the integration runs over complex hermitian positive-definite m × m matrices B, with measure (4.15).

Remarks. 1.
By applying (4.12) for the case m = 2 to the matrix A = ((x_1 + x_3, x_3), (x_3, x_2 + x_3)), whose determinant is x_1 x_2 + x_1 x_3 + x_2 x_3, we obtain after a bit of algebra an alternate derivation of the formula (3.21) for (x_1 x_2 + x_1 x_3 + x_2 x_3)^{−β}, valid for β > 1/2. In particular it implies the direct ("if") half of Proposition 3.8, and hence also Szegő's [116] result (except the strict positivity) for the polynomial (1.1) in the special case n = 3.
2.
Similarly, by combining (4.14)/(4.15) for the case m = 2 with (1.6)/(1.7), we obtain after some algebra an explicit formula (4.16) for E_{2,4}(x)^{−β}, valid for β > 1. This formula provides an explicit elementary proof of the direct half of Corollary 1.6 - and in particular solves the Lewy-Askey problem - in the same way that (3.21) provides an explicit elementary proof of the direct half of Proposition 3.8. Note also that by setting x_4 = 0 in (4.16) and performing the integral over t_4, we obtain (3.21). Finally, see Corollary 5.8 for a generalization from E_{2,3} and E_{2,4} to E_{2,n}.
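The Gaussian representation (4.10) underlying these formulae is easy to spot-check numerically for m = 2: since the integrand decays rapidly, a plain trapezoidal sum over a large box already matches (det A)^{−1/2} to high accuracy. The positive-definite matrix below is an arbitrary example, not from the text.

```python
import math

def gaussian_det_check(a11, a12, a22, L=6.0, h=0.1):
    """Compare pi^{-m/2} * integral of exp(-x^T A x) over [-L, L]^2 (m = 2)
    against the closed form (det A)^{-1/2}; A = [[a11, a12], [a12, a22]]
    must be positive definite."""
    total = 0.0
    n = int(round(2 * L / h))
    for i in range(n + 1):
        x = -L + i * h
        for j in range(n + 1):
            y = -L + j * h
            q = a11 * x * x + 2 * a12 * x * y + a22 * y * y
            total += math.exp(-q)
    integral = total * h * h / math.pi          # divide by pi^{m/2} with m = 2
    closed_form = 1.0 / math.sqrt(a11 * a22 - a12 * a12)
    return integral, closed_form
```

For A = [[2, 0.5], [0.5, 1]] (det A = 1.75), both sides come out to about 0.7559.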
As preparation for the proof of Theorem 1.4, let us review some facts from the theory of harmonic analysis on Euclidean Jordan algebras (see [53] for definitions and further background).
Let V be a simple Euclidean (real) Jordan algebra of dimension n and rank r, with Peirce subspaces V_{ij} of dimension d; recall that n = r + (d/2) r(r − 1). It is illuminating (though not logically necessary) to know that there are precisely five cases: (a) the algebras Sym(m, R) of real symmetric matrices; (b) the algebras Herm(m, C) of complex hermitian matrices; (c) the algebras Herm(m, H) of quaternionic hermitian matrices; (d) the exceptional algebra Herm(3, O) of octonionic hermitian 3 × 3 matrices; and (e) the spin factors R × R^{n−1}.

We denote by (x|y) = tr(xy) the inner product on V, where tr is the Jordan trace and xy is the Jordan product. Let Ω ⊂ V be the positive cone (i.e. the interior of the set of squares in V, or equivalently the set of invertible squares in V); it is open, convex and self-dual. We denote by ∆(x) the Jordan determinant on V: it is a homogeneous polynomial of degree r on V, which is strictly positive on Ω and vanishes on ∂Ω. [In cases (a) and (b), the Jordan determinant is the ordinary determinant; in case (c) it is the Moore determinant (see [108, Appendix A]); in case (d) it is the Freudenthal determinant [56,46,13,84]; and in case (e) it is the Lorentz quadratic form ∆(x_0, x) = x_0^2 − x^2.]

We have the following fundamental Laplace-transform formula [53, Corollary VII.1.3]: for y ∈ Ω and Re α > (r − 1)d/2 = n/r − 1, we have (4.18), where dx is Lebesgue measure on the Euclidean space V with inner product (·|·); in case (a), (4.18)/(4.19) agrees with (4.12), and in case (b) with (4.14). Thus, for Re α > (r − 1)d/2, the function ∆(x)^{α−n/r}/Γ_Ω(α) is locally integrable on Ω and polynomially bounded; it therefore defines a tempered distribution R_α on V by the usual formula (4.20). Using (4.18), a beautiful argument - which is a special case of I. Bernstein's general method for analytically continuing distributions of the form P^λ [19,20] - shows that the distributions R_α can be analytically continued to the whole complex α-plane:

Theorem 4.6 The distributions R_α can be analytically continued to the whole complex α-plane as a tempered-distribution-valued entire function of α. The distributions R_α have support contained in Ω̄ and have the
following properties (4.21a-d) (here δ denotes the Dirac measure at 0). Finally, the Laplace transform of R_α is given by (4.22), for y in the complex tube Ω + iV.

The distributions {R_α}_{α∈C} constructed in Theorem 4.6 are called the Riesz distributions on the Euclidean Jordan algebra V. It is fairly easy to find a sufficient condition for a Riesz distribution to be a positive measure:
(b) For α > (r − 1)d/2, the Riesz distribution R_α is a positive measure that is supported on Ω̄ and given there by a density (with respect to Lebesgue measure) that lies in L^1_loc(Ω).
Indeed, part (b) is immediate from the definition (4.20), while part (a) follows by reasoning that abstracts the constructions given in (4.10)/(4.11) and (4.13) above for the special cases of real symmetric and complex hermitian matrices. [The property (4.21d) is not explicitly stated in [53], but for Re α > (r − 1)d/2 it is an immediate consequence of (4.19)/(4.20), and then for other values of α it follows by analytic continuation (see also [65, Proposition 3.1(iii) and Remark 3.2]). Thus, for integer N ≥ 0 in the real symmetric (resp. complex hermitian) case, the positive measure R_{N/2} is supported on the positive-semidefinite matrices of rank min(N, m) [53, Proposition VII.2.3] and is nothing other than the push-forward of Lebesgue measure on R^{m×N} (resp. C^{m×N}) by the map X → XX^T (resp. X → XX*), as discussed above during the proof of the direct half of Theorem 1.3(a). For N ≥ m this is a straightforward calculation [114,21,50,87,5], which shows the equivalence of (4.11) and (4.12) [or the corresponding formulae in the complex case]; for 0 ≤ N ≤ m − 1 it follows by comparing (4.11) with (4.22) and invoking the injectivity of the Laplace transform on the space of distributions [107, p. 306].]

It is a highly nontrivial fact that the converse of Proposition 4.7 also holds (Theorem 4.8). This fundamental fact was first proven by Gindikin [60] (see also [16,72]) and is generally considered to be deep. However, there now exist two elementary proofs: one that is a fairly simple but clever application of Theorem 4.6 and Proposition 4.7 [109,31] [113, Appendix], and another that analyzes the integrability of ∆(x)^{α−n/r} near ∂Ω and characterizes those α ∈ C for which R_α is a locally finite complex measure [113]. In [108, Appendix B] we give the first of these proofs; it is amazingly short.
Remark. The formulae in this section arise in multivariate statistics in connection with the Wishart distribution [87,5]; in recent decades some statisticians have introduced the formalism of Euclidean Jordan algebras as a unifying device [22,31,32,81,82]. These formulae also arise in quantum field theory in studying the analytic continuation of Feynman integrals to "complex space-time dimension" [114,21,50,30].
Quadratic forms
In this section we consider quadratic forms (= homogeneous polynomials of degree 2). We begin by proving an abstract theorem giving a necessary and sufficient condition for such a quadratic form to be nonvanishing in a complex tube C + iV ; in the special case C = (0, ∞) n this corresponds to the half-plane property. We then employ these results as one ingredient in our proof of Theorem 1.9.
The half-plane property
In this subsection we proceed in three steps. First we study the analytic geometry associated to a symmetric bilinear form B on a finite-dimensional real vector space V (Lemma 5.1). Next we extend the quadratic form Q(x) = B(x, x) to the complexified space V +iV and study the values it takes (Proposition 5.2). Finally we introduce the additional structure of an open convex cone C ⊆ V on which Q is assumed strictly positive (Corollary 5.3 and Theorem 5.4).
So let V be a finite-dimensional real vector space, and let B: V × V → R be a symmetric bilinear form having inertia (n_+, n_−, n_0). Define S_+ = {x: B(x, x) > 0} and S_− = {x: B(x, x) < 0}. Clearly S_+ and S_− are open cones (not in general convex or even connected). Indeed, S_+ and S_− are never convex (except when they are empty), because x ∈ S_± implies −x ∈ S_± but manifestly 0 ∉ S_±. Many of our proofs will involve choosing a basis in V (and hence identifying V with R^n) in such a way that B takes the canonical form (5.1). Moreover, whenever S_+ ≠ ∅ we can choose the basis in this construction such that the first coordinate direction lies along any desired vector x ∈ S_+: that this can be done follows from the standard Gram-Schmidt proof of the canonical form (5.1). Elementary analytic geometry gives:

Lemma 5.1 Let V be a finite-dimensional real vector space, and let B be a symmetric bilinear form on V having inertia (n_+, n_−, n_0).
(a) If n_+ = 0, then S_+ = ∅. If n_+ ≥ 2, then for every x ∈ S_+ the set T_+(x) is a nonempty open non-convex cone that is contained in S_+ and has a nonempty intersection with every neighborhood of x. Analogous statements hold for S_− when n_− = 0, n_− = 1 or n_− ≥ 2.
Proof. (a) is trivial.

(b) Assume that B takes the canonical form (5.1) with n_+ = 1, and define C to be the "forward light cone" (5.4). It is immediate that S_+ = C ∪ (−C) and C ∩ (−C) = ∅, and the statements about S_+ follow easily. Now consider any x, y ∈ C and define g(α) = B(x + αy, x + αy). We have g(0) = B(x, x) > 0; but for the special value α⋆ = −x_1/y_1 the vector x + α⋆y has its first component equal to zero and hence g(α⋆) = B(x + α⋆y, x + α⋆y) ≤ 0. So the quadratic equation g(α) = 0 has a real solution, which implies that its discriminant is nonnegative, i.e. that B(x, y)^2 ≥ B(x, x) B(y, y). Next assume that x ∈ S_+ and y ∈ V. If B(y, y) ≤ 0, the assertion is trivial; so we can assume that x, y ∈ S_+. By the replacements x → ±x and y → ±y (which do not affect the desired conclusion) we may assume that x, y ∈ C. But in this case the desired inequality has already been proven.
Finally, using B(x, y) > 0 for x, y ∈ C it is easily checked that C is convex. (c) Clearly S + is nonempty; and as explained earlier it is non-convex. To prove that S + is connected, we can assume that B takes the form (5.1) with n + ≥ 2. It is now sufficient to find a path in S + from an arbitrary vector x ∈ S + to the vector e 1 = (1, 0, . . . , 0). But this is easy: first move coordinates x i with i > n + monotonically to zero [this increases B(x, x) monotonically and hence stays in S + ]; then rotate and scale inside the subspace spanned by the first n + coordinates to obtain e 1 . Now assume that B takes the canonical form (5.1) with n + ≥ 2 and with the given vector x ∈ S + lying along the first coordinate direction. Then an easy computation shows that y = (y 1 , y 2 , . . . , y n ) belongs to T + (x) if and only if y ′ ≡ (0, y 2 , . . . , y n ) belongs to S + . Therefore, the preceding results (b,c) applied with n + replaced by n + − 1 show that T + (x) is a nonempty open non-convex cone, which is obviously contained in S + ; and by taking y ′ small we see that T + (x) meets every neighborhood of x. Moreover, Rx + Ry = Rx + Ry ′ is a two-dimensional subspace if and only if y ′ = 0 (i.e. y / ∈ Rx); and since B(x, y ′ ) = 0, we have Rx + Ry ′ ⊆ S + ∪ {0} if and only if y ′ ∈ S + ∪ {0}.
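The reverse Schwarz inequality of Lemma 5.1(b) is easy to spot-check numerically for the Lorentz form in canonical form (a sketch with arbitrary sample vectors in n = 3 variables):

```python
def lorentz_B(x, y):
    """Bilinear form of inertia (1, n-1, 0) in canonical form:
    B(x, y) = x_1 y_1 - x_2 y_2 - ... - x_n y_n."""
    return x[0] * y[0] - sum(a * b for a, b in zip(x[1:], y[1:]))

def reverse_schwarz_holds(x, y, tol=1e-12):
    """Lemma 5.1(b): if B(x, x) > 0 (i.e. x lies in S+), then every y satisfies
    the *reverse* Schwarz inequality B(x, y)^2 >= B(x, x) B(y, y)."""
    assert lorentz_B(x, x) > 0          # x must be in S+
    return lorentz_B(x, y) ** 2 >= lorentz_B(x, x) * lorentz_B(y, y) - tol
```

Note the contrast with the usual Cauchy-Schwarz inequality of a positive-definite form: here the inequality runs the other way.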
Remark. Note the sharp contrast between parts (b) and (c): in the latter case, given any x ∈ S + there is a nonempty open cone of vectors y satisfying the Schwarz inequality (strictly) with x; while in the former case all vectors y ∈ V satisfy the reverse Schwarz inequality with x.
We now consider the quadratic form Q(x) = B(x, x), extended to the complexified space V + iV in the obvious way: Q(x + iy) = B(x, x) − B(y, y) + 2iB(x, y). We want to study the values taken by Q in the complex tubes S + + iV and S − + iV , and in particular the presence or absence of zeros. We write H to denote the open right half-plane {ζ ∈ C: Re ζ > 0}.
Proposition 5.2 Let V be a finite-dimensional real vector space, let B be a symmetric bilinear form on V having inertia (n + , n − , n 0 ), and let Q be the associated quadratic form extended to V + iV . In particular, Q is nonvanishing on S + + iV .
(b) If n_+ ≥ 2, then Q has zeros in the tube S_+ + iV. In particular, for each x ∈ S_+ and y ∈ T_+(x) there exists z ∈ Rx + Ry such that Q(x + iz) = 0.
Now we introduce the additional structure of an open convex cone C ⊆ V on which Q is assumed strictly positive (i.e. C ⊆ S + ). The hypotheses in the following result are identical to those of Theorem 1.9.
Proof of Theorem 1.9
Proof of Theorem 1.9. We are concerned with the complete monotonicity of Q −β , where Q(x) = B(x, x). As always, it suffices to consider β > 0, because complete monotonicity trivially holds when β = 0 and never holds when β < 0 (because Q grows at infinity).
We assume that B takes the canonical form (5.1) on V = R^n, and we consider separately the three cases:

(a) The case n_+ = 1, n_− = 0 is trivial: we have Q(x) = x_1^2, and the convex cone C must be contained in one of the half-spaces {x: x_1 > 0} or {x: x_1 < 0}; and Q(x)^{−β} = x_1^{−2β} is clearly completely monotone on each of these two half-spaces.

(b) Next consider the case n_+ = 1, n_− ≥ 1: here (5.1) is the Lorentz form in one "timelike" variable and n_− "spacelike" variables, and we have S_+ = C ∪ (−C) where C is the forward light cone (5.4). By Corollary 5.3(a), any open convex cone C′ ⊆ S_+ satisfies either C′ ⊆ C or C′ ⊆ −C; let us suppose the former.
Let us now show that if β ≥ (n_− − 1)/2, then the map x → Q(x)^{−β} is completely monotone on C. The variables x_{n_−+2}, . . . , x_n play no role in this, so we can assume without loss of generality that n_0 = 0, i.e. n = n_− + 1. For β > (n_− − 1)/2 = (n − 2)/2, the desired complete monotonicity then follows from the integral representation (5.7) [102, pp. 31-34].

Conversely, suppose that this map is completely monotone on C′ for some β: then by the Bernstein-Hausdorff-Widder-Choquet theorem (Theorem 2.2), we must have (assuming again without loss of generality that n_0 = 0) Q(x)^{−β} = ∫ e^{−⟨y,x⟩} dµ_β(y) for some positive measure µ_β supported on (C′)*. Now, any such measure must clearly be Lorentz-invariant and homogeneous of degree 2β − n (this follows from the injectivity of the Laplace transform). Furthermore, µ_β must be supported on C̄, for otherwise the support would contain a spacelike hyperboloid {y ∈ R^n: y_1^2 − y_2^2 − ⋯ − y_n^2 = λ} for some λ < 0, whose convex hull is all of R^n and hence not contained in the proper cone (C′)*. On the other hand, every Lorentz-invariant locally-finite positive measure on R^n that is supported on C̄ is of the form µ = c δ_0 plus a superposition, with weight ρ, of invariant measures on the mass hyperboloids [101, Theorem IX.33, pp. 70-76]; in the case c ≥ 0, ρ = 0, the measure µ is homogeneous of degree −n.
This proves that a positive measure µ β can exist only if β = 0 or β ≥ (n − 2)/2. (c) Finally, consider the case n + > 1. By Corollary 5.3 (b), Q has zeros in the tube C ′ + iV for every nonempty open convex subcone C ′ ⊆ S + (indeed, for every nonempty subset C ′ ⊆ S + ). We conclude by Corollary 2.3 that Q −β cannot be completely monotone on C ′ for any β > 0.
Remark. More can be said about the integral representation (5.7). It turns out that the quantity (5.11), which is initially defined for β > (n − 2)/2 as a positive measure [or for Re β > (n − 2)/2 as a complex measure] on R^n (and which is of course supported on C̄), can be analytically continued as a tempered-distribution-valued entire function of β [48] [53, Theorem VII.2.2]: this is the Riesz distribution R_β on the Euclidean Jordan algebra R × R^{n−1}. [A slightly different normalization is used in [53], arising from the fact that the Jordan inner product on R × R^{n−1} is ⟨(x_0, x)|(y_0, y)⟩ = 2(x_0 y_0 + x · y): this has the consequence that dx_Jordan = 2^{n/2} dx_ordinary, and also the Laplace transform is written with an extra factor 2 in the exponential. The change of sign from x_0 y_0 − x · y to x_0 y_0 + x · y is irrelevant, because the Riesz distribution R_β(y) is invariant under the reflections y_i → −y_i for 2 ≤ i ≤ n.]

The integral representation (5.7), where x ∈ C, then holds for all complex β, by analytic continuation. However, the distribution R_β is a positive measure if and only if either β = 0 or β ≥ (n − 2)/2. This follows from general results of harmonic analysis on Euclidean Jordan algebras (i.e. Theorem 4.8), but we have given here a direct elementary proof. Indeed, once one has in hand the fundamental properties of the Riesz distribution R_β, one obtains an even simpler elementary proof by observing that (5.11) is not locally integrable near the boundary of the cone C when Re β ≤ (n − 2)/2 and β ≠ (n − 2)/2, hence [113, Lemma 2.1] the distribution R_β is not a locally finite complex measure in these cases.
So let A be a real symmetric n × n matrix with one positive eigenvalue, n − 1 negative eigenvalues, and no zero eigenvalues. We first need a slight refinement of Lemma 5.1 (b) to take advantage of the fact that we now have n 0 = 0; for simplicity we state it in the "concrete" situation V = R n .
Lemma 5.5 Fix n ≥ 2, and let A be a real symmetric n × n matrix with one positive eigenvalue, n − 1 negative eigenvalues, and no zero eigenvalues. Then there exists a nonempty open convex cone C ⊂ R^n (which is uniquely determined modulo a sign) such that {x ∈ R^n: x^T A x > 0} = C ∪ (−C); moreover, AC is the open dual cone to C.
Proof. We can write A = S T L n S where S is a nonsingular real matrix. Then the claims follow easily from the corresponding properties of the Lorentz quadratic form.
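A concrete instance of Lemma 5.5 (an illustrative example, not from the text): for A = [[0, 1], [1, 0]], which has eigenvalues +1 and −1, the set {x: xᵀAx > 0} splits as C ∪ (−C) with C the open positive quadrant. A minimal numerical check on a sample grid:

```python
def Q(x1, x2):
    # Quadratic form of A = [[0, 1], [1, 0]] (signature (1, 1)): x^T A x = 2 x1 x2.
    return 2 * x1 * x2

def in_C(x1, x2):
    # Candidate open convex cone C: the open positive quadrant.
    return x1 > 0 and x2 > 0

def splits_correctly(grid):
    """Check that {Q > 0} coincides with C ∪ (-C) on a finite sample grid."""
    for x1 in grid:
        for x2 in grid:
            positive = Q(x1, x2) > 0
            in_union = in_C(x1, x2) or in_C(-x1, -x2)
            if positive != in_union:
                return False
    return True
```

Here 2x_1x_2 > 0 exactly when x_1 and x_2 have the same (nonzero) sign, which is the quadrant condition.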
Proposition 5.6 Fix n ≥ 2, and let A be a real symmetric n × n matrix with one positive eigenvalue, n − 1 negative eigenvalues, and no zero eigenvalues; and let C be the open convex cone from Lemma 5.5. Then for β > (n − 2)/2 we have the integral representation (5.15).

Proof. Write A = S^T L_n S where S is a nonsingular real matrix, and make the changes of variable y = Sy′ and x = L_n S^{−T} x′ in (5.7). Then dy = |det S| dy′ where |det S| = |det A|^{1/2}; and the formula (5.15) follows immediately after dropping primes.
Positive-definite functions on cones
In this section we recall briefly the theory of positive-definite functions (in the semigroup sense) on convex cones, which closely parallels the theory of completely monotone functions developed in Section 2 and indeed can be considered as a natural extension of it. We then apply this theory to powers of the determinant on a Euclidean Jordan algebra, and derive (in Theorem 6.5) a strengthening of Theorem 1.4. As an application of this latter result, we disprove (in Example 6.6) a recent conjecture of Gurau, Magnen and Rivasseau [64].
This section is not required for the application to graphs and matroids (Section 7).
General theory
Here we summarize the basic definitions and results from the theory of positive-definite functions on convex cones and, more generally, on convex sets. A plethora of useful additional information concerning positive-definite (and related) functions on semigroups can be found in the monograph by Berg, Christensen and Ressel [18].
Similarly, a function f: C + iV → C is termed positive-definite in the involutive-semigroup sense if for all n ≥ 1 and all x_1, . . . , x_n ∈ C + iV, the matrix {f(x_i + x̄_j)}_{i,j=1}^n is positive-semidefinite.
Theorem 6.2 Let V be a finite-dimensional real vector space, let C be an open convex cone in V , and let f : C → R. Then the following are equivalent: (a) f is continuous and positive-definite in the semigroup sense.
(b) f extends to an analytic function on the tube C + iV that is positive-definite in the involutive-semigroup sense.
(c) There exists a positive measure µ on V* satisfying (6.2), i.e. f(x) = ∫_{V*} e^{−⟨ℓ,x⟩} dµ(ℓ), for all x ∈ C.
Moreover, in this case the measure µ is unique, and the analytic extension to C + iV is given by (6.2).
Please note that the completely monotone functions (Theorem 2.2) correspond to the subset of positive-definite functions that are bounded at infinity (in the sense that f is bounded on the set x + C for each x ∈ C), or equivalently decreasing (with respect to the order induced by the cone C), or equivalently for which the measure µ is supported on the closed dual cone C * [rather than on the whole space V * as in Theorem 6.2 (c)]. See [89, Lemma 1, p. 579] for a direct proof that complete monotonicity implies positive-definiteness.
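To illustrate the inclusion just described: the completely monotone function f(x) = 1/x is the Laplace transform of Lebesgue measure on (0, ∞), so the matrices {f(x_i + x_j)} are Gram matrices ∫ e^{−x_i t} e^{−x_j t} dt and hence positive-semidefinite. A numerical spot-check via Sylvester's criterion (the sample points are arbitrary choices):

```python
def det(M):
    # Laplace expansion along the first row (fine for tiny matrices).
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def is_positive_definite(M, tol=1e-10):
    """Sylvester's criterion: all leading principal minors strictly positive."""
    return all(det([row[:k] for row in M[:k]]) > tol
               for k in range(1, len(M) + 1))

xs = [0.5, 1.0, 2.0, 3.5]
gram = [[1.0 / (a + b) for b in xs] for a in xs]   # {f(x_i + x_j)} with f(x) = 1/x
```

For distinct positive sample points this matrix (a Cauchy matrix) is in fact strictly positive definite.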
We remark that the hypothesis of continuity (or at least something weaker, such as measurability or local boundedness) in Theorem 6.2(a) is essential, even in the simplest case V = R and C = (0, ∞). Indeed, using the axiom of choice it can easily be shown [2, pp. 35-36, 39] that there exist discontinuous solutions to the functional equation ρ(x+ y) = ρ(x)ρ(y) for x, y ∈ (0, ∞), and any such function is automatically positive-definite in the semigroup sense. However, any such function is necessarily non-Lebesgue-measurable and everywhere locally unbounded [2, pp. 34-35, 37-39].
Theorem 6.2 is actually a special case of a more general theorem for open convex sets that need not be cones. We begin with the relevant definition [61, p. x]:

Definition 6.3 Let V be a real vector space. If C ⊆ V is a convex set, then a function f: C → R is termed positive-definite in the convex-set sense if for all n ≥ 1 and all x_1, . . . , x_n ∈ C, the matrix {f((x_i + x_j)/2)}_{i,j=1}^n is positive-semidefinite. More generally, if C ⊆ V + iV is a conjugation-invariant convex set, then a function f: C → C is termed positive-definite in the involutive-convex-set sense if for all n ≥ 1 and all x_1, . . . , x_n ∈ C, the matrix {f((x_i + x̄_j)/2)}_{i,j=1}^n is positive-semidefinite.

Note that if C is in fact a convex cone, then a function f: C → R is positive-definite in the convex-set sense if and only if it is positive-definite in the semigroup sense. So this concept is a genuine generalization of the preceding one.

Theorem 6.4 Let V be a finite-dimensional real vector space, let C be an open convex set in V, and let f: C → R. Then the following are equivalent:

(a) f is continuous and positive-definite in the convex-set sense.
(b) f extends to an analytic function on the tube C + iV that is positive-definite in the involutive-convex-set sense.
(c) There exists a positive measure µ on V* satisfying (6.3), i.e. f(x) = ∫_{V*} e^{−⟨ℓ,x⟩} dµ(ℓ), for all x ∈ C.
Moreover, in this case the measure µ is unique, and the analytic extension to C + iV is given by (6.3).
Theorem 6.4 was first proven by Devinatz [43], using the spectral theory of commuting unbounded self-adjoint operators on Hilbert space (he gives details for dim V = 2 but states that the methods work in any finite dimension); see also Akhiezer [4] for the special case in which C is a Cartesian product of open intervals. A detailed alternative proof, based on studying positive-definiteness on convex sets of rational numbers as an intermediate step [11], has been given by Glöckner [61, Proposition 18.7 and Theorem 18.8], who also gives generalizations to infinite-dimensional spaces V and to operator-valued positive-definite functions. See also Shucker [110, Theorem 4 and Corollary] and Glöckner [61, Theorem 18.8] for the very interesting extension to convex sets C that are not necessarily open (but have nonempty interior): in this latter case the representation (6.3) does not imply the continuity of f on C, but only on line segments (or more generally, closed convex hulls of finitely many points) within C. But with this modification the equivalence (a′) ⇐⇒ (c) holds.
Surprisingly, we have been unable to find in the literature any complete proof of Theorem 6.2 except as a corollary of the more general Theorem 6.4. But see [61,Theorem 16.6] for a version of Theorem 6.2 for the subclass of positive-definite functions that are α-bounded with respect to a "tame" absolute value α.
It would be interesting to try to find simpler proofs of Theorems 6.2 and 6.4.
Powers of the determinant on a Euclidean Jordan algebra
We can now deduce analogues of Theorems 1.3 and 1.4 in which complete monotonicity is replaced by positive-definiteness in the semigroup sense. For brevity we state only the abstract result in terms of Euclidean Jordan algebras. The "converse" half of this result constitutes an interesting strengthening of the corresponding half of Theorem 1.4; we will apply it in Example 6.6.
Theorem 6.5 is an immediate consequence of facts about Riesz distributions - namely, the Laplace-transform formula (4.22) and Theorem 4.8 - together with Theorem 6.2. Indeed, the proof of Theorem 6.5 is essentially identical to that of Theorem 1.4, but using Theorem 6.2 in place of the Bernstein-Hausdorff-Widder-Choquet theorem. The point, quite simply, is that our proof of the "converse" half of Theorem 1.4 used only the failure of positivity of the Riesz distribution, not any failure to be supported on the closed dual cone (indeed, it is always supported there); so it proves Theorem 6.5 as well.

Application to graphs and matroids
Graphs
Let G = (V, E) be a finite undirected graph with vertex set V and edge set E; in this paper all graphs are allowed to have loops and multiple edges unless explicitly stated otherwise. Now let x = {x_e}_{e∈E} be a family of indeterminates indexed by the edges of G. If G is a connected graph, we denote by T_G(x) the generating polynomial of spanning trees in G, namely

T_G(x) = Σ_{T ∈ T(G)} Π_{e ∈ T} x_e,

where T(G) denotes the family of edge sets of spanning trees in G. If G is disconnected, we define T_G(x) to be the product of the spanning-tree polynomials of its connected components. Otherwise put, T_G(x) is in all cases the generating polynomial of maximal spanning forests in G. This is a slightly nonstandard definition (the usual definition would put T_G ≡ 0 if G is disconnected), but it is convenient for our purposes and is natural from a matroidal point of view (see below). In order to avoid any possible misunderstanding, we have inserted in Theorems 1.1 and 1.1′ and Corollary 1.8 the word "connected", so that the claims made in the Introduction will be true on either interpretation of T_G. Please note that, in our definition, T_G is always strictly positive on (0, ∞)^E, because the set of maximal spanning forests is nonempty.
Note also that, on either definition of T_G, loops in G (if any) play no role in T_G. And it goes without saying that T_G is a multiaffine polynomial, i.e. of degree at most 1 in each x_e separately. If e is an edge of G, the spanning-tree polynomial of G can be related to that of the deletion G \ e and the contraction G/e: if e is neither a bridge nor a loop, then

T_G(x) = T_{G\e}(x_{≠e}) + x_e T_{G/e}(x_{≠e}) .

The fact that T_{G\e} equals T_{G/e} (rather than equalling zero) when e is a bridge is a consequence of our peculiar definition of T_G. Now let us take an analytic point of view, so that the indeterminates x_e will be interpreted as real or complex variables.
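As a quick sanity check of this deletion-contraction relation, the sketch below (plain Python, written for this text and not taken from the paper) computes T_G numerically both by brute-force enumeration of maximal spanning forests and by deletion-contraction, with the conventions above: a bridge lies in every maximal forest, and loops are simply discarded.

```python
import itertools

def n_components(verts, edges):
    """Number of connected components of (verts, edges), via union-find."""
    parent = {v: v for v in verts}
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for u, v in edges:
        parent[find(u)] = find(v)
    return len({find(v) for v in verts})

def tree_poly_brute(edges, w):
    """T_G(w): sum over maximal spanning forests of the product of weights."""
    verts = {a for e in edges for a in e}
    c = n_components(verts, edges)
    k = len(verts) - c                      # size of a maximal spanning forest
    total = 0.0
    for S in itertools.combinations(range(len(edges)), k):
        sub = [edges[i] for i in S]
        if n_components(verts, sub) == c:   # k edges, c components => forest
            p = 1.0
            for i in S:
                p *= w[i]
            total += p
    return total

def connected(u, v, edges):
    """Is u joined to v by a path in the given edge list?"""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    seen, stack = {u}, [u]
    while stack:
        x = stack.pop()
        if x == v:
            return True
        for y in adj.get(x, []):
            if y not in seen:
                seen.add(y)
                stack.append(y)
    return False

def tree_poly_dc(edges, w):
    """T_G by deletion-contraction: T_G = T_{G\\e} + x_e T_{G/e} when e is
    neither a bridge nor a loop; a bridge lies in every maximal forest, so
    there T_G = x_e T_{G/e}; loops are discarded."""
    pairs = [(e, wt) for e, wt in zip(edges, w) if e[0] != e[1]]
    if not pairs:
        return 1.0                          # edgeless graph: T_G = 1
    (u, v), wt = pairs[0]
    rest = [e for e, _ in pairs[1:]]
    wr = [x for _, x in pairs[1:]]
    merged = [(u if a == v else a, u if b == v else b) for a, b in rest]
    if connected(u, v, rest):               # e is not a bridge
        return tree_poly_dc(rest, wr) + wt * tree_poly_dc(merged, wr)
    return wt * tree_poly_dc(merged, wr)    # e is a bridge

# Cayley's formula: K4 has 4^{4-2} = 16 spanning trees.
K4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
assert tree_poly_brute(K4, [1.0] * 6) == 16.0
w = [0.5, 1.5, 2.0, 0.7, 1.1, 3.0]
assert abs(tree_poly_brute(K4, w) - tree_poly_dc(K4, w)) < 1e-9
```

Note that both routines return 1 for an edgeless graph and agree on disconnected graphs, matching the maximal-spanning-forest convention used here.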
Definition 7.1 For each β > 0, we denote by G_β the class of graphs G for which T_G^{−β} is completely monotone on (0, ∞)^E.

The naturality of the classes G_β is illustrated by the following easy but fundamental result:

Proposition 7.2 Each class G_β is closed under taking minors; it is also closed under disjoint unions and gluing at a cut vertex.

Proof. Deletion of a non-bridge edge e corresponds to taking x_e ↓ 0. Contraction of a non-loop edge e corresponds to dividing by x_e and taking x_e ↑ +∞. Both of these operations preserve complete monotonicity. Deletion of a bridge has the same effect as contracting it, in our peculiar definition of T_G. Contraction of a loop is equivalent to deleting it (but loops play no role in T_G anyway). Isolated vertices play no role in T_G. This proves closure under taking minors.
If G is obtained from G_1 and G_2 either by disjoint union or by gluing at a cut vertex, then T_G = T_{G_1} T_{G_2} (on disjoint sets of variables) in our definition of T_G; this again preserves complete monotonicity. Proposition 7.2 illustrates the principal reason for allowing arbitrary constants c > 0 (rather than just c = 1) in Theorem 1.1 and subsequent results: it leads to a minor-closed class of graphs. This, in turn, allows for characterizations that are necessary as well as sufficient. A similar situation arises in studying the negative-correlation property for a randomly chosen basis of a matroid. If only the "uniformly-at-random" situation is considered (i.e., element weights x = 1), then the resulting class of matroids is not minor-closed, and closure under minors has to be added by hand, leading to the class of so-called balanced matroids [54]. But it then turns out that the class of balanced matroids is not closed under taking 2-sums [38]. If, by contrast, one demands negative correlation for all choices of element weights x > 0, then the resulting class (the so-called Rayleigh matroids) is automatically closed under taking minors (by the same x_e → 0 and x_e → ∞ argument as in Proposition 7.2). Moreover, it turns out to be closed under 2-sums as well [38].
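Complete monotonicity of T_G^{−β} on (0, ∞)^E means that every mixed partial derivative of order k has sign (−1)^k there. As a purely numerical illustration (a sketch written for this text, not code from the paper), one can estimate a few low-order mixed partials of T_{K3}^{−1/2} by central differences; the triangle K_3 is series-parallel, so it lies in G_β for all β ≥ 1/2.

```python
import itertools
import math

def mixed_diff(f, x, idxs, h=0.05):
    """Central-difference estimate of the mixed partial derivative of f at x
    with respect to the distinct coordinates listed in idxs."""
    total = 0.0
    for signs in itertools.product((1.0, -1.0), repeat=len(idxs)):
        y = list(x)
        for i, s in zip(idxs, signs):
            y[i] += s * h
        total += math.prod(signs) * f(y)
    return total / (2.0 * h) ** len(idxs)

def f(x):
    # T_{K3}(x)^(-1/2), with T_{K3} = x1*x2 + x1*x3 + x2*x3
    x1, x2, x3 = x
    return (x1 * x2 + x1 * x3 + x2 * x3) ** (-0.5)

# Sign pattern (-1)^k for mixed partials of order k = 1, 2, 3:
for idxs in [(0,), (1,), (0, 1), (0, 2), (0, 1, 2)]:
    d = mixed_diff(f, [1.0, 1.3, 0.7], idxs)
    assert (-1.0) ** len(idxs) * d > 0.0, (idxs, d)
```

This checks only a handful of derivatives at one point, of course; it is an illustration of the definition, not a verification of membership in G_β.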
The very important property of closure under 2-sums holds also in our context. To see this, let us first recall the definitions of parallel connection, series connection and 2-sum of graphs [91, Section 7.1], and work out how T G transforms under these operations.
For i = 1, 2, let G_i = (V_i, E_i) be a graph and let e_i be an edge of G_i; it is convenient (though not absolutely necessary) to assume that e_i is neither a loop nor a bridge in G_i. Let us furthermore choose an orientation e_i = x_i → y_i for the edge e_i. (To avoid notational ambiguity, it helps to assume that the sets V_1, V_2, E_1, E_2 are all disjoint.) Then the parallel connection of (G_1, e_1) with (G_2, e_2) is the graph (G_1, e_1) ∥ (G_2, e_2) obtained from the disjoint union G_1 ∪ G_2 by identifying x_1 with x_2, y_1 with y_2, and e_1 with e_2. [Equivalently, it is obtained from the disjoint union (G_1 \ e_1) ∪ (G_2 \ e_2) by identifying x_1 with x_2 (call the new vertex x), y_1 with y_2 (call the new vertex y), and then adding a new edge e from x to y.] The series connection of (G_1, e_1) with (G_2, e_2) is the graph (G_1, e_1) ⊲⊳ (G_2, e_2) obtained from the disjoint union (G_1 \ e_1) ∪ (G_2 \ e_2) by identifying x_1 with x_2 and adding a new edge e from y_1 to y_2. The 2-sum of (G_1, e_1) with (G_2, e_2) is the graph (G_1, e_1) ⊕_2 (G_2, e_2) obtained from the parallel connection (G_1, e_1) ∥ (G_2, e_2) by deleting the edge e that arose from identifying e_1 with e_2, or equivalently from the series connection (G_1, e_1) ⊲⊳ (G_2, e_2) by contracting the edge e.
To calculate the spanning-tree polynomial of a parallel connection, series connection or 2-sum, it is convenient to change slightly the notation and suppose that e_1 and e_2 have already been identified (let us call this common edge e), so that E_1 ∩ E_2 = {e}. It is then not difficult to see [91, Proposition 7.1.13] that the spanning-tree polynomial of a parallel connection G_1 ∥_e G_2 is given by

T_{G_1 ∥_e G_2} = T_{G_1\e} T_{G_2/e} + T_{G_1/e} T_{G_2\e} + x_e T_{G_1/e} T_{G_2/e} , (7.3)

while that of a series connection G_1 ⊲⊳_e G_2 is

T_{G_1 ⊲⊳_e G_2} = T_{G_1\e} T_{G_2\e} + x_e ( T_{G_1\e} T_{G_2/e} + T_{G_1/e} T_{G_2\e} ) . (7.4)

(All the spanning-tree polynomials on the right-hand sides are of course evaluated at x_{≠e}.) The spanning-tree polynomial of a 2-sum G_1 ⊕_{2,e} G_2 is therefore

T_{G_1 ⊕_{2,e} G_2} = T_{G_1\e} T_{G_2/e} + T_{G_1/e} T_{G_2\e} . (7.5)

Proposition 7.3 Each class G_β is closed under parallel connection and under 2-sums.

Proof. Closure under parallel connection is an immediate consequence of Proposition 3.5 and the formula (7.3) for parallel connection. Since the 2-sum is obtained from the parallel connection by deletion, closure under 2-sum then follows from Proposition 7.2.
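These identities can be checked concretely on the smallest interesting example (an illustrative sketch written for this text, not code from the paper): take G_1 = G_2 = K_3 sharing the edge e, so the parallel connection is the "diamond" graph and the 2-sum is the 4-cycle. For K_3, deleting e leaves a path (T_{K3\e} = x_a x_b) and contracting e leaves a pair of parallel edges (T_{K3/e} = x_a + x_b). The identities used below are stated as T_∥ = T_{G1\e} T_{G2/e} + T_{G1/e} T_{G2\e} + x_e T_{G1/e} T_{G2/e} and T_{⊕2} = T_{G1\e} T_{G2/e} + T_{G1/e} T_{G2\e}.

```python
import itertools

def n_components(verts, edges):
    """Connected components via union-find."""
    parent = {v: v for v in verts}
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for u, v in edges:
        parent[find(u)] = find(v)
    return len({find(v) for v in verts})

def tree_poly(edges, w):
    """Brute-force spanning-tree polynomial of a connected graph."""
    verts = {a for e in edges for a in e}
    k = len(verts) - 1
    total = 0.0
    for S in itertools.combinations(range(len(edges)), k):
        if n_components(verts, [edges[i] for i in S]) == 1:
            p = 1.0
            for i in S:
                p *= w[i]
            total += p
    return total

# G1 = G2 = K3 sharing the edge e = (0,1); non-shared edges weighted a,b,c,d.
we, a, b, c, d = 0.9, 0.5, 1.5, 2.0, 0.7
diamond = [(0, 1), (0, 2), (2, 1), (0, 3), (3, 1)]   # parallel connection
c4 = [(0, 2), (2, 1), (0, 3), (3, 1)]                # 2-sum (the 4-cycle)

T1_del, T1_con = a * b, a + b        # T_{K3 \ e} and T_{K3 / e}
T2_del, T2_con = c * d, c + d

lhs_par = tree_poly(diamond, [we, a, b, c, d])
rhs_par = T1_del * T2_con + T1_con * T2_del + we * T1_con * T2_con
assert abs(lhs_par - rhs_par) < 1e-9

lhs_2sum = tree_poly(c4, [a, b, c, d])
rhs_2sum = T1_del * T2_con + T1_con * T2_del
assert abs(lhs_2sum - rhs_2sum) < 1e-9
```

At unit weights this also recovers the tree counts 8 (diamond) and 4 (4-cycle).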
2. Since G β is closed under 0-sums (disjoint unions), 1-sums (gluing at a cut vertex) and 2-sums (essentially gluing at an edge), it is natural to ask whether it is also closed under 3-sums (gluing along triangles). We do not know the answer. In particular, K 4 ∈ G β for all β ≥ 1 by Corollary 1.5, but as noted in the discussion after Problem 1.13 ′ , we do not know whether K 5 − e = K 4 ⊕ 3 K 4 belongs to G β for β ∈ (1, 3 2 ).
It is a well-known (and easy) result that any minor-closed class of graphs is of the form Ex(F ) = {G: G does not contain any minor from F } (7.6) for some family F of "excluded minors"; indeed, the minimal choice of F consists of those graphs that do not belong to the class in question but whose proper minors all do belong to the class. (Here we consider isomorphic graphs to be identical, or alternatively take only one representative from each isomorphism class.) In one of the deepest and most difficult theorems of graph theory, Robertson and Seymour [103] sharpened this result by proving that any minor-closed class of graphs is of the form Ex(F ) for some finite family F . Therefore, each of our classes G_β can be characterized by a finite family of excluded minors. One of the goals of this paper -- alas, incompletely achieved -- is to determine these excluded minors.
Matroids
The foregoing considerations have an immediate generalization to matroids. If M is a matroid on the ground set E, we write B_M(x) for its basis generating polynomial, and for each β > 0 we denote by M_β the class of matroids M for which B_M^{−β} is completely monotone on (0, ∞)^E.

Proposition 7.6 Each class M_β is closed under taking minors and under direct sums.

Proof. The proof is identical to that of Proposition 7.2 if one substitutes "element" for "edge", "coloop" for "bridge", and "direct sum" for either form of union.
We refer to [91, Section 7.1] for the definitions of parallel connection, series connection and 2-sum of matroids, which generalize those for graphs. The upshot [91, Proposition 7.1.13] is that the formulae (7.3)-(7.5) for the spanning-tree polynomials of graphs extend unchanged to the basis generating polynomials of matroids. We therefore have:

Proposition 7.7 Each class M_β is closed under parallel connection and under 2-sums.
Since each M β is a minor-closed class, we can once again seek a characterization of M β by excluded minors. However, in this case no analogue of the Robertson-Seymour theorem exists, so we have no a priori guarantee of finiteness of the set of excluded minors. Indeed, there exist minor-closed classes of matroids having an infinite family of excluded minors [91, Exercise 6.5.5(g)]; and in fact, for any infinite field F , the class of F -representable matroids has infinitely many excluded minors [91,Theorem 6.5.17].
We suspect that the classes M β are not closed under duality in general. For instance, Corollary 1.10 shows that U 2,5 ∈ M β if and only if β ≥ 3/2; but we suspect (Conjecture 1.11) that U 3,5 ∈ M β if and only if β ≥ 1. On the other hand, we shall show in Theorem 7.13 that M β for 1/2 < β < 1 consists precisely of the graphic matroids of series-parallel graphs -a class that is closed under duality.
Partial converse to Corollary 1.8
It was remarked at the end of Section 4.3 that if β does not lie in the set described in Theorem 1.3, then the map A → (det A)^{−β} is not completely monotone on any nonempty open convex subcone of the cone of positive-definite matrices; and in particular, if the matrices A_1, ..., A_n span Sym(m, R) or Herm(m, C), then the determinantal polynomial (1.4)/(4.8) does not have P^{−β} completely monotone on (0, ∞)^n. The spanning-tree polynomial of the complete graph K_{m+1} provides an example of this situation; and it turns out that there are two other cases arising from complex-unimodular matroids. The following result thus provides a (very) partial converse to Corollary 1.8, where part (a) concerns regular [= real-unimodular] matroids and part (b) concerns complex-unimodular matroids: is easily seen to be complex-unimodular and to represent AG(2, 3). A tedious computation (or an easy one using Mathematica or Maple) now shows that the matrices A_1, ..., A_9 are linearly independent, hence span the 9-dimensional space Herm(3, C).
Let us remark that the cases enumerated in Proposition 7.9 exhaust the list of regular or complex-unimodular matroids (of rank r ≥ 2) for which the matrices A_1, ..., A_n span Sym(r, R) or Herm(r, C), respectively. Indeed, it is known that a simple rank-r matroid that is regular (or, more generally, is binary with no F_7 minor) can have at most r(r + 1)/2 elements; furthermore, the unique matroid attaining this bound is M(K_{r+1}) [91, Proposition 14.10.3]. See also [12] for an intriguing proof that uses the matrices A_1, ..., A_n (but over GF(2) rather than C). Likewise, it is known [93, Theorem 2.1] that a simple rank-r matroid that is complex-unimodular can have at most (r^2 + 3r − 2)/2 elements if r ≠ 3, or 9 elements if r = 3; furthermore, the unique matroid attaining this bound is T_r (defined in [93]) when r ≠ 3, or AG(2, 3) when r = 3. The only cases in which this size reaches dim Herm(r, C) = r^2 are thus r = 1, 2, 3, yielding T_1 = U_{1,1}, T_2 = U_{2,4} and AG(2, 3), respectively.

7.4 Series-parallel graphs (and matroids): Proof of Theorem 1.1′

Before proving Theorem 1.1′, let us prove a similar but simpler theorem concerning the interval 0 < β < 1/2.
Theorem 7.10 Let G be a graph, and let β ∈ (0, 1 2 ). Then the following are equivalent: (a) G ∈ G β . (b) G can be obtained from a forest by parallel extensions of edges (i.e., replacing an edge by several parallel edges) and additions of loops.
(c) G has no K 3 minor.
Proof. The equivalence of (b) and (c) is an easy graph-theoretic exercise. If G is obtained from a forest by parallel extensions of edges and additions of loops, then T G (x) is a product of factors of the form x e 1 + . . . + x e k (where e 1 , . . . , e k are a set of parallel edges in G), so that T −β G is completely monotone on (0, ∞) E for all β ≥ 0. Therefore (b) =⇒ (a).
Dave Wagner has pointed out to us that Theorem 7.10 extends easily to matroids: Moreover, these equivalent conditions imply that M ∈ M β ′ for all β ′ > 0.
Proof. Tutte has proven [91, Theorem 10.3.1] that a matroid is graphic if and only if it has no minor isomorphic to U 2,4 , F 7 , F * 7 , M * (K 5 ) or M * (K 3,3 ). Since M(K 3 ) is a minor of the last four matroids on this list, the equivalence of (b) and (c) follows from the graphic case.
Let us now prove the corresponding characterization for 1/2 < β < 1, which is a rephrasing of Theorem 1.1′ and concerns series-parallel graphs. Unfortunately, there seems to be no completely standard definition of "series-parallel graph"; a plethora of slightly different definitions can be found in the literature [47,41,90,91,27]. So let us be completely precise about our own usage: we shall call a loopless graph series-parallel if it can be obtained from a forest by a finite sequence of series and parallel extensions of edges (i.e. replacing an edge by two edges in series or two edges in parallel). We shall call a general graph (allowing loops) series-parallel if its underlying loopless graph is series-parallel.33

So we need to understand how the spanning-tree polynomial T_G(x) behaves under series and parallel extensions of edges. Parallel extension is easy: if G′ is obtained from G by replacing the edge e by a pair of edges e_1 and e_2 in parallel, then

T_{G′}(x_{≠e}, x_{e_1}, x_{e_2}) = T_G(x_{≠e}, x_{e_1} + x_{e_2}) . (7.9)

In other words, two parallel edges with weights x_{e_1} and x_{e_2} are equivalent to a single edge with weight x_{e_1} + x_{e_2}. This is because the spanning trees of G′ are in correspondence with the spanning trees T of G as follows: if T does not contain e, then leave T as is (it is a spanning tree of G′); if T does contain e, then adjoin to T \ e one but not both of the edges e_1 and e_2.
Series extension is only slightly more complicated: if G′ is obtained from G by replacing the edge e by a pair of edges e_1 and e_2 in series, then

T_{G′}(x_{≠e}, x_{e_1}, x_{e_2}) = (x_{e_1} + x_{e_2}) T_G(x_{≠e}, x_{e_1} x_{e_2} / (x_{e_1} + x_{e_2})) . (7.10)

In other words, two series edges with weights x_{e_1} and x_{e_2} are equivalent to a single edge with weight x_{e_1} x_{e_2}/(x_{e_1} + x_{e_2}), together with a prefactor that clears the resulting denominator. This is because the spanning trees of G′ are in correspondence with the spanning trees T of G as follows: if T does not contain e, then adjoin to T \ e one but not both of the edges e_1 and e_2; if T does contain e, then adjoin to T \ e both of the edges e_1 and e_2. Since T_G(x) = T_{G\e}(x_{≠e}) + x_e T_{G/e}(x_{≠e}), where T_{G\e} (resp. T_{G/e}) counts the spanning trees of G that do not (resp. do) contain e (the latter without the factor x_e), this proves (7.10). Let us remark that the parallel and series laws for T_G(x) are precisely the laws for combining electrical conductances in parallel or series. This is no accident, because as Kirchhoff [76] showed a century-and-a-half ago, the theory of linear electrical circuits can be written in terms of spanning-tree polynomials (see e.g. [35] for a modern treatment). Let us also remark that the parallel and series laws for T_G(x) are limiting cases of the parallel and series laws for the multivariate Tutte polynomial Z_G(q, v), obtained when q → 0 and v is infinitesimal: see [112] for a detailed explanation.
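The two reduction laws can be verified directly on the smallest interesting example (a quick sketch written for this text, not code from the paper): start from K_3 and replace one edge either by two parallel edges (giving a multigraph on three vertices) or by two edges in series (giving the 4-cycle C_4). The series law is exactly the conductance rule x_1 x_2/(x_1 + x_2), up to the denominator-clearing prefactor.

```python
import random

def T_K3(x1, x2, x3):
    # any two of the three edges of a triangle form a spanning tree
    return x1 * x2 + x1 * x3 + x2 * x3

def T_K3_parallel(x1, x2, y1, y2):
    # third edge doubled: a spanning tree may use at most one copy of the pair
    return x1 * x2 + (x1 + x2) * (y1 + y2)

def T_C4(x1, x2, y1, y2):
    # third edge split in series: spanning trees of C4 omit exactly one edge
    return x1 * x2 * y1 + x1 * x2 * y2 + x1 * y1 * y2 + x2 * y1 * y2

rng = random.Random(0)
for _ in range(100):
    x1, x2, y1, y2 = (rng.uniform(0.1, 2.0) for _ in range(4))
    # parallel law (7.9): the two weights simply add
    assert abs(T_K3_parallel(x1, x2, y1, y2) - T_K3(x1, x2, y1 + y2)) < 1e-9
    # series law (7.10): conductance-style combination with a prefactor
    lhs = T_C4(x1, x2, y1, y2)
    rhs = (y1 + y2) * T_K3(x1, x2, y1 * y2 / (y1 + y2))
    assert abs(lhs - rhs) < 1e-9
```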
We are now ready to state and prove the main result of this section: Theorem 7.12 Let G be a graph, and let β ∈ ( 1 2 , 1). Then the following are equivalent: (a) G ∈ G β . (b) G is series-parallel.
(c) G has no K 4 minor.
Please note that Theorems 7.10 and 7.12 together imply Theorem 1.1 ′ .
Proof of Theorem 7.12. The equivalence of (b) and (c) is a well-known graph-theoretic result [45, Exercise 7.30 and Proposition 1.7.4] (see also [47,90]). Now let G be a series-parallel graph: this means that G can be obtained from a forest by series and parallel extensions of edges and additions of loops. As shown in Theorem 7.10, if G is a forest, then T_G^{−β} is completely monotone for all β ≥ 0. Parallel extension (7.9) obviously preserves complete monotonicity. By Lemma 3.9, series extension (7.10) preserves complete monotonicity whenever β ≥ 1/2. Finally, additions of loops do not affect T_G. Therefore, every series-parallel graph G belongs to the class G_β for all β ≥ 1/2. Conversely, Proposition 7.9(a) tells us that K_4 ∉ G_β for β ∈ (1/2, 1). Since G_β is a minor-closed class, this proves (a) =⇒ (c).

Proposition 7.14 Fix r ≥ 1, and let M be any matroid (on the ground set E) that can be obtained from regular matroids of rank at most r by parallel connection, series connection, direct sum, deletion and contraction. Then B_M^{−β} is completely monotone on (0, ∞)^E for β = 0, 1/2, 1, 3/2, ... and for all real β ≥ (r − 1)/2.
The proof is essentially identical to the previous one, but uses Corollary 1.8(a) in place of Corollary 1.5 and Propositions 7.6-7.8 in place of Propositions 7.2-7.4. And we also have:

Proposition 7.15 Fix r ≥ 1, and let M be any matroid (on the ground set E) that can be obtained from regular matroids of rank at most 2r − 1 and complex-unimodular matroids of rank at most r by parallel connection, series connection, direct sum, deletion and contraction. Then B_M^{−β} is completely monotone on (0, ∞)^E for β = 0, 1, 2, 3, ... and for all real β ≥ r − 1.
The proof is again identical, but uses both parts of Corollary 1.8 instead of only part (a). Propositions 1.12, 7.14 and 7.15 give a rather abstract characterization of the class of graphs or matroids that they handle, and so it is of interest to give a more explicit characterization. Let us start with the graph case. We say that a graph G is minimally 3-connected if G is 3-connected but, for all edges e of G, the graph G \ e is not 3-connected. We then have: Proposition 7.16 Let G p be the class of graphs obtained from K p by disjoint union, gluing at a cut vertex, series and parallel connection, deletion and contraction of edges, and deletion of vertices. Then, for p ≥ 3, G p is minor-closed, and the minimal excluded minors for G p are the minimally 3-connected graphs on p + 1 vertices.
Proof. Let H_p be the class of graphs with no minimally 3-connected minor on p + 1 vertices. It is clear that G_p is minor-closed, so our aim is to prove that G_p = H_p.
We first show that G p ⊆ H p . It is clear that H p is minor-closed, and is closed under disjoint union ("0-sum") and gluing at a cut vertex ("1-sum"). It is easily checked that if a graph G is obtained by parallel connection of graphs G 1 and G 2 , then any 3-connected minor of G is a minor of either G 1 or G 2 ; it follows that H p is closed under parallel connection. Since series connection can be obtained by combining the other operations (exploiting K 3 ∈ H p ), we conclude that H p is also closed under series connection. Finally, we note that K p ∈ H p , and as G p is the closure of {K p } under these operations, we see that G p ⊆ H p .
We now show that H p ⊆ G p . For suppose otherwise, and choose G ∈ H p \ G p with the minimal number of vertices. Clearly G is 2-connected and has at least p + 1 vertices. If G is not 3-connected, then G has a cutset {x, y} and there is a decomposition G = G 1 ∪ G 2 where G 1 and G 2 are connected graphs with at least 3 vertices and V (G 1 ) ∩ V (G 2 ) = {x, y}. Now let G ′ 1 , G ′ 2 be the graphs obtained from G 1 , G 2 by adding the edge xy if not present. Then G ′ 1 and G ′ 2 are both minors of G (obtained by contracting the other side to a single edge). As H p is minor-closed, we have G ′ 1 , G ′ 2 ∈ H p . Therefore, by minimality of G we have G ′ 1 , G ′ 2 ∈ G p . But G can be obtained by taking a parallel connection of G ′ 1 and G ′ 2 along xy and then deleting the edge xy if necessary, yielding G ∈ G p , contrary to hypothesis. We conclude that G must be 3-connected.
We now use the fact that every 3-connected graph other than K 4 has an edge that can be contracted to produce another 3-connected graph [45,Lemma 3.2.4]. Contracting suitable edges, we see that G has a 3-connected minor on p + 1 vertices, which in turn (deleting edges if necessary) contains a minimally 3-connected minor on p + 1 vertices. But this contradicts G ∈ H p .
The matroid case is analogous. We refer to [91, Chapter 8] for the definitions of 3-connectedness and minimal 3-connectedness for matroids. The following result and its proof are due to Oxley [92]:

Proposition 7.17 Let F be a class of matroids that is closed under minors, direct sums and 2-sums. Let r ≥ 1, let F_r = {M ∈ F : rank(M) ≤ r}, and let F′_r denote the class of matroids that can be obtained from F_r by direct sums and 2-sums. Then

Proof. Consider first the case r = 1. The unique minimally 3-connected matroid of rank 2 is U_{2,3}. If U_{2,3} ∈ F, then the result clearly holds; if U_{2,3} ∉ F, then F′_1 = F and the result again holds. Now assume r ≥ 2, and let H_r denote the class of matroids in F that have no F•_{r+1} minor. Clearly H_r is minor-closed, and our goal is to show that F′_r = H_r. We first show that F′_r ⊆ H_r. It is easy to see [91, Propositions 4.2.20 and 8.3.5] that if a matroid M is either a direct sum or a 2-sum of matroids M_1 and M_2, then any 3-connected minor of M is isomorphic to a minor of either M_1 or M_2; it follows that H_r is closed under direct sum and 2-sum. Since F_r ⊆ H_r, and F′_r is the closure of F_r under direct sum and 2-sum, we see that F′_r ⊆ H_r. We now show that H_r ⊆ F′_r. For suppose otherwise, and choose M ∈ H_r \ F′_r with the minimal number of elements. It is not hard to see that M must be 3-connected.34 Note also that rank(M) > r since M ∈ F \ F′_r. We now use Tutte's Wheels and Whirls Theorem [91, Theorem 8.8.4], which says that every 3-connected matroid N that is not a wheel or a whirl has an element e such that either N \ e or N/e is 3-connected (or both). On the other hand, if N is a wheel or a whirl, then by contracting a rim element and deleting one of the spokes adjacent to that rim element, we obtain a 3-connected minor N′ of N such that rank(N′) = rank(N) − 1. So we apply this argument repeatedly to M until we obtain a 3-connected minor M′ of M with rank(M′) = r + 1. We then delete elements from M′ while maintaining 3-connectedness until we arrive at a minimally 3-connected matroid M′′. Therefore M has a minor M′′ ∈ F•_{r+1}, contradicting the hypothesis that M ∈ H_r.

34 If M were a direct sum or a 2-sum of matroids M_1 and M_2, each having at least one element, then M_1 and M_2 would be minors of M [91, Proposition 7.1.21], hence M_1, M_2 ∈ H_r because H_r is minor-closed. But M_1 and M_2 cannot both belong to F′_r, because F′_r is closed under direct sum and 2-sum and M ∉ F′_r. Therefore M_1 or M_2 would be a counterexample to the minimality of M.

Proposition 7.17 with F = the class of regular matroids gives an excluded-minor characterization of the class of matroids handled by Proposition 7.14. We leave it as a problem for readers more expert in matroid theory than ourselves to provide an analogous characterization for Proposition 7.15.
Development of an Oxide Layer on Al 6061 Using Plasma Arc Electrolytic Oxidation in Silicate-Based Electrolyte
The plasma electrolytic method is one of the techniques which can be used to form an oxide layer on the surface of a substrate material. This technique employs ion exchange by developing an electrolytic arc between the cathode and the anode. The strong bonding at high temperatures promotes the formation of an oxide layer on the metal surface. The electrolyte composition has a strong influence on the metal surface characteristics; hence, the addition of certain nanoparticles in an adequate amount can improve surface properties such as wear and corrosion resistance. In this study, a plasma electrolytic technique based on a direct current and voltage approach is investigated. The plasma electrolytic technique is utilized to develop an oxide layer on an Al 6061 alloy substrate using a DC voltage input in a silicate-based electrolyte. The substrate surface is then investigated for the thickness of the oxide layer formed and the amount of carbon absorbed, using SEM and XRD analysis. The experimentation and the study of the results confirmed the presence of a substantial oxide layer on the surface. The influence of the process parameters (direct voltage and electrode distance) is studied through the significant changes obtained in the weight percentages of elements such as C, Al, Si, and O, as supported by SEM and EDAX analysis. Most changes occurred at 197 V and in the current range of 0.3 A to 1 A. These findings can be used to further improve the mechanical properties of the metal alloy using the plasma arc oxidation method.
Introduction
Plasma electrolytic oxidation (PEO) can produce a dominant crystalline oxide layer on the substrate alloy surface with a specific electrolyte composition. The use of mild alkaline electrolytes makes the process more environmentally friendly than hard anodizing in a strongly acidic environment. Also, the application of high voltage during the PEO process, with the local plasma, produces the electrical discharges needed to form thick coatings and achieve good microstructural control [1]. During the electrochemical reaction, sparks induced by local discharges last from a few to a few hundred microseconds [2]. Oxide layers with a wide range of thicknesses can be generated quickly and efficiently on the surfaces of metal components of various shapes and sizes [3]. These coatings possess low porosity and excellent interfacial adhesion [4]. Since the transportation sector has adopted lightweight metal alloys to manufacture its components and requires better surface plating, plasma electrolytic oxidation (PEO) has received significant attention [5]. For this purpose, researchers have expanded the technical scope of PEO by diversifying the development pathways and including additional precursors.
Almost a decade after its first successful use, graphene remains a material for various technological applications due to its unique characteristics [3]. Researchers recognize graphene as a super-material because of its high strength, high surface-to-mass ratio, and superconducting properties [6]. However, it is yet to be proved as a viable electronics material. Graphene is used as a scaffold in cell-tissue engineering, as an active electrode in supercapacitors to power implantable biomedical devices, and as detectors in biosensors [7]. High-temperature studies on graphene allow the researchers to understand the nanostructures' stability, behavior, and interactions with the substrate [8]. Analyzing these interactions aids in understanding the fundamental processes that control graphene development at high temperatures and thus can be explored to modify its properties [9].
Different studies on graphene show that the absorption of graphene starts at a temperature of 40 °C. Al 6061 exhibits improved tribological characteristics in a silicate-based aqueous solution with graphene as an ingredient [10]. The graphene oxide (GO) particles are dispersed in the solution at a specified concentration. Results revealed that the porous structure of the coating had a higher alumina content, which strengthened the mechanical characteristics of the material [11]. The addition of GO increased the surface microhardness and maintained a low friction coefficient with good corrosion resistance [10]. The polarization resistance was also enhanced. Using Hummers' method, the magnesium ions were functionalized with increased corrosion resistance, as the current density and the negative polarization loop decreased. Ionic-type absorption of GO at the surface coating enhanced the barrier characteristics, resulting in higher R values than for the uncoated samples [12]. A modified PEO process was used to obtain Al2O3 ceramic coatings on the AA2024 alloy surface, with emphasis on the adherence of the coating to the substrate joining face. The results indicated that the coatings improved surface roughness, hardness, and thickness [13].
Spark plasma sintering was used to create bimodal grain size Al 7075 alloys with different ratios of coarse and fine grains. Coarse grains dissolve faster in acidic NaCl solution than fine grains because of their larger size, higher alloying element content, and higher second phase area [14]. A higher reduction rate of hydrogen ions led to an increased corrosion rate in the cathodic second phase for the coarse grain structure [15]. The mixture of both grain sizes enhanced the micro-scale electrochemical heterogeneity of the alloy. Hence, the improved mixing percentage of grains in the metal matrix accelerated corrosion in the acidic NaCl solution. The aluminate-based electrolyte is proven to be the best for corrosion resistance due to the formation of volcano-like granules on the surface [11].
The influence of graphene concentration on PEO coatings produced on D16T aluminum alloy in a silicate-based electrolyte was studied. The findings revealed that the morphologies of the coatings differed significantly depending on the manner of graphene incorporation [16]. The coatings, composed of Al2O3 and Al, were split into a porous layer and a thick inner layer. Coating thickness grew non-linearly as graphene content increased [17]. The corrosion resistance of the graphene-containing coating was greatly enhanced. Binary electrolyte additives, such as (NaPO3)6 and H3BO3, were utilized to produce MAO coatings with enhanced thickness and microstructure on Al 6061 alloys [18]. Compared to the basic silicate electrolyte, the results revealed that the combined effect of the binary additions modified the discharge properties and microstructure morphologies of the MAO coatings [19]. It was possible to create a thicker and more durable MAO coating, mostly made of Al2O3 phases [20]. According to the literature, silicate electrolytes are advantageous for the oxidation process. They encourage the development of phases that provide significant adhesive strength between the substrate and the oxide layer [21]. The addition of nanoparticles determines the barrier quality of the oxide layer and contributes to the increased corrosion resistance and microhardness of the surface layers [19,22]. Graphene is well known across different surface hardening processes for improving mechanical characteristics. The silicate-based electrolyte is widely used for the oxidation process due to its alkaline nature [23]. Under a strong electric field, electrophoresis affects the mechanical and tribological properties of a material [24]. Hence, graphene-added silicate-based electrolytes are useful for improving the mechanical characteristics of the Al 6061 alloy by the micro-arc oxidation process [25].
According to the existing literature, the influence of direct current parameters in the plasma arc technique has not been well investigated. As a result, it was selected as the foundation for the experiments in the proposed research work. The addition of graphene to the electrolyte had little effect on the thickness of the oxide layer formed [8,26]. The oxidation process can be optimized further with respect to the other process parameters.
The purpose of this article is to study the effects of the direct current voltage on the PEO process on Al 6061 alloy using a silicate-based electrolyte with Graphene. The effect of the electrolyte composition and processing parameters on the growth of the coating and microstructure properties are investigated. The following sections include significant detail on the experimentations carried out.
Materials and Methods
Spectroscopic analysis of the Al 6061 material (BAIRD-DV6, following the ASTM E 451-14 standard) was conducted to determine the elemental composition, as shown in Table 1. It is observed that the material is within the given specifications.
Mechanics of Oxidation Process
The reactions during the oxidation process play an important role in determining the surface characteristics [27]. The exchange and reaction of Al metal ions during the process significantly influence the process duration and the composition of the material [28]. The dissolution and oxidation of the metal are the prime reactions in the plasma electrolytic oxidation process. The formation of the metal oxide is characterized by the following reactions at the anode and cathode, respectively [29].
At the anode, the electrode is oxidized as the decomposition of the electrolyte elements begins, and the metal oxide layer forms on the anode as per Equation (3).
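Since the anodic reaction ties oxide formation to the charge passed, Faraday's law gives an upper bound on the oxide mass for a given current and time. The sketch below is an illustrative order-of-magnitude estimate written for this text: the current, duration, and 100% current efficiency are assumed values, not measurements from this study.

```python
# Faraday's-law upper bound on the oxide mass formed for a given charge.
# Assumed (illustrative) values: 0.5 A for 10 min at 100% current efficiency.
F = 96485.0           # Faraday constant, C/mol
M_AL2O3 = 101.96      # molar mass of Al2O3, g/mol
I, t = 0.5, 600.0     # assumed current (A) and time (s)

mol_e = I * t / F                   # moles of electrons passed
mol_al = mol_e / 3.0                # Al -> Al(3+) + 3 e-
mass_g = (mol_al / 2.0) * M_AL2O3   # two Al atoms per Al2O3 formula unit
print(f"upper-bound Al2O3 formed: {mass_g * 1e3:.1f} mg")   # prints 52.8 mg
```

In practice the actual oxide mass is well below this bound, since part of the charge is consumed by gas evolution and electrolyte decomposition.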
Configuration of the Experiment
The experiments were conducted in the lab with a configured set-up for micro-arc oxidation. The lab-based set-up consists of the electrolyte, electrodes, a container with asbestos insulation and cooling-water circulation arrangements, a direct current (DC) power supply, a stirrer, and a water inlet and outlet with constant water flow. The DC power source, used as the power input, generates a large amount of heat. The temperature inside the electrolytic container can be kept low using the coolant or water flow around it, as shown in Figure 1.
The electrolytic chamber consists of a stainless-steel container with insulation and cooling water circulating through copper tubes. The electrodes supply DC power up to 200 V. The cooling water maintained the electrolyte temperature at a constant low level. The Al 6061 substrate is the anode, while stainless steel is the cathode. The compositions of the electrolytes NaOH and Na2SiO3 are shown in Table 2.
The geometry of the electrolyte chamber consists of an outer chamber and an inner chamber. The inner chamber comprises stainless steel with internal asbestos insulation 50 mm in thickness, and the outer chamber (550 mm × 400 mm × 400 mm) is a rectangular box. The insulation contains copper tubes for the fluid flow (coolant or water). The cooling arrangement is necessary to keep the temperature at lower values. Asbestos ensures safe handling of the equipment during operation. The stirrer is mounted inside the container during the process. The continuous movement of the electrolyte reduces agglomeration of particle masses.
During the experiments, 6061 Al alloy acts as the substrate, with an alkali silicate-based electrolyte containing graphene as an additive in the PEO process, using a one-variable-at-a-time approach. The coating condition was evaluated for the significant growth layer (i.e., optimized electrolyte concentration, current density, and process duration) on the surface of the substrate. In most earlier reported studies, the electrolyte container itself served as the cathode. The present PEO setup differs in using a separate anode-cathode arrangement. This helps in understanding the effect of the electrode distance parameter and its significance for the arc formation pattern at smaller gaps. A smaller electrode distance is beneficial for strong arc formation, resulting in a significant oxide layer.
During the experimentation, the composition of the electrolyte was kept constant. The primary purpose of the research was to identify the effect of the DC power source and additive absorption on the surface of the Al 6061 alloy during the oxidation process. The oxidation initiates with plasma arc formation when the two electrodes are connected for ion exchange. A stronger arc promotes the metal ion exchange to form an oxide layer on the substrate [30]. Hence, the distance between the electrodes was taken as the varying parameter. Experiments for each configuration were repeated three times, and the average values of the output parameters were obtained.
The experiments were conducted with an electrolyte composition of Na2SiO3:NaOH of 1:4.5, as shown in Tables 2 and 3. In addition, a few experiments were also conducted using graphene additive particles at 2 g/L concentration. The distance between the electrodes was maintained at 20, 30, 35, and 40 mm, respectively, to check its effect on the process output. The distance variation between the cathode and anode changes the arc pattern and affects the graphene absorption on the surface layer. Cuboidal samples of dimensions 100 mm × 10 mm × 10 mm were polished with 800-grit SiC sandpaper. The coating process time was 20 to 40 min. The formation of the oxide layer and the modified surface properties are discussed in the Results and Discussion section. Temperature measurement was carried out with a Testo 872 thermal imaging camera, which can measure the temperature precisely at any point on the object; it was used to detect the temperature at different locations on the electrode in the electrolyte and to help assess heat transfer during the process. The sample surface microstructure was inspected with a field emission scanning electron microscope (FEI Nova NanoSEM 450; FEI Company, Hillsboro, OR, USA). The elemental distribution in the oxide layer formed on the substrate surface was investigated with an energy dispersive spectrometer (EDS: Bruker XFlash 6I30; Bruker Nano GmbH, Berlin, Germany). A thickness gauge was used to monitor the thickness of the oxide layer formed during the process. SEM examined a few samples for particle distribution, while FESEM examined the rest for cross-sectional development of the oxide layer on the substrate surface. Surface roughness was measured with a Mitutoyo portable surface roughness tester (SURFTEST SJ-210 series; Mitutoyo Europe GmbH, Neuss, Germany) per ISO 1997. The instrument gives Ra values for the surface.
Three values for each face were taken before and after the coating process.
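As a minimal illustration of the roughness averaging described above (the numeric Ra readings below are hypothetical, not taken from the measurements in this work):

```python
# Hypothetical sketch: averaging the three Ra readings taken per face,
# before and after coating. Values are illustrative only.
def mean_ra(readings):
    """Average of the Ra readings taken on one face (in micrometres)."""
    return sum(readings) / len(readings)

before = [0.42, 0.45, 0.40]   # illustrative pre-coating Ra readings, um
after = [1.85, 1.92, 1.78]    # illustrative post-coating Ra readings, um

print(f"Ra before coating: {mean_ra(before):.2f} um")
print(f"Ra after coating:  {mean_ra(after):.2f} um")
```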
Results and Discussion
The experiments were conducted with the primary objective of coating the substrate surface with a uniform oxide layer using the plasma micro-arc oxidation method for Al 6061 alloy. The efficiency of the experiments was evaluated with the help of various characterization tests, i.e., scanning electron microscopy (SEM) and field emission scanning electron microscopy (FE-SEM). The coating material, inclusions in the coatings, and oxidation properties were studied by energy dispersive spectrometer (EDS) analysis, which gives the elemental distribution on the sample surface [27,28]. The residual powder was also analyzed to identify the elements that did not adhere to the surface. The results are discussed in the subsequent subsections. Figure 2 shows the results achieved and the values of the optimized parameters.
Formation of the Metal Oxide Layer
It is observed that the metal oxide layer formed on the substrate surface depends on various input parameters. The experiments for oxide layer formation using the micro-arc oxidation method were designed to study the effect of the following factors: operating current, potential difference between the electrodes (voltage), and the gap between the electrodes. The variation of these parameters and their effects on the oxide layer thickness are depicted in Figure 3. For some samples, the oxide layer was formed without additive at 150 V and 197 V and at different electrode distances. Graphene was also added to the electrolyte for a few samples to study its effect on absorption.
It is observed that the current becomes stable after some time during the process. This is due to the formation of the metal oxide layer, which acts as a barrier for the substrate material. It is observed that the growth of the oxide layer follows the changes in voltage and current values during the experimentation. The metal oxide layer forming on the substrate face at the initial current values prevents further exchange of the metal ions. Due to this stabilization, the current starts reducing, becomes constant after some time, and forms the outer layer. The sample with the lowest electrode gap, i.e., 20 mm, shows a drastic change in the current values.
Figure 4 indicates the formation of the oxidation layer on the substrate surface, with three different regions in the PEO method. The transition layer, formed in the initial stage, provides a suitable platform for the functional layer. The range of each layer is approximately defined with respect to the changes in current values as a function of time. As the layer starts developing, the current values reduce and then stabilize after a certain time. The functional layer is the main layer of the coating formed during the PEO. The outermost layer is called the porous layer; it has a porous structure and is made of unevenly distributed oxides on the substrate. In the initial stages there is no oxide layer on the substrate, so the electrical conduction between the electrodes is excellent, resulting in a significant potential difference between the electrodes. As the coating progresses with time, the oxide layer formed acts as an insulator, reducing the potential difference and amperage between the electrodes. The increase in oxide growth insulates the electrode and ceases further development of the oxide layer. The oxide layer starts to develop after 20 min, when the current stabilizes.
This phenomenon indicates the relationship between the coating layer thickness and the current and voltage. It is observed that the oxide layer thickness increased with an increase in the current [31]. With an increase in the current, phase changes occur and mullite formation increases [13]. This was also supported by the XRD images. At 0.3 A current, the oxide layer thickness was 5.1 µm. As the current value increased to 1 A, an oxide layer thickness of up to 79 µm was achieved. The effect of voltage on the oxide layer thickness is shown in Figures 3 and 5.
Experiments were conducted with the electrode gap ranging from 20 mm to 40 mm. These experiments were repeated for different gap values. After the experiments, the thickness of the oxide layer formed was measured and documented as shown in Figure 6. It is observed that, at a gap of 20 mm between the electrodes, the process grew an oxide layer thickness of approximately 80 µm with a consumption of 1 A current. As the gap between the electrodes increased to 30 mm, the coating layer thickness reduced significantly, as shown in Table 4. Further increase in the gap distance resulted in a comparatively thinner oxide layer with reduced current amperage. Along the arc, ion exchange occurs between the electrolyte and the electrodes. This results in the growth of the oxide layer on the substrate. For the set of experiments conducted, the arcing between the electrodes was maximum at a 20 mm gap. As the gap increased, the arc efficiency and oxide layer thickness were reduced, along with the strength of the arc between the electrodes [32]. The energy consumed during the process depends upon the input parameters. The power increases with increases in the voltage, resulting in greater reaction intensity. The input parameter increment enhances the formation of a uniform and dense coating [33].
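The dependence of reaction intensity on the electrical input can be made concrete with a rough sketch of the nominal energy delivered at the reported operating point (197 V, 1 A, 30 min); losses to the coolant and electrolyte heating are ignored, as they are not quantified here:

```python
# Nominal electrical input for the reported operating point (197 V, 1 A, 30 min).
# This is a back-of-the-envelope sketch; losses to the coolant and electrolyte
# are not quantified in the paper and are ignored here.
voltage_v = 197.0      # DC voltage, V
current_a = 1.0        # current, A
duration_s = 30 * 60   # process duration, s

power_w = voltage_v * current_a          # instantaneous power, W
energy_kj = power_w * duration_s / 1000  # total delivered energy, kJ

print(f"Power: {power_w:.0f} W, Energy: {energy_kj:.1f} kJ")
```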
SEM Analysis
The experiments were conducted to find the optimum ranges of the input parameters that can develop a significant oxide layer on the substrate during the PEO process. Hence, a few samples were examined at the cross-section of the oxide layer to check the uniformity of the thickness. The experiments were conducted at more than 150 V, based on literature references [32,33]. Though the literature used AC voltage and current values, these values were taken as the base for the DC values during the experiments. It was observed that voltage values of 150-180 V were not sufficient to develop an oxide layer on the substrate surface. However, at 197 V, some layer thickness was visible, as shown in Figure 6. At current values of 0.5 A and 1 A, a significant oxide layer is visible on the substrate surface. The uniformity and thickness of the oxide layer formed were measured at sample cross-sections using the FESEM tests. As 197 V was identified as the influencing voltage for the process, the microstructure at a configuration of 197 V and various current values was observed. At 197 V and 0.3 A (Figure 6a), a minimal thickness was observed on the surface. As the electrode gap reduced, the strong arc formation promoted the growth of the oxide layer. Hence, the layer was thicker at a lower electrode gap and higher current values, as shown in Figure 6b,c. For current values up to 1 A, the oxide layer growth was up to 102.5 µm. The average oxide layer thickness developed is up to 79 µm, as shown in Figure 6d. The layer became more prominent with the increase in voltage and current values [34,35].
The key area of this research was to study the effects of DC power input on the micro-arc oxidation process, as elaborated in the previous section. The oxide layer can be further analyzed to find the elements absorbed in the outer layer at the given input parameter configurations. It is observed that the percentage of graphene absorption increased at 197 V and currents of 0.3 to 1 A, as shown in Figure 7a. The experiments conducted at 197 V, 0.3 A and at 197 V, 0.5 A showed enhancement in the percentage of the C element. The weight percentage of the C element was significantly improved, from 10% to 66%, as shown in Figure 8a-c. This verifies the effect of the absorption of the graphene particles on the surface layer. Other elements such as N and Si also varied significantly, from 5% to 9% by weight. The work by Leonid et al. identified a PEO process developing a 75 µm layer over a process duration of up to 180 min [36]. The use of basalt salt in a silicate-based electrolyte in the PEO process produced an 80 µm layer over a 180 min duration [37]. The present work uses durations of up to 30 min while developing a significant average oxide layer of up to 80 µm. This much shorter process duration, compared to other work done to date, signifies the importance of using a DC power supply for the PEO process.
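As a quick arithmetic check on the reported carbon enrichment (weight percentage rising from 10% to 66%), the enrichment factor follows directly:

```python
# Carbon weight-percentage change reported in the text (Figure 8a-c).
c_before = 10.0  # wt% C before significant graphene incorporation
c_after = 66.0   # wt% C at 197 V, 0.3-0.5 A

# Factor by which the carbon weight fraction increased.
enrichment = c_after / c_before
print(f"Carbon enrichment factor: {enrichment:.1f}x")
```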
Effect of Temperature
During the experimentation, the temperature was measured using a thermal imaging camera. The temperature distribution in the electrolyte and across the electrode length was monitored. As per the temperature distribution shown in Figure 9, the variation in temperature causes heat transfer between the two electrodes and the electrolyte. Maintaining a constant temperature was important for the electrolytic composition and the uniformity of the oxide layer. Even though the temperature increased gradually due to heat generation, the cooling arrangement around the electrolytic chamber kept the temperature variation at a minimum. This arrangement helped in achieving a more stable environment for the process. Hence, the electrolyte characteristics could be maintained constant throughout the coating layer formation.
Element Distribution on Al 6061 Surface
The micrographs show elements such as Al, Si, C, O, and N, as shown in Figure 8a. The varying percentage of oxygen indicates the oxide layer formed on the substrate. The uncoated Al 6061 has 84.66% Al, which is reduced to 11% when the input parameters were 197 V and 0.5 A. On the other hand, the oxygen percentage was maintained between 25% and 3%, as shown in Figure 8b. For most of the samples, traces of silicon and carbon can be found in Figure 8a. The graphene additive particles become adhesive at higher temperatures. The percentage variation in the carbon element was between 28% and 66%. On average, the carbon particle absorbance was 34%, as shown in Figure 10. This indicates that the input parameters have a significant effect on the formation and growth of the oxide layer. The percentage variation in the atomic weight shows that the graphene concentration of 10 g/L was incorporated in the outer oxide layer of the substrate. There may be some chemical compounds formed at the oxide layer that can be analyzed further in the XRD images.
EDS Analysis of Residue
The SEM and EDS images show a change in elemental composition on the aluminum substrate surface due to the oxidation process, as shown in Figure 11. At the given input parameters, oxide layer formation was in the initial stages for all the samples. Hence, to study whether chemical reactions were occurring at the given current-voltage parameters and electrolyte composition, the residue after the process was analyzed. The EDS of the residue showed particles of different elements, such as C, O, Na, and Si, which indicates that reactions were taking place at the higher amount of heat generated in the electrolyte during the oxidation process.
Figure 10. Variation in elemental percentage for samples.
Phase Composition Analysis
The phase analysis was carried out for samples produced by the PEO process in the silicate-based electrolyte with the graphene additive. Figure 12 shows the XRD diffraction pattern for the PEO coatings.
Surface Roughness Analysis
Roughness is the most important parameter for assessing improvement of the substrate surface. It determines the contact resistance with the substrate material. A smooth surface with a uniform coating is characterized by a low coefficient of friction and low mechanical stress transfer [25]. The surface finish values depend on the voltage selection, electrolyte composition, and duration of the oxidation process. Figure 14 shows the surface roughness as a function of time. High discharge enhances crater formation, resulting in a rough surface. The roughness values are lower for samples with an electrode distance of 20 mm and higher for 35 mm. The heat rise results in local melting of elements in the electrolyte present between the electrodes, followed by solidification of the oxide layer.
Conclusions
This research work involves the use of the plasma electrolytic oxidation technique to develop an oxide layer on the sample surface. The coating procedure was carried out on a sample made from Al 6061 alloy, using graphene particles suspended in the electrolyte medium. The experiments were conducted using a direct current power supply, as this is rarely reported in the literature. The variables in the experiments were voltage, current, and the gap between the electrodes.
The experiments were conducted with combinations of the experimental variables, and the samples were analyzed to evaluate the effects of the gap between the electrodes, the voltage and current, and the presence of graphene nanoparticles in the electrolyte on the oxide layer formation.
The results provide substantial information about the process and variables.
It can be concluded that an oxide layer can be successfully developed on Al 6061 samples using the silicate-based electrolyte with graphene nanoparticle suspension during the PEO process. The oxide layer consisted mostly of alumina (Al2O3). The direct current power supply provided sufficient heat generation during the process, with positive results due to the melting of solids in the electrolyte between the electrode gaps; hence, more deposition occurs at higher temperatures and lower electrode gaps. From the study of the SEM micrographs and EDS analysis, it is evident that there were very few traces of graphene within the oxide layer. This suggests that, irrespective of the graphene content in the electrolyte, there is no significant graphene absorption during the PEO process. There were significant changes in the percentages of elements such as C, Al, Si, and O, as supported by the SEM and EDAX analyses. Most changes occurred at 197 V DC with currents ranging from 0.3 to 1 A DC. Oxide layer formation was achieved up to a maximum of 200 µm in the cross-section of the samples at 197 V, a 20 mm electrode distance, and 30 min oxidation time. The micro-arc oxidation process can be conducted at low temperatures through controlled cooling of the electrolytic container; this helps to maintain the temperature at constant values, avoiding excess heat generation. From the results of the conducted experiments, the optimized parameters obtained were 197 V and 1 A DC input, 30 min coating duration, and an electrode gap of 20 mm, generating a maximum of 102 µm of oxide layer on the Al 6061 substrate in a silicate-based electrolyte with Na2SiO3 (10 g/L), NaOH (45 g/L), and graphene (2 g/L). The surface roughness values show a rougher surface at a smaller distance between the electrodes; hence, at a 20 mm electrode distance, the surface roughness was higher than at an electrode distance of 40 mm.
It can be concluded that the electrode gap and the surface roughness are inversely related.
Data Availability Statement:
The data that support the findings of this study are available on request from the corresponding author (A.B.). However, the data are not publicly available due to privacy/ethical restrictions. | 2022-02-24T16:25:52.240Z | 2022-02-01T00:00:00.000 | {
"year": 2022,
"sha1": "e262178b69230d8e4c4de66d2bae2ba375d55da4",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1944/15/4/1616/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "25c88423d6a46c12d3c987765ea3d7276148b1e1",
"s2fieldsofstudy": [
"Materials Science",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
241192145 | pes2o/s2orc | v3-fos-license | Primary Ovarian Leiomyoma: A common tumor in rare location and report of eight cases
Background: Ovarian leiomyoma accounts for only 0.5 to 1% of all benign ovarian tumors. Here we present a series of eight patients over a six-year period (2012-2018). The clinico-pathological features, diagnosis, and management are discussed. Methods: We reviewed the clinical features, pathologic findings, diagnosis, and treatment of eight cases of ovarian leiomyoma. Results: The mean age of these patients was 38.5 years. The majority of such patients may be asymptomatic and are usually diagnosed incidentally during pelvic examination or on pathologic examination after surgery. Six cases presented only with a pelvic mass, sized from 2 to 20 cm, which in some patients had persisted for more than twenty years, while the other two were diagnosed after four years of primary infertility or three months of irregular vaginal bleeding. Three patients had coexisting uterine leiomyomas, while the others did not. Three showed leiomyoma degeneration, and one case showed atypia with 4 mitoses per 10 high-power fields. Conclusions: Ovarian leiomyoma is very rare but should be considered in the diagnosis of pelvic or ovarian solid masses. Magnetic resonance imaging is pivotal in its differential diagnosis, while fast frozen-section pathology during operation is essential for the surgical decision. The surgical approach should be chosen according to the patient's age.
Background
Leiomyoma is one of the rarest ovarian benign tumors, accounting for 0.5-1% overall [1]. Since the first case was described in 1862 [2], fewer than sixty cases have been reported, the overwhelming majority of them case reports with or without literature review, as summarized in Table 1. The age of reported cases ranged from 17 to 79 years, and most tumors were small, asymptomatic, and diagnosed incidentally during routine pelvic examination, at surgery, or even at autopsy [3,4]. A few cases can also manifest with lower abdominal pain [5], ascites [6], a pelvic mass, elevated cancer antigen 125 (CA125), or Meigs' syndrome [4,7]. They mostly occur in a single ovary, while bilateral ovarian leiomyomas have mainly been reported in young patients aged 16 to 25, and none has been reported in patients older than 35 years [5].
Owing to its rarity and its resemblance to other pelvic solid masses, ovarian leiomyoma is rarely diagnosed preoperatively. The purpose of this paper was therefore to summarize and analyze the clinical data of the ovarian leiomyoma patients seen during the recent six years, along with a review of all reported cases in the literature, to guide clinical practice.
Methods
A six-year retrospective study of primary ovarian leiomyoma diagnosed in the Pathology Department of Women's Hospital, Zhejiang University School of Medicine, was carried out for the years 2012-2018. The data were collected mainly by searching the medical records and the pathology department databases.
The clinico-pathological features, diagnosis and management were discussed.
Results
Ovarian leiomyoma was noted in eight cases. The mean age at presentation was 38.5 years, with the youngest patient aged 17 and the oldest 65. A pelvic mass was the most common presenting sign (6/8 cases) (Fig. 1, 2), while one case presented with four years of primary infertility and another with three months of irregular vaginal bleeding. The duration of signs ranged from 1 month to 20 years. CA125 levels were within the normal range in all patients, and no patient was diagnosed with ovarian leiomyoma before operation. The two oldest patients underwent bilateral salpingo-oophorectomy with or without hysterectomy, and the rest underwent myomectomy. The tumors ranged from 2 to 20 cm in size, and most arose from one ovary, except for one arising from the proper ligament of the left ovary. An irregular surface with or without vascular engorgement was also found in these tumors, and the youngest patient even had 100 ml of ascites and was shown to have nuclear division (4/10) (Fig. 3). Leiomyoma degeneration, such as hyaline degeneration and calcification, could also be seen, and frozen section during operation contributed to the definitive diagnosis and to differentiating benign from malignant tumors. Clinical details of the cases are summarized in Table 1, and their macroscopic and/or microscopic features are summarized in Table 2.
Discussion
Leiomyoma arising from the ovary is considered a rare type of solid benign tumor; it can occur in people of all ages, but nearly eighty percent of cases occur in pre-menopausal women [43]. Our department is part of one of the three largest specialized hospitals in China, performing about 17,000-20,000 gynecological surgeries per year, yet only 8 patients were diagnosed with ovarian leiomyoma during the past six years. The youngest was a 17-year-old girl, four patients were of childbearing age, and two were premenopausal, while the remaining patient had been diagnosed 20 years earlier, at the age of 45. Ovarian leiomyomas are usually millimeters to a few centimeters in size, rarely exceeding 3 cm in diameter, and both ovaries are affected with equal frequency [44]. The tumor diameters in our cases ranged from 2 to 20 cm, mostly 6-8 cm, and were thus easily visible on ultrasound. Because of their usually small size, the majority are asymptomatic and found incidentally during pelvic examination or surgery. However, symptoms such as pelvic pain, abnormal uterine bleeding, constipation, and frequent urination are associated with tumor size [45,46]. The literature has also revealed a relationship between ovarian leiomyoma and infertility [47]. All our cases had normal CA125 levels; the 17-year-old girl was diagnosed during workup of a menstrual disorder with 100 ml of ascites, and another patient was diagnosed after four years of primary infertility. Rare presentations, including Meigs' syndrome, elevated testosterone level and virilization, abnormal menstruation, urinary symptoms, appendicitis-like symptoms, paraneoplastic syndrome, and acute abdomen caused by torsion, have also been reported, as summarized in Table 3.
To date, the histological origin of ovarian leiomyoma has not been established. Some studies have proposed that it originates from smooth muscle in the ovarian hilar blood vessels, the ovarian ligament, the ovarian stroma, mature cystic teratomas, or the wall of mucinous cystic tumours [48], while the most widely accepted view is that it arises from the smooth muscle of the ovarian ligament or from the vascular smooth muscle where the vessels enter the ovary [49]. This could be inferred from the enlarged ovarian ligament in our tissue photograph. These tumors frequently accompany other ipsilateral or contralateral ovarian lesions such as endometriotic cysts [49], although only one patient in the present report had a contralateral ovarian follicular cyst; careful exploration of both ovaries is therefore necessary.
Strictly speaking, a primary ovarian leiomyoma has to lie entirely within the ovary, with no similar lesions in the uterus or elsewhere [50]. However, the literature has reported that about eighty percent of cases are accompanied by uterine leiomyomas [48]. Consistent with this, the three patients aged more than fifty also had multiple uterine leiomyomas. As reported in the literature, leiomyoma degeneration such as hyaline degeneration and calcification can also be seen, especially in larger tumors; nuclear mitoses were even observed in the 17-year-old girl (4/10 HPF). On literature review, there were five reports describing abnormal histologic features such as areas of myxoid stroma, atypical leiomyoma, nuclear pleomorphism, mitotic activity, and nuclear enlargement in ovarian leiomyoma. Thus frozen section and immunohistochemistry can help establish the correct diagnosis and guide the surgical approach. Although 4 mitoses per 10 high-power fields (HPF) were found in case 5 (Fig. 3), she underwent ovary-preserving surgery in view of her adolescence, with no recurrence in 2 years of follow-up so far.
When diagnosing ovarian leiomyoma, it should be distinguished from sex-cord stromal tumors, including ovarian fibroma and thecoma, and from leiomyosarcoma, while parasitic uterine leiomyomas attached to the ovary should also be considered. Immunohistochemical analysis and intra-operative frozen section are very helpful for a correct diagnosis and for surgical decision-making; it would be better still if a definite diagnosis could be made before surgery. The majority of our cases had been diagnosed as sex-cord stromal tumors, and few had a correct diagnosis before operation, as shown in Table 1, which is in accordance with the existing literature. The imaging features of ovarian leiomyoma have seldom been described owing to its rarity; a few reports indicate that intravenous administration of Gd-DTPA, and the supplying vessels arising directly from the myometrium appearing as flow voids, can both help differentiate it from uterine leiomyoma. A good example is case 8 (Fig. 1, 2), who had both a right ovarian leiomyoma and left uterine subserous leiomyomas. On the axial diffusion-weighted image (TR/TE: 4000/72.4 ms, b value = 500 s/mm2), the two masses showed the same intermediate signal intensity. On the axial fat-saturated T2-weighted (TR/TE: 3020/70.5 ms) image, the signal intensity of the left mass was continuous with the myometrium, while the right mass was well circumscribed and sharply demarcated from the uterus. On the sagittal and axial T1-weighted images with gadolinium administration (TR/TE: 3.9/1.7 ms), the left mass showed low intensity identical to the uterine myometrium, and flow voids could be seen only in the left uterine subserous leiomyomas. Although these tumors are benign and abnormal histologic features are rare, radical surgery is usually performed for their complete removal because the ovary may become almost completely absorbed by the tumor, with only portions persisting [50].
A common surgical approach was salpingo-oophorectomy or oophorectomy with or without hysterectomy, but ovary-preserving surgery should be suggested in young women and those desiring fertility preservation [51].
Conclusions
Primary ovarian leiomyoma is very rare in the clinic and should be considered in the differential diagnosis of solid ovarian tumors and subserous myomas. Magnetic resonance imaging is pivotal for a definitive diagnosis before operation, while fast frozen-section pathology during operation is essential for the surgical decision. Hysterectomy in conjunction with bilateral salpingo-oophorectomy is preferred in middle-aged to elderly patients, but ovary-preserving surgery should be considered in young patients and those of childbearing age.
Consent for publication
Written informed consent for publication of this report was obtained from all patients or, in the case of the 17-year-old, from her guardian. The authors declare that they have no competing interests.
Ethics approval and consent to participate
This research conformed to the provisions of the Declaration of Helsinki and was approved by the ethics committee of Women's Hospital, Zhejiang University (No. 2019026). The patients were informed and provided their written informed consent.
Availability of data and materials
The datasets used and/or analyzed during the current study are available from the authors on reasonable request.
Figure 1
Transabdominal intraoperative view of case 8: the right ovary and part of the uterus with the left subserous myoma (left). The right ovary is enlarged by a hard mass 8 cm in diameter, separate from the uterus and neither adherent to nor infiltrating the surroundings (right).
Figure 2
Magnetic resonance imaging of the pelvis from case 8. On the axial diffusion-weighted image (TR/TE: 4000/72.4 ms, b value = 500 s/mm2), the two masses showed the same intermediate signal intensity (A). On the sagittal T1-weighted image with gadolinium administration (TR/TE: 3.9/1.7 ms), the left mass showed low intensity identical to the uterine myometrium (B). On the axial fat-saturated T2-weighted (TR/TE: 3020/70.5 ms) image, the signal intensity of the left mass was continuous with the myometrium, while the right mass was well circumscribed and sharply demarcated from the uterus (C). On the axial T1-weighted image with gadolinium administration (TR/TE: 3.9/1.7 ms), flow voids could be seen only between the tumor and the uterine myometrium on the left mass (D). Figure 3: Four mitoses could be found per 10 high-power fields (HPF) in case 5 (H&E ×400). | 2019-11-22T01:31:43.357Z | 2019-11-14T00:00:00.000 | {
"year": 2020,
"sha1": "e3a4c819d405b872c771b8696bbdd9df14155b7f",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-18276/v1.pdf",
"oa_status": "GREEN",
"pdf_src": "Adhoc",
"pdf_hash": "76827057038ce8071f9007b14aa6e319754833fe",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
56895360 | pes2o/s2orc | v3-fos-license | How to Achieve High Classification Accuracy with Just a Few Labels: A Semi-supervised Approach Using Sampled Packets
Network traffic classification, which has numerous applications from security to billing and network provisioning, has become a cornerstone of today's computer networks. Previous studies have developed traffic classification techniques using classical machine learning algorithms and deep learning methods when large quantities of labeled data are available. However, capturing large labeled datasets is a cumbersome and time-consuming process. In this paper, we propose a semi-supervised approach that obviates the need for large labeled datasets. We first pre-train a model on a large unlabeled dataset where the input is the time series features of a few sampled packets. Then the learned weights are transferred to a new model that is re-trained on a small labeled dataset. We show that our semi-supervised approach achieves almost the same accuracy as a fully-supervised method with a large labeled dataset, though we use only 20 samples per class. In tests based on a dataset generated from the more challenging QUIC protocol, our approach yields 98% accuracy. To show its efficacy, we also test our approach on two public datasets. Moreover, we study three different sampling techniques and demonstrate that sampling packets from an arbitrary portion of a flow is sufficient for classification.
INTRODUCTION
Network traffic classification is one of the key components of network management and administration systems. Classical machine learning approaches have been extensively used for more than a decade and showed good accuracy. However, the emergence of new applications and encryption protocols has dramatically increased the complexity of the traffic classification problem. Recently, deep learning algorithms have been developed for traffic classification. Deep learning approaches are capable of automatic feature selection and capturing highly complex patterns, and thus demonstrated high classification accuracy in comparison to other methods.
However, deep learning requires a large amount of labeled data during training, and capturing and labeling a large dataset is a non-trivial and cumbersome process. First, current Internet traffic is mostly encrypted, which makes DPI (Deep Packet Inspection) based labeling impossible. Hence, most labeling procedures capture each traffic class in isolation; this is only possible at the edge of a network or in an isolated environment. Furthermore, such a process is often performed through scripts that mimic human behavior. Such scripts are not always accurate, which in turn affects classification accuracy (as discussed in detail in Sec. 5.3). In contrast to the difficulties in obtaining labeled data, unlabeled data is abundant and readily available on the Internet. This motivates us to study how easily-obtainable unlabeled datasets can dramatically reduce the size of the labeled dataset needed for accurate traffic classification. Furthermore, to make our approach practical, in particular on high-speed links or in data centers, we propose to use sampled time series features instead of an entire flow to perform classification. This approach also removes the dependency on handshake data usually needed for header/payload based approaches: newer protocols, such as QUIC and TLS 1.3, use the 0-RTT feature, which reduces the unencrypted information exchanged during the handshake phase and thus renders header/payload based approaches ineffective. Using sampled packets also reduces the memory and computational cost relative to full time series features.
In summary, in this paper, we make the following contributions: (1) We propose a semi-supervised approach that utilizes large quantities of unlabeled data and just a few labeled samples. Specifically, we first train a model on a large unlabeled dataset and then re-train the model with a few labeled data on the target classification problem. (2) We study three different sampling methods on the encrypted traffic classification problem: Fixed step sampling, random sampling, and incremental sampling. We show that good sampling methods can almost achieve the upper-bound accuracy in certain datasets. (3) We evaluate the proposed approach using captured QUIC traffic and show that our semi-supervised approach using sampled time series features can accurately classify QUIC traffic that has fewer unencrypted fields during handshake. (4) We test our approach on different public datasets. Surprisingly, we show that we can train a model using a completely separate unlabeled dataset and then retrain the model with a small number of labels in the target dataset and still achieve good accuracy.
RELATED WORK
Several recent studies have developed deep learning models, such as the Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM), for network traffic classification in a fully-supervised fashion. In [19], the authors trained six different CNN models based on the LeNet-5 model on an old public dataset with 12 classes. They converted 249 statistical characteristics into a 2-d 16×16 image and reported high accuracy. In [11], the authors used CNN, LSTM, and various combinations of them on a private dataset captured at RedIRIS, a Spanish academic and research backbone network. They used time series features of the first 20 packets, including source port, destination port, payload size, window size, etc. In [12], a framework comprising a CNN and a Stacked Auto-Encoder (SAE) was trained on a dataset containing 12 VPN and non-VPN traffic classes, using raw header and payload bytes as input. Although no class labels are needed during the first step of training the SAE, in the fine-tuning step they re-trained the SAE in a supervised fashion on the entire dataset. In [2], a Reproducing Kernel Hilbert Space (RKHS) method was used to convert the time series features of a flow to an image, and the produced images were used as input to a CNN model. Their dataset is private and contains 5 application and protocol classes.
The only study that investigated QUIC protocol is [16]. They captured five Google services: Google Hangout Chat, Google Hangout Voice Call, YouTube, File transfer, and Google play music. They used CNN model and reported high accuracy. They captured 150GB of data and trained the model in a fully-supervised manner.
A survey paper [14] introduced a general framework, covering previous deep learning-based traffic classifiers, that can deal with any typical network traffic classification task. The paper described a seven-step training process, including data capturing, data pre-processing, model selection and evaluation, etc. The framework also assumes a sufficiently large dataset for training.
In summary, in comparison, all work discussed above assumes large quantities of labeled data. Furthermore, packet sampling is not considered in their approaches.
In [9], the authors used a co-training method in which two learners are trained iteratively on each other's output so as to cover the entire unlabeled dataset. To start the training process, a few labeled samples are needed for the first (source) learner. Two datasets with different app configurations were used for the first and second (target) learners. They used Random Forest (RF) and AdaBoost with decision trees as learners.
Their approach, however, is different from ours in several ways. First, co-training needs labeled data from the source task, whereas our approach needs labeled data from the target task. As a result, in their approach, the number of classes and the corresponding names must be known for the target task, for which no labeled data is needed. In contrast, in our approach we can use any unlabeled dataset without knowing anything about its contents. Second, their co-training only works when the source and target tasks are similar and share the same labels, while we have shown that our approach can be used as a general statistical feature estimator even when the unlabeled dataset does not contain the target task's traffic classes.
PROBLEM STATEMENT
As discussed earlier, deep learning models have been developed for (encrypted) traffic classification. Because deep learning approaches are capable of automatic feature selection and capturing highly complex patterns, they demonstrate high classification accuracy. However, a critical challenge is that deep models require large amounts of labeled data during training. Capturing and labeling a dataset is a non-trivial and cumbersome process.
First, because encryption mechanisms are heavily used in today's Internet, labeling a dataset captured in an operational network is almost impossible unless an accurate classifier is already available. As a result, labeling process is usually done by assuring that only one target class is available during capturing by turning off all other applications on the device used to generate traffic. This is also typically done in an isolated environment or at the edge of the network. Traffic distribution in such an environment could differ from an operational network, especially at the core of the network. In addition, to capture large amounts of labeled data used in literature, one often runs scripts to automatically perform certain actions that can be captured and labeled without manual labor. However, we will show that network traffic generated by scripts may have a different distribution from that of human-generated traffic and, consequently, a model trained on script-generated traffic may not be accurate on human-generated traffic in inference time.
In contrast, unlabeled data is abundant and readily available in an operational network. Capturing a large unlabeled dataset is an easy task, and there are also many large, publicly available datasets. Hence, our objective is to use easily-obtainable unlabeled datasets to significantly reduce the size of the labeled dataset needed for training an accurate traffic classifier.
Specifically, we hope to classify a number of traffic classes, called the target task. We only have a few labeled samples for each class that we are interested in. We call this dataset D l . At the same time, we assume we have a large unlabeled dataset D u , say with hundreds of thousands of flows. This unlabeled dataset D u may contain numerous flows from various traffic classes, even from classes that we are not interested in, i.e. they do not exist in D l . The goal is to leverage the two datasets to train a traffic classification model for a target task while heavily exploiting D u dataset for training.
METHODOLOGY

4.1 Key Components
Our objective is to obtain an accurate traffic classifier with only a small number of labeled samples from each traffic class. To achieve this objective, we propose a semi-supervised learning approach. There are three key components in the proposed scheme.
The first is the classification model trained through semi-supervised learning. Specifically, we pre-train a model with D u , then transfer it to a new architecture and re-train it with D l . This semi-supervised approach considerably reduces the number of labeled samples needed for the supervised learning part. Moreover, the pre-trained model can be reused for other network traffic classification tasks.
In order to use an unlabeled dataset, D u , we pre-train a model, F , in a way that requires no human labeling effort. We use the terms pre-training and re-training to distinguish the first and second steps of our semi-supervised approach. An important decision in the pre-training stage is the target of the regression function; we choose a set of statistical features for this purpose. Moreover, the input of the model is a set of packets sampled from the entire flow, for each of which we only observe the length, direction, and relative time. In other words, F is pre-trained to estimate statistical features of the entire flow by taking a set of sampled packets as input. Hence, F can be reused for any other dataset for which statistical features yield a good classifier. The idea is based on the assumption that not all traffic patterns are valid, so a model pre-trained on a large unlabeled dataset will lie on a manifold of valid patterns and can hopefully estimate the statistical features.
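The two-step idea can be sketched numerically. The following minimal illustration substitutes linear least-squares models and synthetic data for the paper's CNN and real traffic; every shape and dataset here is a stand-in, and it only demonstrates the structure: pre-train F on unlabeled flows against statistical-feature targets, then transfer F's weights and fit a small head G on a few labeled samples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1 (pre-training, no human labels): fit an estimator F mapping
# sampled-packet features to flow-level statistical features.
# Synthetic stand-ins for unlabeled flows and their statistical targets:
X_u = rng.normal(size=(1000, 20))
true_map = rng.normal(size=(20, 4))
stats_u = X_u @ true_map
W_f, *_ = np.linalg.lstsq(X_u, stats_u, rcond=None)  # "pre-trained" F

# Step 2 (re-training): transfer F and fit a small head G on few labels,
# e.g. 20 samples per class for a 2-class target task.
X_l = rng.normal(size=(40, 20))
y_l = (X_l @ true_map)[:, 0] > 0                      # stand-in labels
feats = np.hstack([X_l @ W_f, np.ones((40, 1))])      # transferred F + bias
W_g, *_ = np.linalg.lstsq(feats, y_l.astype(float), rcond=None)
accuracy = np.mean(((feats @ W_g) > 0.5) == y_l)
```

In the paper the transferred component is the stack of convolutional layers and the head is a set of new linear layers; the linear algebra above is only meant to show why the head needs far fewer labels once F has absorbed the unlabeled data.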
Next, the pre-trained model, F , is used as a part of another model, G, which is then re-trained with a small labeled dataset. Since a part of the model has already observed many traffic patterns, G needs considerably less human-labeled data. Note that F is not necessarily an accurate estimator of the statistical features, but having already seen numerous traffic patterns during pre-training, it is quickly re-trained to help the classification task.
The second key component is the features, including the input features and the pre-training targets. As categorized in [14], three kinds of input features are heavily used for network traffic classification tasks: time series features (such as packet length and inter-arrival time), statistical features computed over the entire flow (such as average packet length and average bytes sent per second), and header/payload features (such as the TCP window size field, TLS handshake data fields, and data content). Header/payload features have been used for the classification of encrypted traffic such as TLS 1.2 [12,15]; these methods rely upon unencrypted data fields or message types exchanged during the handshake phase of TLS 1.2. However, state-of-the-art encryption protocols, such as QUIC and TLS 1.3, aim to reduce the number of handshake messages to speed up connection establishment. As a result, fewer unencrypted data fields and packets are exchanged in these protocols, which makes it harder for header/payload based approaches to achieve high classification accuracy. Furthermore, statistical features require the model to observe the entire flow before classification, which is not efficient in practice. Nevertheless, statistical features capture useful information about flows, and thus we use them as the target in the pre-training stage.

The third key component of our approach is sampling. As discussed earlier, statistical features and header/payload features have their limitations. Therefore, we use time series features: specifically, the time series features of only a part of a flow as input, with the statistical features of the whole flow as the regression target. Additionally, instead of using only the first few packets, as in most studies based on time series features [2,3,11], we sample a fixed number of packets from the whole flow as input.
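To make the statistical-feature targets concrete, here is a small illustrative helper that computes flow-level statistics from time series features alone (relative time, signed length); the exact feature set is an assumption for illustration, not the paper's list.

```python
def flow_statistics(packets):
    """Flow-level statistics computed from time series features only.

    `packets` is a list of (relative_time_s, signed_length) pairs, where a
    positive length means a forward-direction packet and a negative length
    means a backward-direction packet. The feature set is illustrative.
    """
    times = [t for t, _ in packets]
    lengths = [abs(l) for _, l in packets]
    duration = (max(times) - min(times)) or 1e-9  # guard single-packet flows
    return {
        "mean_pkt_len": sum(lengths) / len(lengths),
        "bytes_per_sec": sum(lengths) / duration,
        "pkts_per_sec": len(packets) / duration,
        "fwd_ratio": sum(1 for _, l in packets if l > 0) / len(packets),
    }
```

During pre-training, a vector of such statistics for the whole flow would serve as the regression target, while the model only sees a handful of sampled packets.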
First, in some scenarios, such as on high-bandwidth links or in data centers, sampling is the only practical solution. Moreover, sampling obviates the need to start capturing from the beginning of a flow or to capture the entire flow; storing the entire flow requires a large amount of memory, and a model trained on it would be more complex. Second, sampling allows the model to capture patterns from different parts of a flow, not just the beginning. Approaches that use the first few packets can only capture the specific patterns that occur at the start of a connection, and these patterns are not necessarily distinguishable from one class to another. For instance, user-specific behaviors, such as changing the quality of a Youtube video or renaming a file in Google Drive, mostly occur in the middle of a flow, not within the first few packets. Third, sampling allows us to sample a single flow several times, which serves as a data augmentation method.
In this paper, we propose a method to use time series features available regardless of encryption mechanism. We show that time series features can be used to classify services that use QUIC protocol. Moreover, only a small number of sampled packets are used for classification instead of the entire flow that reduces the memory and computational cost. To make sure that our approach can be used as a general framework, we also test our approach on two public datasets.
Semi-supervised Learning
4.2.1 Convolutional Neural Networks. We used a Convolutional Neural Network (CNN) as a part of our model. A CNN is a type of neural network commonly used for visual recognition tasks [10]. Traditional feed-forward neural networks are hard to train due to the large number of learnable parameters when the input dimension is large. A CNN mitigates this problem by sharing a set of small filters that cover the entire receptive field.
Similar to other neural network models, a CNN consists of an input layer, an output layer, and multiple hidden layers, as shown in Figure 1. Hidden layers can be convolutional, pooling, or fully connected layers. In a convolutional layer, a set of learnable kernels (or filters) is used. These kernels are considerably smaller than the entire input, but they are slid along the width and height of the input image to cover all of it. Using the same set of small kernels over the entire input decreases the number of learnable parameters so that the model can be trained within a reasonable memory and computational budget. Additionally, it makes the model shift invariant, that is, a pattern can be captured by the CNN even if it is shifted to another region of the input.
Pooling is another important layer in CNN which allows down-sampling. Pooling layers are usually used after a few convolutional layers as a form of non-linear down-sampling. There are several pooling layers, but max pooling is the most prevalent one. In max pooling, the maximum value of a set of adjacent neurons is used as a down-sampled output. Typically, after a set of convolutional and pooling layers, a few fully connected layers are used. These layers are responsible for high-level reasoning and associating captured patterns by previous layers with target class labels.
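As a concrete illustration of the down-sampling step, max pooling over a 1-D sequence can be sketched in a few lines of plain Python (an illustrative helper, not the paper's implementation):

```python
def max_pool_1d(x, size, stride=None):
    """Strided 1-D max pooling: take the max over each window of `size`
    adjacent elements, moving by `stride` (defaults to non-overlapping
    windows, the common configuration)."""
    stride = stride or size
    return [max(x[i:i + size]) for i in range(0, len(x) - size + 1, stride)]
```

For example, pooling `[1, 3, 2, 5, 4, 0]` with a window of 2 keeps the larger value of each adjacent pair, halving the sequence length while preserving the strongest activations.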
Proposed Architecture. We used a CNN as a part of the model architecture because of its shift invariance. Recall that we use sampled packets as the input to the model and, as a data augmentation technique, we sample multiple times from different parts of the flow. Hence, the same set of patterns may be observed in different parts of the input, and a shift-invariant model is therefore more suitable.
As shown in Figure 2, a CNN-based model is first pre-trained with an unlabeled dataset. Then, the learned weights of the convolutional layers are transferred to a new model with more linear layers. Finally, the new model is re-trained on a small labeled dataset.

[Figure 2: Semi-supervised steps and model architecture; first step (pre-training), second step (re-training)]

The details of the model structure are listed in Table 1. We used max pooling and the Rectified Linear Unit (ReLU) activation function. Batch normalization is also used after the convolutional and max pooling layers.
The input of the model is a 1-dimensional vector with 2 channels. The first channel contains the inter-arrival times of the sampled packets, and the second channel contains packet length and direction combined: to keep the input as compact as possible, positive length values represent forward packets and negative values represent backward packets. Moreover, we normalized the input data to the range [-1, +1] by capping values at a maximum of 1434 bytes for length and 1 second for inter-arrival time.
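The input encoding above can be sketched as follows; the function name and packet tuple layout are illustrative assumptions:

```python
MAX_LEN = 1434.0  # bytes, the cap used for normalization
MAX_IAT = 1.0     # seconds, the cap used for normalization

def encode_flow(packets):
    """packets: list of (inter_arrival_sec, length_bytes, direction) where
    direction is +1 (forward) or -1 (backward). Returns the two input
    channels, both normalized into [-1, +1]."""
    iat_channel = [min(iat, MAX_IAT) / MAX_IAT for iat, _, _ in packets]
    len_channel = [d * min(length, MAX_LEN) / MAX_LEN
                   for _, length, d in packets]
    return iat_channel, len_channel

iat, lens = encode_flow([(0.5, 717, 1), (2.0, 1434, -1)])
print(iat)   # -> [0.5, 1.0]
print(lens)  # -> [0.5, -1.0]
```

Note how the rare inter-arrival times above 1 second are simply clipped to the cap, matching the paper's observation that such cases barely affect performance.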
Sampling Techniques
In this paper, we used 3 different sampling methods to examine whether the type of sampling affects the performance of the traffic classification task.
• Fixed step sampling: a fixed step l is chosen, and only packets that are l packets apart are sampled.
• Random sampling: each packet is sampled independently with probability p < 1. This is a common technique in operational networks with high bandwidth because it requires less memory and computational overhead than other methods.
• Incremental sampling: incremental sampling has three parameters, (l, α, β). Similar to fixed step sampling, it samples packets that are l packets apart, but it multiplies l by α after every β sampled packets.

(When sampling a flow, the inter-arrival time may exceed 1 second, but such rare cases barely change the performance of the model.)
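The three sampling methods can be sketched as follows (a minimal illustration; the parameter handling and the rounding of the growing step in incremental sampling are assumptions, since the paper does not specify them):

```python
import random

def fixed_step_sampling(flow, l):
    """Keep every l-th packet of the flow."""
    return flow[::l]

def random_sampling(flow, p, seed=0):
    """Keep each packet independently with probability p < 1."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    return [pkt for pkt in flow if rng.random() < p]

def incremental_sampling(flow, l, alpha, beta):
    """Like fixed-step sampling, but multiply the step l by alpha
    after every beta sampled packets."""
    sampled, i, taken, step = [], 0, 0, l
    while i < len(flow):
        sampled.append(flow[i])
        taken += 1
        if taken % beta == 0:
            step *= alpha
        i += int(round(step))
    return sampled

print(fixed_step_sampling(list(range(10)), 3))        # -> [0, 3, 6, 9]
print(incremental_sampling(list(range(30)), 2, 2, 2)) # -> [0, 2, 6, 10, 18, 26]
```

Incremental sampling yields dense coverage early in the flow and sparse coverage later, which is what lets it capture both short- and long-range patterns.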
During data augmentation, we sampled each flow 100 times from the beginning of the flow when random sampling was used. However, starting from the beginning multiple times would yield the same set of packets under fixed step or incremental sampling. Hence, for those methods we started sampling at 100 different parts of a flow, if the flow was long enough.
Datasets
As explained earlier, our semi-supervised approach needs an unlabeled dataset for the pre-training stage and a labeled dataset for the re-training stage. In this paper, we conducted experiments with three datasets.

QUIC Dataset. This dataset was captured in our lab at UC Davis and contains 5 Google services: Google Drive, YouTube, Google Docs, Google Search, and Google Music [5]. We used several systems with various configurations, including Windows 7, 8, and 10 and Ubuntu 16.04 and 17 operating systems with the Chrome browser. We wrote several scripts using the Selenium WebDriver [18] and AutoIt [1] tools to mimic human behavior when capturing data. This allowed us to capture a large dataset without significant human effort; such an approach has been used in many other studies [3,8,16]. Furthermore, we also captured a few samples of real human interactions with the Google services to show how much the accuracy of a model trained on scripted samples degrades when it is tested on real human samples.
During preprocessing, we removed all non-QUIC traffic. We also ignored short flows, because sampled short flows do not contain enough packets to feed a classifier. In our evaluation, short flows are those with fewer than 100 packets before sampling. Note that all flows in our dataset are labeled, but we did not use the class labels during the pre-training step; we used them only to measure the accuracy gap between a fully supervised approach and our semi-supervised framework.
Unlabeled Waikato Dataset. The WAND network research group at the University of Waikato published several unlabeled traces from 2009 to 2013. One of the most recent, used in this paper, is Waikato VIII [6], captured at the border of the University of Waikato. The entire dataset is unlabeled, and it is not clear which traffic classes it contains. However, the dataset definitely does not contain QUIC traffic, because it was captured before the emergence of any practical implementation of the QUIC protocol. We used the Waikato dataset to pre-train a network that we later re-trained on the QUIC and Ariel datasets.
As with the QUIC dataset, all short flows were removed during preprocessing. Moreover, the dataset is extremely large; due to limited time and computational budget, we only used the traces of the first month, which is around 4% of the entire dataset.
Ariel Dataset. The Ariel dataset [4] was captured in a research lab at Ariel University over a period of two months. The original paper [13] used a fully supervised method to classify three categories of class labels: operating system, browser, and application. However, only the operating system and browser labels are available in the public dataset. As with the other datasets, we removed all short flows. In this paper, we only used a small portion of the dataset to re-train a pre-trained model and test our methodology.
EVALUATION

Implementation Detail
We implemented the CNN architecture in Python using PyTorch, on a server with an Intel Xeon W-2155 CPU and an Nvidia Titan Xp GPU running Ubuntu 16.04. The architecture of the CNN model has already been explained in Section 4.2.2. During pre-training, we trained the model with the Adam optimizer and an MSE loss function for 300 epochs, using 24 statistical features as regression targets: the minimum, maximum, average, and standard deviation of packet length and inter-arrival time, each computed over the forward, backward, and combined flow directions, for a total of 24 features. During the supervised re-training, we used the Adam optimizer with cross-entropy as the loss function.
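The 24 regression targets can be computed as sketched below; the function name and packet tuple layout are illustrative assumptions:

```python
from statistics import mean, pstdev

def flow_stats(packets):
    """packets: list of (length, inter_arrival, direction) with direction
    +1 (forward) or -1 (backward). Returns the 24 regression targets:
    min/max/mean/std of length and inter-arrival time over the forward,
    backward, and combined directions."""
    feats = []
    for select in ("fwd", "bwd", "both"):
        subset = [p for p in packets
                  if select == "both" or (select == "fwd") == (p[2] > 0)]
        for idx in (0, 1):  # 0 = length, 1 = inter-arrival time
            vals = [p[idx] for p in subset] or [0.0]  # guard empty direction
            feats += [min(vals), max(vals), mean(vals), pstdev(vals)]
    return feats  # 3 directions x 2 series x 4 statistics = 24 features
```

These are cheap to compute over the full (unsampled) flow, which is why they make convenient self-supervised targets.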
Performance Metrics
There are two categories of performance measures to evaluate a classifier: macro-average and micro-average metrics [17]. Whenever the accuracy of the entire model is reported, we used macro-averaging, where the accuracy is averaged over all classes. For per-class performance evaluation, we used micro-averaging metrics, including accuracy, precision, recall, and F1, as follows:

Accuracy = (TP + TN) / (TP + FP + TN + FN)
Precision = TP / (TP + FP)
Recall = TP / (TP + FN)
F1 = 2 × Precision × Recall / (Precision + Recall)

In the above equations, TP, FP, TN, and FN stand for True Positive, False Positive, True Negative, and False Negative, respectively.
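The per-class metrics can be computed directly from the four counts (a minimal sketch with zero-division guards):

```python
def per_class_metrics(tp, fp, tn, fn):
    """Per-class accuracy, precision, recall, and F1 from raw counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return accuracy, precision, recall, f1

print(per_class_metrics(8, 2, 88, 2))  # -> (0.96, 0.8, 0.8, 0.8)
```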
QUIC Dataset
We first pre-train our model on the QUIC dataset, consisting of 6439 flows, without using class labels. Recall that because we sample each flow up to 100 times, the total number of samples during training is 544,744. We use 24 statistical features calculated from the flows as regression targets. Then, we transfer the weights to a new model and re-train it with class labels. For this step, we take 30 flows for each class and split them between training and test sets with different numbers of flows; we also perform cross-validation. Moreover, we train the same model without transferring the weights to show the performance gap.
To find good parameters for the sampling methods, we set aside 30 flows for each class and conducted a greedy search, training the model with 20 labeled flows and testing with the other 10. This small amount of training data probably does not give us the optimal parameters, but we assume that only limited labeled data is available for any supervised training we conduct. (Our code is available at https://github.com/shrezaei/Semi-supervised-Learning-QUIC. Throughout the paper, we use "flow" to refer to an unsampled flow and "sampled flow" for the sampled case; a dataset with 30 flows per class thus contains up to 3000 sampled flows per class, because each unsampled flow is sampled multiple times.) The parameters we use for fixed, incremental, and random sampling are 22, (22, 1.6, 10), and 1/22, respectively, and we sample only 45 packets from the entire flow. Figure 3 presents the accuracy of our model with various settings. It shows that increasing the size of the training set improves the accuracy, except for random sampling. The reason is that for random sampling we always started from the beginning of the flow for each of the 100 samplings; the data augmentation with only 5 files therefore already gives the model enough data to capture the patterns at the beginning of the flow, and the accuracy barely increases with larger training sets. We conjecture that this is because the randomness imposed by random sampling makes it harder for the model to fit the true distribution. Fixed step and incremental sampling, in contrast, deal with sampled flows captured from different parts of flows. Therefore, if different parts of a flow show different patterns, data augmentation with these two methods does not necessarily give more samples for each pattern, and increasing the training size boosts the accuracy.
As shown in Figure 3, incremental sampling outperforms the other sampling methods. When random or fixed sampling is used, it is not easy to capture both long and short patterns; incremental sampling allows sampled flows to contain many packets in the short range and some packets in the long range. That is why incremental sampling outperforms the other methods when transfer learning is used. Furthermore, the figure clearly shows the efficacy of our transfer learning model with fixed step and incremental sampling, as expected: our method improves the accuracy by around 10% compared with a model without the pre-training step. Table 2 reports the accuracy of the CNN model when the entire labeled dataset is used in a supervised manner. In that case, we do not conduct the pre-training step, because the whole dataset is used to train the second model. This gives an upper bound on the accuracy of fully supervised learning when sampling is used. The accuracy of incremental sampling is close to this upper bound with only 20 flows per class used for re-training, whereas fixed and random sampling require a larger labeled training set. To see how much sampling degrades accuracy, we also trained a Random Forest (RF) classifier that takes statistical features as input and predicts the class labels. The accuracy of RF was 99.87%, which shows that sampling degrades performance by up to around 4% on our dataset. Note that we deliberately avoided deep models such as CNNs for this comparison: the input is a set of statistical features, which is not suitable for a shift-invariant model, and a fully connected neural network is unnecessarily complex to train on our relatively small dataset of around 6500 flows. When statistical features are used as input, the dataset cannot be augmented, so the total number of data points used to train the RF was around 100 times fewer than when training with sampled data.
In the second experiment, we study the effect of the number of sampled packets on the accuracy of the model, varying it from 30 to 75. We set the parameters of the fixed step sampling method so that it always covers a range of around 1000 packets. Interestingly, the accuracy drops as we increase the number of sampled packets, as shown in Figure 4. Increasing the number of sampled packets improves the accuracy of statistical feature prediction, but it is harder for the model to learn class labels with a larger input dimension because of the small training set. Figure 5 presents the performance metrics of our best model, that is, a model re-trained from a pre-trained model using 20 training flows with incremental sampling. The high metrics show that it is possible to train a good classifier with as few as 20 flows per class when our semi-supervised approach is used. It therefore dramatically reduces data collection and labeling, which are the most time-consuming and labor-intensive steps.

To study whether automatically generated (scripted) data represents human interaction, we captured 15 flows per class from real human interactions with the 5 Google services and used this dataset only to test the same model described above. Figure 6 illustrates the performance metrics of all classes. Interestingly, the accuracy for Google Search and Google Docs did not change significantly, whereas the accuracy for Google Drive, YouTube, and Google Music dropped by up to 7%. This depends on how much human interaction changes the traffic pattern, which is class-dependent. Moreover, there are some actions, such as renaming or moving files in Google Drive, that our scripts do not perform, so these patterns were not available during re-training. This shows the limitations of datasets and studies [3,8,16] that only use scripts to mimic human behavior.
The QUIC protocol was developed and deployed only recently; the Waikato dataset, captured before 2014, therefore cannot contain QUIC traffic. Our intuition is that a model pre-trained on the Waikato dataset can still be useful for the re-training step because it has essentially learned how to predict statistical features; it can be considered a naive statistical feature predictor based on sampled data. Table 3 presents the accuracy of our method when the Waikato dataset is used for pre-training. Note that the sampling parameters differ in this experiment: the Waikato dataset has many small flows that are not suitable for our previous parameters, which cover a range of around 1000 packets. Hence, we changed the sampling parameters to those used in the next section, which is why the accuracy even of the model without pre-training is lower than in the experiment shown in Figure 3. Table 3 clearly shows that the pre-trained model boosts accuracy even though it had not seen any traffic of the target task during pre-training, although the improvement is limited to less than 10% in this case.
Ariel Dataset
We conduct two additional experiments to evaluate the performance of our semi-supervised learning method on public datasets. In the first experiment, we pre-train a model with the unlabeled Waikato dataset to predict statistical features from sampled flows. Then, we re-train the model with 5 flows of each class in the Ariel dataset. We randomly select the 5 flows, use the remainder as a test set, and repeat the procedure 10 times. We only show the performance on the OS class labels due to lack of space. In the second experiment, we pre-train the model with both the Waikato and Ariel datasets and then re-train it as in the first experiment. When the two datasets are combined for pre-training, Ariel flows constitute only around 6% of the combined dataset; we deliberately leave the combined dataset imbalanced to mimic real scenarios where only a small portion of the unlabeled data contains the target labels. The parameters we use for fixed, incremental, and random sampling are 10, (8, 1.2, 10), and 0.15, respectively.
The accuracy of both experiments, as well as that without semi-supervised learning, is shown in Table 4. For brevity, we denote the Waikato and Ariel datasets by W and A, respectively. When there is no pre-training, random sampling shows the best accuracy; a similar trend was observed with our QUIC dataset in the previous section. Interestingly, there is a small but meaningful gap between the two experiments, which shows that even if the target classes make up only a small portion of the pre-training dataset, they can improve the accuracy. This is useful because the proportion of the target task's flows will probably be small in the real world, when data is captured from an operational network.
To measure the performance gap between semi-supervised and fully supervised learning, we also train the same CNN-based architecture on the entire Ariel dataset in a fully supervised manner. First, we train the CNN model with augmented sampled flows from the Ariel dataset using fixed step sampling. As shown in Figure 7, the accuracy is around 89%, which can be considered an upper bound when sampling is used. To compare how statistical feature prediction degrades the accuracy relative to using the true statistical features, we performed the following experiment: we fed the true statistical features directly to the last three layers of the model and removed all previous layers. However, training was extremely unstable, with high variance and low accuracy; several dense fully connected layers are too complex to be trained with a small dataset. Recall that with sampling we can sample a single flow multiple times, which increases the dataset size, whereas with true statistical features there is only one feature set per flow, leading to a small dataset. Therefore, we used K-Nearest Neighbors (KNN) for fully supervised training with statistical features. (Note that we did not perform cross-validation in the Ariel experiments: with each training fold limited to only 5 flows, the total number of cross-validation rounds would be too large to evaluate in practice.)
The performance gap is shown in Figure 7. We only show results for fixed step sampling; the other sampling methods show a similar trend as the number of flows for supervised re-training increases. Interestingly, if the pre-training data contains the target class labels, the model can reach the upper-bound accuracy (fully supervised with sampled flows) with only around 30 labeled flows per class. This is dramatically smaller than the datasets typically used for fully supervised methods in the literature. Moreover, even if the unlabeled dataset does not contain the target task's flows, the pre-trained model can act as a general approximator of statistical features, because it has already observed a large number of samples during the pre-training step.
DISCUSSION
Typically, network traffic classifiers use one or a combination of the following features [14]: statistical features, time series, and header/payload contents. The choice of input features depends on many factors, which are comprehensively explained in [14]. During the pre-training step, our approach takes sampled time-series features as input and outputs statistical features. Hence, our approach does not work on datasets for which time-series or statistical features are not sufficient for classification. For instance, we also conducted an experiment on the ISCX dataset [7], for which the accuracy of models based on statistical and time-series features was reported to be around 80% [7]. Our approach failed to produce a model with more than 68% accuracy when only 20 flows per class were used during the supervised re-training step, whereas a CNN model using payload information achieved above 95% accuracy on the same dataset with fully supervised learning [12].
During the pre-training step, the model will probably see most possible traffic patterns, even patterns that are not similar to any of the target classes. However, it is possible that during the supervised re-training step some distinctive patterns are missing from the labeled dataset, which can degrade the accuracy dramatically if the model is deployed in a real environment. Hence, the small labeled dataset should be captured carefully. For instance, it has been shown that user actions in certain Android applications, such as Facebook or Twitter, can be identified using time-series features [3]. These actions include sending a message, posting a status, opening the application, etc. This means that if the target classes are Android applications, one should ensure that every action is included in the small labeled dataset at least once, because each action has a different pattern. This is similar to our experiment testing the model on human-triggered data, where we found that some actions in some Google services, such as renaming a file in Google Drive, did not exist in our training set.
We show that incremental sampling outperforms the other sampling methods. However, it should be used with caution. Recall that we use a CNN model for classification, which slides a set of kernels (or filters) over the entire input. In incremental sampling, the step l is multiplied by α after every β sampled packets. If one chooses a large value for α, the distance between the first few sampled packets is significantly smaller than between the last few. In that case, a CNN is not suitable, because the same filter that is supposed to capture a pattern among closely spaced samples is also applied to the last few samples, which are far apart. Hence, when using incremental sampling one should not use a large α; otherwise, the CNN model will fail to work properly.
CONCLUSION
In this paper, we propose a semi-supervised learning method that significantly reduces the amount of labeled data needed for network traffic classification. We use a 1-D CNN model that takes sampled time-series features as input; the method is therefore also suitable for scenarios in which only sampled data is available, such as on high-bandwidth links. In the pre-training step, the model is trained to predict statistical features of the entire flow, which requires no human labeling effort. The main idea is to pre-train a model on a large unlabeled dataset, then transfer the learned parameters to a new model and re-train it with a very small labeled dataset. We captured traffic from 5 Google services that use the QUIC protocol to evaluate our model and show that, with the proposed semi-supervised approach and 20 labeled flows per class, the model achieves accuracy close to that of a model trained on a large labeled dataset with certain sampling methods. We evaluate 3 sampling methods: fixed step, random, and incremental sampling.
We also conduct experiments on public datasets to show the generalizability of the proposed approach, using the publicly available Waikato and Ariel datasets. We show that a model pre-trained on the Waikato dataset can improve the accuracy on the Ariel dataset when the pre-trained model is transferred. This suggests that a model pre-trained with our approach on a large unlabeled dataset can serve as a general traffic classifier whose transferred weights can improve the accuracy of most traffic classification tasks. In addition, while pre-training can take a relatively long time, re-training is rather fast. These evaluations demonstrate the efficacy of the proposed approach.
Anterior substitutional urethroplasty using a biomimetic poly-l-lactide nanofiber membrane: Preclinical and clinical outcomes
Abstract

The aim of this study was to investigate the feasibility and efficacy of a novel biomimetic poly-l-lactide (PLLA) nanofiber membrane in repairing anterior urethral strictures, from preclinical studies through clinical application. The biomimetic PLLA membrane was fabricated layer by layer according to the structure of the human extracellular matrix. Its microstructure, tensile strength, and suture retention strength were fully assessed. Before clinical application, safety and toxicology tests of the biomimetic PLLA membrane were performed in vitro and in experimental animals. Patients underwent urethroplasty using a dorsal onlay or lateral onlay technique and were followed up at 1 month, 3 months, and 6 months, and then annually after surgery. The mechanical experiments showed properties well suited for application, and the membrane proved safe in the in vitro and animal studies. A total of 25 patients (mean age 48.96 years) were included in the study from September 2016 to December 2018. After a mean follow-up of 33.56 months, 20 patients had been successfully treated with the biomimetic PLLA membrane; five patients (2 bulbar and 3 penile) suffered postoperative urethral stricture recurrence. No infection, urinary fistula, or other adverse events related to the use of the biomimetic PLLA membrane were observed during the follow-up period. These preliminary results confirm the feasibility and efficacy of the biomimetic PLLA membrane as a novel material for anterior urethral repair; the long-term effects should be investigated in further studies with more patients.
| INTRODUCTION
Anterior urethral strictures can occur for various reasons. In China, trauma accounts for the majority of urethral strictures, and the incidence of strictures caused by iatrogenic injury has increased. 1,2 The management of complex, long-segment urethral strictures is currently one of the most challenging problems for reconstructive urologists.
The most frequently used method for this purpose is substitution urethroplasty using autologous tissues. Various autologous tissues, such as penile skin flaps, bladder mucosa, and oral mucosa, have been proposed for substitution urethroplasty. 3,4 However, harvesting these autologous grafts inevitably causes lesions at the donor site, and their availability is limited. The use of acellular matrices has provided new options for urethral repair, such as bladder acellular matrix grafts, 5,6 small intestinal submucosa, 7 and urethral extracellular matrix (ECM). 8 After implantation, the acellular matrix acts as a scaffold for cell growth and tissue regeneration and provides a suitable microenvironment for cell growth; it gradually degrades and is eventually replaced by new tissue. However, the supply of human- or animal-derived acellular matrix material is relatively limited, and the potential ethical, disease-transmission, and immunological risks have raised concerns.
Poly-l-lactide (PLLA) is a synthetic polymer approved by the European Union and the FDA for medical use. Its good biocompatibility and its controllable mechanical properties, degradation rate, and topological microstructure make PLLA an extraordinarily promising synthetic biomaterial for tissue regeneration. Various manufacturing technologies have been applied to prepare PLLA scaffolds. However, there is a difference between the microstructure of human tissue and that of traditionally processed PLLA membranes, such as woven, cast, or hot-pressed membranes, which have relatively poorer elasticity, flexibility, cell adhesion, and cell infiltration. The PLLA membrane used in the present study was prepared via additive manufacturing and is composed of biomimetic PLLA fibers that resemble the human ECM. This biomimetic ECM-like structure has shown good regenerative capacity in tissue repair, and the membrane has been preliminarily applied in dura mater repair and complex wound healing. [9][10][11] However, few bioactive materials have ultimately reached clinical application, especially in the field of urethral reconstruction. In the present study, we aimed to investigate the preclinical and clinical outcomes of this novel biomimetic PLLA fiber graft in anterior substitutional urethroplasty.
| Materials and characterization
The biomimetic PLLA nanofiber membrane used in this study was provided by Medprin Regenerative Medical Technologies Co. Ltd. (Guangzhou, China) and was manufactured through additive manufacturing technology as described in a previous report. 10 Briefly, bioresorbable PLLA fibers were fabricated and deposited layer by layer to form a fiber-structured membrane resembling the extracellular matrix of human soft tissue.
The microstructure of the nanofiber membrane was observed using a scanning electron microscope (SEM, JEOL JSM-5600LV, Japan). The membrane was cut into small pieces, attached to conductive carbon tape, and sputter-coated with gold prior to SEM observation. The thickness of the membrane was measured with a commercial hand-held thickness gauge. Pore size and pore size distribution were measured using an automatic mercury porosimeter (MA-3000, NIPPON, Japan). The tensile strength and suture retention strength were evaluated with a universal material testing machine (Instron 5567, Instron, USA) to characterize the mechanical properties of the membrane. Tensile strength was tested according to the ISO 527-3 standard. Membranes were soaked in distilled water for at least 1 min to fully hydrate and then cut into strips with a length of 6 cm and a width of
| Cell experiments
Human urothelial cells (HUCs, Sciencell, Cat. 4320) were used to evaluate the effect of the nanofiber membrane on cell viability, attachment, and morphology. The HUCs were cultured in cell culture dishes in a humidified incubator (37 °C, 5% CO2). Urothelial cell medium (HUM, Sciencell, Cat. 4321) was used for culture and refreshed every 2 days. Cells at passage 4 were used for the cytotoxicity and attachment experiments.
The sterile membranes were cut into round pieces 1 cm in diameter and placed into 48-well plates. HUCs in 50 μl of HUM were seeded onto the samples at a density of 2 × 10^4 cells per well. Four hours later, 500 μl of HUM was added to each well.
Cytotoxicity was evaluated using a Cell Counting Kit-8 (CCK-8; Dojindo, Japan) after HUCs had been seeded on the samples and cultivated for 24 h, following the manufacturer's instructions. CCK-8 working solution was added to each well to replace the HUM and incubated at 37 °C for 1 h. An enzyme-linked immunosorbent assay (ELISA) plate reader (Thermo Scientific, Thermo3001, USA) was used to read the absorbance of the CCK-8 reaction solution. Six replicates were prepared for the assay, and HUCs seeded directly onto the plate surface served as the control group.
Cell attachment and morphology were observed by SEM after HUCs had been cultured on the membranes for 4, 8, 24, and 48 h. At each time point, cells on the membranes were washed twice with PBS and fixed with 4% paraformaldehyde for at least 4 h. After dehydration in a graded ethanol series and drying, cell morphology was viewed via SEM.
| Animal experiments
A total of 12 male New Zealand white rabbits (weighing 2.0-3.0 kg) were divided into two groups: six rabbits in the experimental group were treated with the biomimetic PLLA membrane, and six served as controls. All surgeries were performed by the same surgeons using procedures described previously. 12
| Follow-up
The patients were followed up at 1 month, 3 months, and 6 months, and then annually. Clinical evaluations included uroflowmetry or a voiding function report. Urethrography or urethroscopy was performed if the maximum urine flow rate was persistently lower than 15 ml/s, or selectively before removal of the suprapubic catheter in some patients. Success was defined as a maximum flow rate of >15 ml/s without the need for further surgical interventions, such as dilatation or optical urethrotomy.
Stricture recurrence was defined as a recurrent symptomatic stricture requiring further operative intervention following the initial intervention.

Figure 8: Representative hematoxylin and eosin (HE) staining, Masson trichrome staining, AE1/AE3 (expression in the urothelium) immunohistochemical staining, and CD31 (expression in the vascular endothelium) immunohistochemical staining at 3 months after implantation in the two groups. The red arrows indicate surgical sites; the red triangles indicate blood vessels.

retentions of the scaffolds were 11.8% and 1.66 N in the dry state and 54.5% and 2.26 N in the wet state (Figure 4c). The suture retention strength of the biomimetic PLLA membrane was greater than or close to 2.0 N in the wet state, which is generally accepted for implantation cases where suturing of tissue is required. 13
| Cell experiments
The OD values from the cytotoxicity experiment, analyzed using the CCK-8 kit, are shown in Figure 5. There was no significant difference between the nanofiber membrane sample and the control group; the relative OD value of the sample versus the control was 92.2%, indicating that the cytotoxicity level of the nanofiber membrane toward human urothelial cells was zero.
The adhesion and distribution of HUCs after being seeded on the nanofiber membranes for 4, 8, 24, and 48 h are shown in Figure 6. Cell-cell junctions had formed after 48 h of culture, and the HUCs migrated and proliferated at a relatively high rate on the samples.
| Clinical application
Twenty-five patients were included in the study. The mean age of the patients was 48.96 years (range, 19 to 83); Table 1 summarizes their characteristics. Several studies have reported the clinical application of tissue-engineered materials in substitution urethroplasty. In 2011, a tissue-engineered autologous urethra was reportedly used in patients with urethral defects. 16 Muscle and epithelial cells were harvested from the patients, expanded, and seeded onto tubularized PGA scaffolds.
After a mean follow-up of 71 months, none of the patients suffered recurrence. This is a promising approach to urethral reconstruction. In 2018, a tissue-engineered oral mucosa graft named MukoCell ® was applied in anterior urethroplasty. 17 It also had an overall high success rate. Unlike these studies, we fabricated the biomimetic PLLA nanofiber membrane via additive manufacturing technology, mimicking the human ECM structure. By avoiding harvest-site morbidity, prolonged tissue culture, and biosafety problems, synthetic materials provide a valid alternative to tissue-engineered ones owing to their lower costs (up to 10 times less), wider availability, and fewer ethical concerns.
Moreover, the pore size, construction, and absorbability can be adjusted on demand, making it a good graft choice for urethral substitution. Meanwhile, several studies have also shown that the ultrastructure and 3D architecture of the collagen fibers of the acellular matrix are important in modulating the ability of cells to migrate into the scaffold and in influencing tissue-specific cell phenotype. 18,19 Thus, the exploration and development of a new urethral substitution material that can mimic the structure of an acellular matrix is urgently needed in the clinic.
Preparing biomimetic scaffolds with additive nanomanufacturing technology has become realistic with the rapid development of the field in recent years. The biomimetic PLLA nanofiber membrane is fabricated sequentially in layers to form a 3D structure. The cell experiments showed high biocompatibility (Figures 5 and 6), consistent with previous studies. 9,10 Compared with the acellular matrix, the biomimetic 3D structure with a highly adjustable porous network can facilitate the passage and exchange of nutrients and gases, which are important for cellular growth and tissue regeneration. 20 A sufficient blood supply promotes infiltration by surrounding cells and material degradation. In the animal experiment, the biomimetic PLLA membrane was used for bladder substitution. The bladder had a better blood supply: numerous red blood cells and a few white blood cells filled the spaces of the biomimetic PLLA membrane ( Figure S3). This phenomenon might also account for the quick degradation of the biomimetic PLLA membrane in the animal model. In humans, by contrast, the spongiosum of the bulbar urethra is stronger and thicker than that of the penile urethra and carries a richer blood supply. After opening the narrowed urethral lumen, the residual urethral plate of the bulbar urethra normally appears much healthier than that of the penile urethra. This anatomical difference leads to an insufficient blood supply in the penile urethra, which might be one reason why more urethroplasties fail in penile urethral strictures. The degree of spongiofibrosis could be another factor influencing the blood supply. As our center is one of the largest tertiary referral urethral reconstruction centers in China, some patients had already received several interventions before presenting to us; these prior interventions could contribute substantially to the degree of spongiofibrosis. Tissue regeneration depends mainly on the nutrition supply and inosculation of the urethral plate. 22 Thus, a spongiofibrotic segment of the urethra may not be able to support the regeneration of urethral epithelial cells inside the lumen, which might be an important reason for the failed urethroplasties with the biomimetic PLLA membrane.
As for polymer materials, PLLA has been confirmed to degrade in vivo. Owing to ethical concerns and the protection of patients' rights, biopsies and extra examinations were not performed to analyze the degradation state. However, in the animal experiment, we found that the biomimetic PLLA membrane had vanished from the rabbit bladder within 2 weeks, and the fixed area showed no significant difference on gross observation ( Figure S2). In addition, the stress-strain curves of the postoperative bladder were similar to those of the normal bladder ( Figure S5). In this study, the biomimetic PLLA membrane was used for substitution urethroplasty and also appeared promising for urethral reconstruction. However, this study had some limitations. The sample size was not large enough to identify statistically significant differences between successful and failed cases. For safety reasons, we used the biomimetic PLLA membrane only in patients with non-obliterated urethral stricture, which possess a well-vascularized urethral bed. Thus, the indications for substitution urethroplasty using the biomimetic PLLA membrane should be further investigated. Moreover, this finding requires confirmation in an adequately powered prospective randomized controlled trial with long-term follow-up.
| CONCLUSIONS
This study showed that the biomimetic PLLA membrane is a feasible and effective novel material for anterior urethral repair.
Urethral reconstruction using the biomimetic PLLA membrane should be considered carefully and only with proper indications, including the stricture location, the thickness of the scar, and the diameter of the remaining urethral lumen. Moreover, the long-term effects should be investigated in further studies with more patients.
DATA AVAILABILITY STATEMENT
The data that support the findings of this study are available from the corresponding author upon reasonable request.
"year": 2022,
"sha1": "34d79287e388f803af7806b338d32d8272779410",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1002/btm2.10308",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "602b5c50d33d643ec1637580f8803c848b5f0f3e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Percutaneous ethanol injection therapy is comparable to radiofrequency ablation in hepatocellular carcinoma smaller than 1.5 cm
Supplemental Digital Content is available in the text
Introduction
Hepatocellular carcinoma (HCC) is one of the most common causes of cancer-related morbidity and mortality. [1] Although resection offers a potential cure, most patients with HCC are not eligible for it. [2] For some unresectable early-stage HCCs, a variety of loco-regional therapies have been widely applied, including microwave coagulation therapy, [3] percutaneous acetic acid injection, [4] cryoablation therapy, [5] laser interstitial thermal ablation therapy, [6] radiofrequency ablation (RFA), and percutaneous ethanol injection therapy (PEIT). [7][8][9] Among these, PEIT had been widely applied as a standard treatment for small HCC (necrosis rate of 90%-100% in HCC ≤2.0 cm and 70% in tumors >2.0 and ≤3.0 cm). [10,11] PEIT usually required multiple treatment sessions to achieve complete necrosis in tumors >3.0 cm owing to the presence of intratumoral septa. [11] Although the initial response rate is high, development of viable intratumoral nests or distant recurrence after PEIT is almost inevitable during follow-up. In this context, PEIT has been replaced by more effective thermal ablation techniques such as RFA in many centers.
RFA was first used for the treatment of HCC in 1999. [12] Its advantages over PEIT include ease of performance, effectiveness similar to that of surgical resection, high safety, and low invasiveness. [13] RFA has been reported to provide similar efficacy in tumors <2.0 cm and a better local control rate than PEIT in tumors >2.0 cm. [11] However, a recent study showed that RFA had better survival benefits than PEIT regardless of whether tumors were >2.0 cm or ≤2.0 cm. [14] Although a systematic review of randomized trials comparing percutaneous ablation therapies demonstrated that RFA significantly improved the 3-year survival rate over PEIT, [15][16][17][18] most trials performed in Europe have shown no significant differences between RFA and PEIT regarding overall survival (OS). [19][20][21][22][23] The survival benefit of RFA over PEIT therefore remains controversial. Moreover, compared with PEIT, RFA has potential shortcomings such as a lower application rate depending on tumor location and a higher rate of major adverse events. [15] This cohort study aimed to compare the survival outcomes of RFA and PEIT in patients with early-stage HCC.
Patients
The study protocol was approved by the institutional review board of Seoul National University Hospital, and informed consent was waived owing to the retrospective design of this study. The study population comprised 535 consecutive patients who were diagnosed with early-stage (stage 0 or A) HCC in accordance with the Barcelona Clinic Liver Cancer (BCLC) staging system [11,24] and who underwent RFA or PEIT as an initial treatment between January 2005 and December 2010 at Seoul National University Hospital.
The baseline information collected at the time of HCC diagnosis included demographic profiles, etiology of chronic liver disease, laboratory findings, longest tumor diameters, severity and complications of liver cirrhosis, and treatment modality. All patients were informed about the details of RFA and PEIT. The initial treatment modality was selected according to the physician's advice and the patient's preference. The exclusion criteria of this study were as follows: longest tumor diameter >3.0 cm, presence of extrahepatic metastasis or vascular invasion on imaging studies performed before the procedure, Child-Pugh class C liver cirrhosis, or a history of malignancy other than HCC within the previous 5 years.
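The exclusion criteria can be expressed as a simple eligibility check (a sketch; function and argument names are illustrative):

```python
def eligible(longest_tumor_cm, metastasis_or_vascular_invasion,
             child_pugh_class, other_malignancy_within_5y):
    """Return True if a patient passes the study's exclusion criteria."""
    return not (longest_tumor_cm > 3.0
                or metastasis_or_vascular_invasion
                or child_pugh_class == "C"
                or other_malignancy_within_5y)
```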
HCC was diagnosed by imaging modalities with or without alpha-fetoprotein (AFP) level results, or by pathology, based on the guidelines of the European Association for the Study of the Liver or the American Association for the Study of Liver Diseases (AASLD). [11,24] The degree of liver dysfunction was estimated based on the Child-Pugh classification. Portal hypertension was assumed to be present when the platelet count was <100,000/mm³ and accompanying splenomegaly or esophageal/gastric varices were detected. [25] The longest tumor diameters and tumor responses were measured on dynamic computed tomography (CT) and/or magnetic resonance (MR) imaging by an experienced liver radiologist blinded to the survival information of the study patients. Real-time US with a 3.5-MHz probe (IU22; Philips, Cleveland, OH) was used as the first-line guidance modality in the RFA group. [26] Immediately after the RFA procedures, contrast-enhanced multiphase liver CT was performed in the RFA group. These images were compared with those acquired before the RFA procedures to assess ablation success. Nodular areas of hypo-attenuation without contrast enhancement were considered to represent necrotic or treated tissue. [27] According to the immediate CT results, the response to RFA was classified as complete or incomplete ablation. [26] In cases of incomplete ablation, another session of RFA was performed immediately after CT to accomplish complete ablation on the same day. Primary technical success was defined as complete ablation of the target tumor, and secondary technical success was defined as achievement of complete ablation of the target tumor after repeat ablation. [27] Major and minor adverse events were assessed based on the Society of Interventional Radiology guidelines. [28]

PEIT procedures

PEIT was administered to each patient (1-3 sessions weekly) by 1-2 injections of 99.5% sterile ethanol (1.6-73.9 mL, mean 17.6 ± 16.7 mL) delivered to each lesion with a 21-gauge multiple-side-hole needle (Ethanoject, TSK, Tokyo, Japan) by 2 physicians (J.H.Y. and Y.J.K., who had 17 and 13 years of clinical experience in performing PEIT, respectively), depending on the size of the tumor and the distribution of the injected ethyl alcohol within the tumor. [29][30][31]
Assessment of treatment response
Follow-up examinations, including contrast-enhanced multiphase liver CT or MR imaging, measurement of serum AFP levels, and liver function tests, were performed in all patients 1 month after treatment. According to the 1-month follow-up CT or MR imaging results, the technical effectiveness of the RFA or PEIT procedures was assessed for each patient based on the standardized terminology of the Interventional Working Group on Image-Guided Tumor Ablation. [32] When persistent enhancing foci of tissue were observed at the site of the original lesion at 1-month follow-up, it was considered treatment failure. [32] If remnant or new HCCs were detected, a multidisciplinary approach, which included repeated loco-regional treatments, hepatic resection, liver transplantation, and TACE, was applied. Complete ablation observed on 1-month follow-up images was regarded as treatment success. In cases of treatment success, contrast-enhanced multiphase liver CT and/or MR imaging and the serum AFP level were followed up every 3 months.
OS was estimated from the date of the first treatment to the date of death or last contact. Time to progression (TTP) was measured from the date of treatment until the first documented tumor progression on imaging studies, according to the modified Response Evaluation Criteria in Solid Tumors (mRECIST), [33] by independent radiologic assessment.
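The two endpoints can be computed directly from dates (a sketch; the dates in the test are illustrative):

```python
from datetime import date

def overall_survival_days(first_treatment: date, death_or_last_contact: date) -> int:
    """OS: date of first treatment to date of death or last contact."""
    return (death_or_last_contact - first_treatment).days

def time_to_progression_days(first_treatment: date, first_progression: date) -> int:
    """TTP: date of treatment to first documented progression on imaging."""
    return (first_progression - first_treatment).days
```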
Statistical analysis
Conventional clinical characteristics at the time of enrollment were analyzed to identify predictors that influenced survival, as determined by the Kaplan-Meier method and compared using the log-rank test. Stepwise multivariate analysis was performed with the Cox proportional hazards model to identify independent risk factors.

Yu et al. Medicine (2016)
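The Kaplan-Meier estimate named above can be sketched in a few lines of pure Python (illustrative only; the study will have used standard statistical software):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate.  times: follow-up times;
    events: 1 = death observed, 0 = censored.
    Returns [(t, S(t))] at each time where a death occurred."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    s, curve, i = 1.0, [], 0
    while i < len(data):
        t, deaths, removed = data[i][0], 0, 0
        while i < len(data) and data[i][0] == t:
            deaths += data[i][1]
            removed += 1
            i += 1
        if deaths:
            s *= 1.0 - deaths / n_at_risk
            curve.append((t, s))
        n_at_risk -= removed
    return curve
```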
Results
A total of 535 patients newly diagnosed with HCC were included in this study; patient characteristics are summarized in Table S1, http://links.lww.com/MD/B186. Two hundred eighty-eight (288/535, 53.8%) patients underwent RFA and 247/535 (46.2%) underwent PEIT as the initial treatment. The RFA group had a significantly larger longest tumor size and lower platelet counts (P < 0.001 and P = 0.015, respectively) than the PEIT group. However, the PEIT group had significantly poorer hepatic function (higher model for end-stage liver disease score) (P = 0.013) than the RFA group. The 2 groups were similar in age, sex, etiologies of chronic liver disease, serum transaminase levels, prothrombin time, serum total bilirubin level, serum albumin level, Child-Pugh scores, AFP levels, prothrombin induced by vitamin K absence-II level, tumor number, and portal hypertension. The survival rates of the small tumor group (longest tumor diameter <1.5 cm) and the large tumor group (longest tumor diameter >1.5 and ≤3.0 cm) were significantly different (P = 0.014) (Fig. 1A).
Survival analyses after propensity score matching
One-to-one propensity score matching was performed to minimize confounding factors in survival analyses. A total of 175 patients from each group were matched, and the demographic and baseline clinical characteristics were well balanced between the 2 groups ( Table 1).
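One-to-one propensity matching of this kind can be sketched as a greedy nearest-neighbour pairing on propensity scores (the caliper and the greedy algorithm here are assumptions; the study does not report its matching details):

```python
def greedy_match(treated, controls, caliper=0.05):
    """One-to-one greedy nearest-neighbour matching on propensity scores.
    treated/controls: dicts mapping patient id -> propensity score."""
    pairs, used = [], set()
    for t_id, t_ps in sorted(treated.items(), key=lambda kv: kv[1]):
        best, best_d = None, caliper
        for c_id, c_ps in controls.items():
            if c_id in used:
                continue
            d = abs(c_ps - t_ps)
            if d <= best_d:
                best, best_d = c_id, d
        if best is not None:
            used.add(best)
            pairs.append((t_id, best))
    return pairs
```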
Survival analyses according to the longest diameter of tumors in propensity-matched cohort
Since the longest tumor diameter was an independent prognosticator both in all study patients and in the propensity-matched cohort, we next performed survival analyses according to the longest tumor diameter in our propensity-matched cohort. In patients with a longest tumor diameter ≤1.5 cm, multivariate Cox regression analysis demonstrated that age was an independent predictive factor associated with poor survival (HR, 1.051; 95% CI: 1.010-1.093; P = 0.015), but the treatment modality was not significantly associated with survival (P = 0.149) (Fig. 3A). The treatment modality was also not significantly associated with survival in patients with a longest tumor diameter >1.5 and ≤3.0 cm (P = 0.850) (Fig. 3B), and multivariate Cox regression analysis demonstrated that age (HR, 1.054; 95% CI, 1.011-1.099; P = 0.013), sex (HR, 2.255; 95% CI, 1.136-4.478; P = 0.020), and baseline serum AFP level (HR, 1.795; 95% CI: 1.263-2.550; P = 0.001) were independent predictors associated with poor survival (Table 4).
Major complications
There were no procedure-related deaths in either group. Major complications were observed in 5 RFA group patients (2.9%): bile duct injury (n = 1), colon perforation (n = 1), and stomach wall burn (n = 3). One RFA group patient experienced cholangitis associated with a treatment-related bile duct injury and fully recovered after a course of broad-spectrum antibiotics. One patient had colon injury adjacent to the ablation zone, as well as abscess formation, both of which were treated with colon resection; after surgery, the patient recovered completely. Three patients experienced stomach wall burns after RFA treatment and fully recovered with conservative management.
Discussion
In this study of patients with early-stage HCC (BCLC 0 & A), PEIT showed a survival benefit similar to that of RFA both in all study patients and in the propensity score-matched cohort. In particular, in patients with a longest tumor diameter ≤1.5 cm, clinical outcomes including OS and time to progression were not significantly different between the PEIT group and the RFA group. Although RFA was superior to PEIT with respect to time to progression in patients with a longest tumor diameter >1.5 and ≤3.0 cm, OS was not significantly different between the 2 groups. To the best of our knowledge, this is the first study demonstrating that both PEIT and RFA provide excellent and comparable survival in patients with a longest tumor diameter <1.5 cm.
Tumor size is an important factor in the selection of local therapy. AASLD guidelines state that RFA is more effective than PEIT for HCC >2.0 cm, but that the efficacy of PEIT and RFA may be equal in treating tumors ≤2.0 cm. [11] For patients with tumors >2.0 cm, previous studies have consistently shown that the survival benefit of RFA surpasses that of PEIT. [16][17][18]34] However, for patients with tumors ≤2.0 cm, there are still controversies about the survival advantage of RFA. [14,35] In this context, the therapeutic efficacy of PEIT is mainly dependent on tumor size. [36] Strictly speaking, tumor biology differs significantly between tumors ≤1.5 and 1.6-2.0 cm in diameter. [30,37] Pathologic findings in 106 resected HCCs <2.0 cm in diameter have demonstrated local intrahepatic metastases (located ≤1.0 cm from the initial tumor) and microscopic portal vein invasion among the most frequently occurring tumors (the so-called distinctly nodular type). [30] The frequency of portal vein invasion has been reported to be significantly higher in HCC 1.6-2.0 cm in diameter (40%) than in HCC 1.1-1.5 cm in diameter (25%, P < 0.01). [30,36] Therefore, we performed survival analyses comparing the efficacy of PEIT and RFA using a reference tumor size of 1.5 cm in this study. Two previous studies have considered a tumor size of 1.5 cm a prognostic factor. [30,38] In one small single-arm study (n = 31), the authors argued that PEIT was best indicated for patients with HCC <1.5 cm, but they did not compare the survival benefit of PEIT with that of RFA. [30] Another small study (n = 23) demonstrated a difference in the incidence of local recurrence of HCC <1.5 cm between patients treated with RFA and those treated with PEIT. [38] In that study, the authors argued that PEIT was more effective than RFA in terms of the period between treatment and recurrence, but they did not perform survival analysis. [38] Therefore, our study had several strengths over previous studies, including the large number of patients with HCC (535 patients) and the survival analyses, including OS and time to progression, performed both in all study patients and in the propensity score-matched cohort. Indeed, the effectiveness of PEIT-induced tumor ablation is less predictable than RFA-induced ablation, especially in large HCCs, owing to the inhomogeneous diffusion of ethanol within the nodule caused by fibrous septa, and to the better effectiveness of thermal ablation in treating extracapsular invasion or satellitosis. [35] However, in our propensity score-matched cohort, recurrence after PEIT did not negatively affect survival in patients with a longest tumor diameter >1.5 and ≤3.0 cm. This finding may be related to the timely and effective treatment of locally recurrent tumors. In addition, it should not be overlooked that most patients (91.2%, 488/535) had portal hypertension and that progression of HCC was not the only cause of death among the study patients; death was also caused by hepatic failure without HCC progression or by extrahepatic comorbidities. [39] Therefore, PEIT can be regarded as an effective alternative to RFA in patients with HCCs >1.5 cm in cases of severely impaired clotting parameters, or when the HCC nodule is located superficially close to the abdominal wall, in a site that would be dangerous for thermal ablation (such as near the gallbladder, major bile ducts, or bowel loops), or in a site that would decrease the effectiveness of RFA-induced thermal ablation (such as close to large intrahepatic vessels).

Table 2: Factors identified on univariate and multivariate analyses that affect overall survival in HCC patients undergoing RFA or PEIT after matching by propensity score.
This study has some limitations. First, it is a single-center retrospective study. Second, the prolonged enrollment period could be an additional source of bias, leading to time-related variability in pretreatment staging and post-treatment effectiveness assessment. Lastly, a rigorous cost-effectiveness analysis of the best therapeutic approach for this subgroup of patients was not performed.
In conclusion, PEIT and RFA are equally effective for treating HCCs <1.5 cm in terms of OS and time to progression, but cumulative HCC recurrence is significantly higher in patients with a longest tumor diameter >1.5 and ≤3.0 cm who undergo PEIT. Therefore, RFA should be considered the standard treatment, whereas PEIT can be reserved as an effective alternative option for patients with a longest tumor diameter <1.5 cm.

Table 5: Factors identified on multivariate analyses according to maximal tumor size that affect time to progression in HCC patients undergoing RFA or PEIT after matching by propensity score.
"year": 2016,
"sha1": "01d1a8386ed11f45749203b3e686abb55398e8e4",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1097/md.0000000000004551",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "01d1a8386ed11f45749203b3e686abb55398e8e4",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Known phyla dominate the RNA virome in Tara Oceans data, not new phyla as claimed
The Tara Oceans project claimed to identify five new RNA virus phyla, two of which “are dominant in the oceans”. However, their own assignments classify 28,353 of their putative RdRp-containing contigs to known phyla but only 886 (2.8%) to the five claimed new phyla combined. I mapped their reads to their contigs, finding that known phyla also account for a large majority (93.8%) of reads. I show that 510 of their putative viral contigs contain cellular proteins and further that predicted polymerase structures for their claimed phyla have incomplete and malformed palm domains, contradicting viral polymerase function and calling into question whether their putative polymerase sequences are derived from viruses.
Introduction
In 2017, fewer than 5,000 RNA virus species were known (Wolf et al., 2018). Since then, high-throughput metagenomics has dramatically expanded the known RNA virome with studies such as Yangshan Deep-Water Harbour (Wolf et al., 2020), Serratus (Edgar et al., 2022) and Tara Oceans (Zayed et al., 2022) collectively adding more than 140,000 new species. These new viruses are classified primarily on the basis of their RNA-dependent RNA polymerase (RdRp) gene sequences, with clusters at 90% sequence identity (viral Operational Taxonomic Units, vOTUs) serving as proxies for species.
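Clustering RdRp sequences into vOTUs at 90% identity can be illustrated with a toy greedy centroid scheme (a sketch only: difflib's ratio is a crude stand-in for a real aligner, and production pipelines use dedicated clustering tools):

```python
import difflib

def identity(a, b):
    """Crude pairwise identity via difflib's ratio -- adequate only
    for illustration, not a substitute for a sequence aligner."""
    return difflib.SequenceMatcher(None, a, b).ratio()

def greedy_votu_cluster(seqs, threshold=0.90):
    """Greedy centroid clustering: each sequence joins the first existing
    centroid it matches at >= threshold identity, else founds a new vOTU."""
    centroids = []  # list of (centroid_sequence, member_list)
    for s in seqs:
        for c_seq, members in centroids:
            if identity(s, c_seq) >= threshold:
                members.append(s)
                break
        else:
            centroids.append((s, [s]))
    return centroids
```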
Claims made by the Tara study
Tara claimed to identify five novel RNA virus phyla from sequence analysis of this data, with no corroborating evidence from traditional methods such as isolated or cultured viruses or electron microscopy of virions. According to their Abstract, "'[s]pecies'-rank abundance determination revealed that viruses of the new phyla 'Taraviricota' ... and 'Arctiviricota' are widespread and dominant in the oceans". See my
Missing supporting data
Given the claim that novel viruses dominate the oceans by abundance, I would expect to see a table giving vOTU abundances per sample, but no such table is provided. Further, sufficient data and code required to independently check their abundance calculations are not provided in the supplementary materials or associated data repositories. Assemblies are deposited, but metadata associating a contig with its sequencing run is provided only for a small subset (35%) of contigs. Abundances per sample per contig as reported by CoverM are not provided, and the code required to calculate per-"megataxon" abundances from per-contig / per-sample abundances is also not provided.
To implement an independent assessment of vOTU abundances across the complete Tara dataset, I therefore had to start from scratch by mapping reads and writing my own analysis scripts. Code and intermediate data supporting my re-analysis are provided in my supplementary materials, Zenodo data repository and github.
Known phyla dominate Tara's contigs
The number of contigs per megataxon is shown in my Table 1, obtained by a one-line Linux command from their Table S6:

grep "This study" TableS6.tsv | cut -f8 | sort | uniq -c | sort -nr

This shows that "Taraviricota" and "Arctiviricota" account for 1.44% and 0.23% of their contigs, respectively, contradicting their claim that these viruses "dominate". Known phyla together account for 84.1% of contigs (excluding "not determined") and thereby dominate Tara's RNA virome as measured by total numbers of contigs.
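The same tally can be reproduced in Python (assuming, as in the command, a tab-separated Table S6 with the megataxon in column 8; the rows in the test are illustrative):

```python
from collections import Counter

def megataxon_counts(rows):
    """Count contigs per megataxon for rows from 'This study',
    mirroring the grep | cut -f8 | sort | uniq -c pipeline."""
    counts = Counter(r.split("\t")[7] for r in rows if "This study" in r)
    total = sum(counts.values())
    # Return {megataxon: (count, percent_of_total)}, most frequent first.
    return {k: (n, 100.0 * n / total) for k, n in counts.most_common()}
```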
Known phyla dominate Tara's reads
To map reads for all Tara samples to vOTUs, I used Serratus to launch a cluster of cloud CPUs running the diamond (Buchfink et al., 2015) read mapper. "Megataxon" assignments of vOTUs were taken from Tara supplementary tables and used to assign each vOTU to a phylum. The total number of reads mapped to each phylum is shown in my Table 2, showing that "Taraviricota" and "Arctiviricota" account for 3.75% and 1.28% of the mapped reads, respectively. Read frequency per phylum correlates well with contig frequency per phylum (r=0.79) as would be expected. Otherwise, read depth per phylum would necessarily be highly skewed, demanding further investigation and discussion. In fact, known phyla together account for 93.8% of mapped reads (excluding contigs annotated as "not determined" by Tara), and thereby dominate Tara's RNA virome as measured by total numbers of mapped reads.
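The reported correlation between per-phylum contig and read frequencies can be computed with a plain Pearson estimator (the values in the test are illustrative, not the actual per-phylum counts):

```python
import math

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```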
Known phyla dominate the Arctic Ocean
On p.6, third column of their paper, Tara claim that "vOTUs belonging to the -ssRNA phylum "Arctiviricota" were, on average, the most abundant across most of the Atlantic Arctic waters (Fig. 4)." My
Focused re-analysis of selected geolocations
My findings of low abundances for "Taraviricota" and "Arctiviricota" in contigs and reads contradict Tara's main claim and raise the question of how the discrepancy might be explained. This question is difficult to address given the lack of supporting methods descriptions, data and code related to this claim. To the best of my knowledge, Tara Table 3). Geo-A was selected because it shows the largest abundance of "Arctiviricota" of all locations together with a large abundance of "Taraviricota", and should therefore clearly show high abundances of both these "phyla" by any reasonable abundance metric. Geo-D was selected because it is shown as having one of the highest abundances of "Taraviricota". Geo-B and Geo-C were selected as controls which have no visible abundance of "Taraviricota" and "Arctiviricota" in Tara's Fig. 4. I used four distinctly different read-mapping procedures to check that results are consistent and robust against anomalies in any single method (in particular, unusually large numbers of false-positive or false-negative alignments), as follows. (1) My own re-implementation of Tara's read mapping with bowtie2 where the reference database is nt sequences of all RdRp+ contigs; both reads and contigs were poly-A-trimmed by bbduk (https://jgi.doe.gov/data-and-tools/bbtools/) before mapping. (2) Serratus with its diamond aligner module, as used in mapping of the full Tara dataset. (3) The diamond aligner using standard NCBI tools to download reads. Method (2) generates identical results to (1) but is more computationally expensive. Methods (2) and (3) used a vOTU reference comprising 5,546 amino acid sequences trimmed to approximately full-length RdRp domains and clustered at 90% identity. (4) The diamond aligner using a reference constructed by 6-frame translation of all RdRp+ contigs, to check how many additional reads are mapped by diamond when non-RdRp sequence is retained and clustering is not used to reduce redundancy in the reference. 
I used 6-frame translation to ensure that all valid CDS is included (as opposed to ORF-finding or other methods designed to find CDS and discard non-CDS) because this method is robust against frame-shifts and non-standard genetic codes up to a small fraction of mis-translated codons which should have minimal impact on measured coverage.
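The 6-frame translation step can be sketched as follows (standard genetic code only; the tolerance to frame-shifts and non-standard codes discussed above is a property of downstream coverage measurement, not of this sketch):

```python
BASES = "TCAG"
# Standard genetic code, codons enumerated in TCAG order.
AMINO = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"

def translate(nt):
    """Translate a nucleotide string; unknown codons become 'X'."""
    aa = []
    for i in range(0, len(nt) - 2, 3):
        codon = nt[i:i + 3].upper()
        try:
            idx = (BASES.index(codon[0]) * 16
                   + BASES.index(codon[1]) * 4
                   + BASES.index(codon[2]))
            aa.append(AMINO[idx])
        except ValueError:
            aa.append("X")
    return "".join(aa)

def six_frames(nt):
    """All six reading frames: three forward, three on the reverse complement."""
    comp = str.maketrans("ACGTacgt", "TGCAtgca")
    rc = nt.translate(comp)[::-1]
    return [translate(s[f:]) for s in (nt, rc) for f in range(3)]
```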
The total data size of these four geolocations combined is 120Gb, enabling download and mapping on a single Linux server in a few hours and thereby facilitating independent verification of my results.
Abundances measured as number of reads mapped are summarized in my Fig. 3 and Table 4. My Fig. 3 shows that the five methods give a consistent qualitative picture of the abundances of the six groups used in Tara Fig. 4. My Table 4 shows that the largest fraction of reads mapped to "Arctiviricota" at Geo-A is 68/445,728 = 0.015% by Serratus, and no reads mapped to "Taraviricota" at any of the four geolocations by any of the five methods. This definitively contradicts the pictures shown at Geo-A and Geo-D in Tara Fig. 4, regardless of the chosen abundance measure.
Abundance measure
Tara calculated abundances using CoverM, but the details are not completely specified. It is described as follows: "For the vertical coverage (i.e., for abundance estimation), reads that mapped at ≥ 90% ID over ≥ 75% of the read length were extracted using CoverM v0.2.0-alpha6, calculating the trimmed mean (tmean) for each contig. . . Only adjusted abundances of the ≥ 1-kb contigs were kept, and final abundances of the vOTUs were calculated by summing the adjusted abundances of the ≥ 1-kb contigs belonging to these vOTUs" (their Material and Methods under 'Calculation of vOTU relative abundances').
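The stated read filter and a CoverM-style trimmed mean can be sketched as follows (the trim fraction is an assumption; as noted, the study does not report these details):

```python
def passes_filter(pct_identity, aligned_len, read_len,
                  min_id=90.0, min_frac=0.75):
    """Tara's stated read filter: >=90% identity over >=75% of read length."""
    return pct_identity >= min_id and aligned_len >= min_frac * read_len

def trimmed_mean(depths, trim_frac=0.05):
    """Trimmed mean of per-position depths (CoverM-style 'tmean').
    The 5% trim fraction here is an assumption, not from the paper."""
    d = sorted(depths)
    k = int(len(d) * trim_frac)
    kept = d[k:len(d) - k] if len(d) > 2 * k else d
    return sum(kept) / len(kept)
```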
No code is provided for calculating abundances from bowtie2 SAM files. Their supporting data does not include CoverM reports, or the calculated abundances per contig per sample or per vOTU per sample. It is not described how to calculate abundance of a "metataxon" or phylum from vOTU abundances, and a table of "megataxon" or phylum abundances per sample (needed to reproduce their Fig. 4) is also not provided.
Here, I measured abundance by the number of reads mapped because it is simple, transparent and adequate to identify "dominant" phyla. It is not stated whether Tara mapped reads from a given sequencing run to (a) the assembly for that run, or (b) to the combined set of contigs from all assemblies. Both alternatives are reasonable; (a) requires that a species is sufficiently abundant to be assembled before it is counted in a given sample, while (b) is more sensitive because it includes species which are rare in the reads for a given sample but are successfully assembled elsewhere. I could not do (a) because most contigs are not annotated with a sample or sequencing run, and therefore chose (b).
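A minimal sketch of counting mapped reads from SAM records under Tara-style filters (≥90% identity over ≥75% of read length); it assumes the NM edit-distance tag is present and ignores paired-end and clipping subtleties, so it is an illustration rather than the exact procedure used in this paper.

```python
import re

def count_mapped_reads(sam_lines, min_id=0.90, min_aln_frac=0.75):
    """Count reads passing identity/coverage filters from SAM text lines.

    Identity is estimated as (aligned_len - NM) / aligned_len using the NM
    edit-distance tag; clipping and paired-end subtleties are ignored.
    """
    count = 0
    for line in sam_lines:
        if line.startswith('@'):          # header line
            continue
        fields = line.rstrip('\n').split('\t')
        if int(fields[1]) & 4:            # FLAG bit 4 = read unmapped
            continue
        cigar, seq = fields[5], fields[9]
        # Aligned length = sum of M/=/X operations in the CIGAR string.
        aln = sum(int(num) for num, op in re.findall(r'(\d+)([MIDNSHP=X])', cigar)
                  if op in 'M=X')
        nm = next((int(f[5:]) for f in fields[11:] if f.startswith('NM:i:')), 0)
        if aln == 0:
            continue
        identity = (aln - nm) / aln
        if identity >= min_id and aln >= min_aln_frac * len(seq):
            count += 1
    return count

# Toy records: one passing read, one unmapped, one below the identity cutoff.
sam = [
    '@SQ\tSN:contig1\tLN:5000',
    '\t'.join(['r1', '0', 'contig1', '1', '60', '100M', '*', '0', '0', 'A' * 100, '*', 'NM:i:2']),
    '\t'.join(['r2', '4', '*', '0', '0', '*', '*', '0', '0', 'A' * 100, '*']),
    '\t'.join(['r3', '0', 'contig1', '1', '60', '100M', '*', '0', '0', 'A' * 100, '*', 'NM:i:20']),
]
print(count_mapped_reads(sam))  # → 1
```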
My result that ≪ 1% of reads map to "Taraviricota" and "Arctiviricota" at Geo-A and Geo-D necessarily implies that either Tara's results or my results are unambiguously wrong, because the discrepancy between my figure and theirs is far too large to be explained by technical details of read processing or the choice of abundance metric. My results from read mapping over the full dataset correlate well with Tara's own results for contig and vOTU frequencies, and I therefore believe that the balance of evidence suggests that Tara's Fig. 4 is based on incorrectly calculated abundance values. This could be clarified if Tara annotates their contigs with sequencing runs, and makes their SAM files, CoverM reports and data analysis scripts available for independent review.
Sequence and structure conservation in the RdRp domain
Viral RdRp belongs to the palm domain superfamily, with close homologs in several families of cellular palm domain proteins including group II introns. There are six essential motifs in the catalytic core of the polymerase palm domain (Lang et al., 2012), conventionally denoted by letters A through F which usually appear in order FABCDE in the primary sequence, though permuted variants are also known (e.g. (Gorbalenya et al., 2002)). A seventh motif G is sometimes included; I do not consider it further here because its structural conservation is less clear. Motifs F, A, B and C each have one conserved catalytic residue (F=ARG, A=ASP, B=GLY and C=ASP). Within a phylum, the A, B and C motifs are sufficiently well conserved to be recognizable by sequence (Babaian and Edgar, 2021), while the remaining motifs are more variable in primary sequence. Between different phyla, motifs are generally not recognizable by primary sequence similarity with the exception of GLY-ASP-ASP in motif C which has a few common variants such as SER-ASP-ASP in SARS-Cov-2. All six motifs are found in all solved structures for palm domain polymerases and reverse transcriptases (see (Lang et al., 2012) for motif coordinates in structures known in 2012). They are well conserved in structure, aligning well between viral RdRp and non-viral homologs including group II introns, as shown in my Fig. 4. If six new virus phyla can be discovered in Tara's data, then surely one or more new families of group II introns, or some other close cellular homolog, could also be discovered. Given that the diversity of palm domain proteins is known only sparsely at the present time, it is simply not possible to distinguish with certainty a highly diverged RdRp from, say, a highly diverged group II intron by sequence and structure alone.
Distinguishing viral RdRp from close homologs
In their Materials and Methods under 'Evaluation of authenticity and completeness of putative virus RdRps', Tara describe their protocol as follows: "Hits longer than 100 amino acids with a best match to an RdRp HMM and with a bitscore ≥ 30 were kept as true positives for proteins containing the virus RdRp domain. Lower-scoring hits were manually inspected for presence of the seven canonical RdRp domain motifs. In total, 44,779 contigs encoding putative virus RdRps were detected by these identification and curation processes." While this procedure seems reasonable, it is not definitive (certainly not to the standard that should be required for identifying novel phyla), and some residual false positive rate should be expected. "Best match to an RdRp HMM" is a top-hit / nearest-neighbor test which may give false-positive assignments because top hits are not necessarily evolutionary neighbors (Koski and Golding, 2001). Using a bit score threshold is unconventional; why were E-values not used? A bit score of 30 is low and typically corresponds to a high E-value; the corresponding E-value should therefore have been quoted and the choice of bit score threshold should have been explained. Presumably, some or all of the claimed novel phyla are among the most diverged and hence lowest-scoring hits, but absent documentation it is not possible to verify this. "Manual inspection" cannot reliably identify catalytic motifs in highly diverged proteins because the motifs are conserved in structure but not in sequence. Even within a known phylum, in my experience it is often difficult or impossible to identify D, E and F motifs without structure. While rules of thumb can be applied to motif sequences, e.g.
GLY-ASP-ASP in motif C is characteristic of RdRp while ALA-ASP-ASP is characteristic of reverse transcriptases, there is to the best of my knowledge no reliable method for distinguishing viral RdRp from a cellular homolog given sequence and structure alone because there are exceptions. For example, AOY33888 (RdRp of squash vein yellowing virus) has ALA-ASP-ASP in its C motif, and conversely WP_014123481 (group II intron of bacterium Tetragenococcus halophilus) has GLY-ASP-ASP. Tara should therefore have documented the criteria they used to decide whether motifs were indicative of RdRp, and provided a list of low-scoring hits together with their inferred motifs and functional assignments, to enable independent review.
Cellular proteins identified in Tara's putative viral contigs
I aligned Tara's RdRp+ contigs to the NCBI non-redundant protein database using diamond, collecting the top hits only. 510 contigs had hits to non-viral proteins with E < 10^-9 (my Supplementary Table S1), 137 of which had aa sequence identity > 90%. Many of these proteins are associated with cellular marine organisms including algae, plankton and dinoflagellates. If an RdRp-like sequence is found in a contig with an identical or highly similar match to a cellular protein, this favors the hypothesis that the contig was derived from a cellular transcript rather than a viral genome or transcript. The RdRp-like sequence can then tentatively be identified as a cellular RdRp homolog such as a group II intron.
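The screen described above can be sketched as a filter over DIAMOND's default tabular output (BLAST outfmt 6 columns: qseqid, sseqid, pident, ..., evalue, bitscore). The toy rows below are invented for illustration; only the column positions are the standard defaults.

```python
def filter_hits(tsv_lines, max_evalue=1e-9, min_pident=None):
    """Keep query IDs from outfmt-6 rows passing E-value / %identity cutoffs."""
    kept = []
    for line in tsv_lines:
        f = line.rstrip('\n').split('\t')
        pident, evalue = float(f[2]), float(f[10])  # columns 3 and 11 (1-based)
        if evalue < max_evalue and (min_pident is None or pident > min_pident):
            kept.append(f[0])
    return kept

# Invented example rows: one strong cellular hit, one weak hit.
rows = [
    'contig1\tCAE8586037.1\t95.0\t200\t10\t0\t1\t200\t1\t200\t1e-30\t250',
    'contig2\tprotB\t40.0\t150\t90\t3\t1\t150\t1\t150\t2e-05\t60',
]
print(filter_hits(rows))  # → ['contig1']
```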
Other explanations are possible, including an incorrectly assembled contig comprising a viral/cellular chimera, a recent virus insertion into a host, or a recent capture of a host gene by a virus, but here the parsimonious explanation is a correct non-viral contig containing a cellular RdRp homolog, or at a minimum this should be the null hypothesis before claiming a novel virus, and the burden of proof should therefore be on Tara to rule out other explanations. Further, if the full-length contig is used for read mapping and virus abundance estimation, the cellular protein may cause spurious inflation of abundance due to mapping of non-viral reads onto the contig. An example is shown in my Fig. 5, which annotates a contig which Tara assigns to "Taraviricota" and meets their criteria for a reference sequence, i.e. it is >1 knt and has an estimated RdRp domain completeness of 100% according to their supplementary tables. This contig contains a 59% identity match to a dinoflagellate protein with E = 10^-31.
Predicted RdRp structures for Tara's "phyla"
According to Tara's Material and Methods under 3D structure network analysis, they "predicted the 3D structures for the new megataxa from their representative primary amino acid sequences (the longest sequence with no ambiguous residues (i.e., no 'X's in the primary sequence) per megatxon) using Phyre2 in the 'Normal' mode". They deposited five structures, one for each "phylum", in cyverse: https://de.cyverse.org/anon-files//iplant/home/shared/iVirus/ZayedWainainaDominguez-Huerta_RNAevolution_Dec2021/Predicted_3D_Structures/.
I downloaded the pdb files for these structures and aligned them to SARS-CoV-2 RdRp (PDB:7c2k) using pymol (https://www.pymol.org). The predicted structure for "Taraviricota" aligned well and in my judgment appears consistent with a polymerase or reverse transcriptase in the palm domain superfamily. However, structures for the other four "phyla" are obviously truncated and malformed. My Fig. 6 summarizes their alignments to CoV. These four predicted structures range in length from 142 aa ("Arctiviricota") to 211 aa ("Wamoviricota") and are thus much shorter than a complete RdRp domain, which ranges from a minimum of more than 400 aa to a maximum of >1,000 aa (Lang et al., 2012).
The catalytic cores of the predicted palm domains are truncated such that one or more essential motifs are missing, most obviously in "Pomiviricota" which lacks motifs A, B, C and F. In "Wamoviricota" and "Parexenoviricota" roughly half the catalytic core is present but obviously malformed, as shown in my Fig. 7. For example, in "Parexenoviricota", one strand of the anti-parallel beta sheet of motif C is replaced by one helix turn, and in "Wamoviricota", the alpha helix of motif B, which has five or more complete turns in all known structures, has a single turn followed by a loop. Therefore, if these predicted structures are substantially correct, they are sufficiently different from known palm domain polymerases to contradict the hypothesis that they are viral RdRp. Conversely, if the predictions have substantial errors and the true structures closely resemble known viral RdRps, then the predicted structures are sufficiently defective to undermine inferences of function and phylogenetic relationships by comparison with solved palm domain structures.
Annotation of predicted structures in Tara's Fig. S5
My analysis of the predicted structures conflicts with Tara's Fig. S5, reproduced with added notes as my Fig. 9. Tara marks 12 motifs as present which in fact are unambiguously absent due to truncation of the palm domain. Motif E is annotated as "naturally absent" in Lenarviricota PDB:3mmp-G, but in fact this essential motif is found at residue 397 as shown in my Fig. 10.
Tara's structure network
Tara's Fig. 3B shows a "structure network" which, according to the figure caption, is informative for "...inferring the early history of orthornavirans". As described in their Material and Methods under "3D structure network analysis", pair-wise structural alignments were constructed using Matras v1.2 (Kawabata, 2003). For each pair, the superfamily similarity reliability score was used as a distance. The network was generated by cytoscape (Shannon et al., 2003) from these distances, using the "Edge-weighted Spring Embedded" layout. No citations to previous applications of networks to phylogenetics are given, no justification is given from theoretical considerations, and no validation is reported which assesses the accuracy of inferences made from this type of network. The remarkable claim that this ad hoc method is informative for deep phylogenetic inference is thus not supported by any evidence.
Further, the procedures and criteria used to make inferences from the network are not explained; conclusions are simply presented as faits accomplis.
Why Matras scores and cytoscape? Could DALI (Holm and Sander, 1995) Z-scores or TM-align (Zhang and Skolnick, 2005) TM scores be used instead? Could the "Compound Spring" or "Prefuse Force" layouts be used, or is "Edge-weighted Spring" superior for some reason? Could the network be generated using Gephi (gephi.org) or Wandora (wandora.org)? More generally, which combinations of similarity scores and network construction methods are valid, and which are invalid? What objective criteria enable phylogenetic inferences from these networks? What types of hypotheses can be robustly confirmed or contradicted? How can the accuracy of these conclusions be assessed? In the absence of a framework for delivering evidence-based answers to questions like these, this type of method cannot support a rational approach to classification.
Tara's "megataxonomy"
Tara's classification method is sketched in my Fig. 11. Translated RdRp sequences from Tara contigs were combined with RdRps from Yangshan Deep-Water Harbour and GenBank. The combined RdRps were reduced to 13,109 non-redundant centroid sequences at 50% identity by UCLUST (Edgar, 2010).
The centroids were then clustered by MCL (Enright et al., 2002) from a matrix of pair-wise BLASTP bit scores, giving 19 "megaclusters". Each megacluster was assigned a "megataxon" based on the majority taxon or taxa according to annotations of its GenBank sequences.
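The first clustering step can be caricatured with a UCLUST-style greedy pass: each sequence joins the first existing centroid it matches at or above the identity threshold, otherwise it founds a new cluster. In this sketch, difflib's ratio stands in for a real alignment identity, so it is an illustration of the greedy logic only, not of UCLUST itself.

```python
from difflib import SequenceMatcher

def greedy_centroids(seqs, min_id=0.50):
    """UCLUST-style greedy pass: a sequence joins the first centroid it
    matches at >= min_id, otherwise it founds a new cluster."""
    centroids = []
    for s in seqs:
        if not any(SequenceMatcher(None, s, c).ratio() >= min_id for c in centroids):
            centroids.append(s)
    return centroids

# Two near-identical toy peptides collapse to one centroid; the third differs.
print(greedy_centroids(['MADDGK', 'MADDGR', 'WWWWWW']))  # → ['MADDGK', 'WWWWWW']
```

Note that greedy clustering of this kind is order-dependent, one of several reasons why cluster boundaries can shift when new data are added.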
In taxonomy generally, and per ICTV rule 3.3.1 specifically, monophyly is a fundamental requirement for defining taxa. However, it is textbook knowledge that clustering cannot reliably infer phylogenetic trees or monophyletic groups, with one exception based on unrealistic assumptions (the correct tree can be reconstructed by UPGMA if evolution is clock-like, so that true distances are ultrametric, and those distances can be calculated). This result was established in the 1960s in the first literature to consider mathematical and algorithmic applications to phylogenetics, and has been universally accepted since then. See Chapter 10 in "Inferring Phylogenies" (Felsenstein, 2004) for history, methodological survey and references.
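A toy illustration of that textbook result: the distances below are additive on a true tree ((A,B),(C,D)), but unequal rates (B and D evolve fast) violate the clock, so the matrix is not ultrametric, and the very first merge made by UPGMA already joins the non-sister pair A and C. The branch lengths are invented for illustration.

```python
# Additive distances from a true unrooted tree ((A,B),(C,D)) with branch
# lengths A=1, B=5, C=1, D=5 and internal edge 1; fast-evolving B and D
# break the molecular clock, so the matrix is not ultrametric.
D = {('A', 'B'): 6, ('A', 'C'): 3, ('A', 'D'): 7,
     ('B', 'C'): 7, ('B', 'D'): 11, ('C', 'D'): 6}

def upgma_first_merge(dist):
    """UPGMA's first merge is simply the closest pair in the matrix."""
    return min(dist, key=dist.get)

print(upgma_first_merge(D))  # → ('A', 'C'), a non-sister pair
```

Since A and C are not sisters on the true tree, any taxa defined from these clusters would be non-monophyletic, which is precisely the failure mode that distance-based clustering such as MCL cannot rule out.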
MCL is a generic clustering method, and therefore cannot reliably predict monophyletic groups. Adopting an approach of this type for classification would require abandoning monophyly as a standard, without providing alternative classification principles. If a future study finds that MCL with inflation 1.1 based on BLASTP bit scores produces quite different clusters from Tara's when new data is added, which seems inevitable, then how to proceed? Should Tara's "megataxa" be abandoned? Should a different clustering method or similarity score be tried? Tara offers no guiding principles for answering such questions.
Phylogenetic tree
Tara's Fig. 3A shows a phylogenetic tree obtained by MSA and ML. Inspection of Tara's phylum-level MSA and tree (Global RdRp Tree sequence.fasta.aln and Global RdRp Tree newick.txt in Tara's DRYAD data repository) show that exactly one sequence was included for four of the five established ICTV phyla. It is therefore impossible by construction for any other sequence to place within the subtree of an ICTV phylum, with the exception of Duplornaviricota which has nine sequences. Thus, this tree cannot support a hypothesis that novel sequences fall outside known phyla.
Claim of 97% accuracy for taxonomy assignment
Tara claimed that their MCL clusters "nearly completely recapitulated the previously established phylogeny-based ICTV-accepted taxonomy at the phylum and class ranks (97% agreement)" on the basis of the ARI=0.97 value reported in their Fig. 1B. However, the agreement underlying this claim is not the fraction of correct classifications of phylum and class with GenBank taxonomy annotations as implied by the statement; rather, it is the adjusted Rand index obtained by comparing "megataxa" and megaclusters. "Megataxa" do not correspond to phylum and class separately or to phylum and class together; rather they are a heterogeneous collection of phyla, sub-phyla, polyphyletic groups (e.g. "Lenarviricota, others"), classes, unassigned ranks (e.g. "Wei-like"), and sub-families. With three exceptions (Chrymotiviricetes, Vidaverviricetes and Allassoviricetes), "megataxa" cannot be used to classify to ICTV class rank.
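To see that the adjusted Rand index is not a percentage agreement, it can be computed from scratch using the standard Hubert–Arabie form; in the toy example below 7 of 8 items carry matching labels (87.5%), yet the ARI is about 0.49. The labels are invented for illustration.

```python
from collections import Counter
from math import comb

def adjusted_rand_index(labels_a, labels_b):
    """Adjusted Rand index between two flat labelings (Hubert-Arabie form)."""
    n = len(labels_a)
    sum_ij = sum(comb(c, 2) for c in Counter(zip(labels_a, labels_b)).values())
    sum_a = sum(comb(c, 2) for c in Counter(labels_a).values())
    sum_b = sum(comb(c, 2) for c in Counter(labels_b).values())
    expected = sum_a * sum_b / comb(n, 2)   # chance-expected pair agreement
    max_index = (sum_a + sum_b) / 2
    return (sum_ij - expected) / (max_index - expected)

truth = [0, 0, 0, 0, 1, 1, 1, 1]
pred  = [0, 0, 0, 1, 1, 1, 1, 1]   # 7 of 8 labels agree (87.5%)
print(round(adjusted_rand_index(truth, pred), 2))  # → 0.49
```

An ARI of 0.97 therefore cannot be read as "97% of sequences were classified correctly"; the two quantities measure different things.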
This ad hoc classifier was tuned to all available training data, which surely results in extreme overfitting because, as with the 3D structure network, there are no stated prior constraints on the variations which can be tried, including the choice of distance metric, choice of clustering method, choice of clustering parameters (e.g. inflation value), and choice of ICTV taxa assigned to a cluster. The number of possible variations is thus astronomical, and the opportunities for over-fitting are therefore for all practical purposes unlimited. Given that a high degree of over-fitting should be assumed, it follows that the agreement between megaclusters and "megataxa" is not predictive of agreement when classifying new, unlabeled sequences (Santos et al., 2018), even if it is accepted that "megataxa" defined by this approach should supplant ICTV taxa.
A more realistic and informative assessment of classification accuracy would be obtained by randomized hold-out validation using ICTV taxa as the standard of truth as opposed to "megataxa".
Conclusion
The Tara authors' own summary data show that their claimed novel "phyla" comprise a small fraction of vOTUs and a small fraction of their contigs. My results show that the evidence for viral origin of their putative novel RdRps is equivocal, and regardless the relevant putative RdRp-containing contigs account for only a small fraction of the reads. Together, these results comprehensively contradict Tara's claim that new phyla are "dominant in the oceans". In fact, known phyla account for a large majority of their contigs, vOTUs and reads, and known phyla thereby dominate the Tara Oceans RNA virome.
Code and data availability
Code and data are deposited at https://github.com/rcedgar/tara_oceans and https://zenodo.org/record/7194888.

Figure 1. Main claim of the Tara paper. Magenta colored underlines added for emphasis. The title describes "cryptic" (i.e., not well-characterized) marine viruses as "abundant", which implies a notably high abundance given its appearance in a short title. The Abstract claims more specifically that "Taraviricota" and "Arctiviricota" are "dominant in the oceans".

Figure 2. Phylum abundances in the Arctic Ocean. The figure shows abundances in the region bounded by 100°W to 100°E longitude and 60°N to 90°N latitude. Top is this region as depicted in Tara's Fig. 4. Below is the same region with abundances measured as the numbers of reads mapped by Serratus. Reads are classified into six groups Tara ("Taraviricota"), Kitrino (Kitrinoviricota), etc., following Tara's Fig. 4. My pie charts show the number of reads mapped to each group on a logarithmic scale such that the radius of the segment is proportional to log10(n), where n is the number of reads mapped to the group. I do not understand the scaling used in Tara's pie charts and therefore did not attempt to reproduce them exactly. The total number of reads mapped in this region is shown at lower right, with only 164 reads assigned to "Arctiviricota" (0.01%). Location Geo-A shows high abundances of both "Taraviricota" and "Arctiviricota" in Tara's Fig. 4; this location is analyzed in more detail in my Fig. 3 and my Table 4.
Figure 3. Phylum abundances at four selected geolocations. Abundances are shown as pie charts in a four-by-four grid. Columns are locations Geo-A, Geo-B, Geo-C and Geo-D as specified in my Table 3. The top row shows the locations as depicted in Tara's Fig. 4. The pie-chart at location Geo-D was manually edited to remove the overlapping location adjacent to it (seen in its unedited form in the small inset). The three remaining rows, labeled Serratus, diamond-6f and bowtie2, show my own read mapping results using three different methods: Serratus is method (1), diamond-6f is method (3) and bowtie2 is method (4) as described in my main text under 'Focused re-analysis of selected geolocations'. Reads are classified into six groups Tara ("Taraviricota"), Kitrino (Kitrinoviricota), etc., following Tara's Fig. 4. My pie charts show the number of reads mapped to each group on a logarithmic scale such that the radius of the segment is proportional to log10(n), where n is the number of reads mapped to the group. I do not understand the scaling used in Tara's pie charts and therefore did not attempt to reproduce them exactly. However, all my methods agree that exactly one read mapped to "Taraviricota" at Geo-A and none at Geo-D, and my results therefore contradict their figure regardless of their scaling. See also my Table 4.

Figure 5. "Taraviricota" contig with dinoflagellate protein. This contig is 2,804nt. It is assigned to "Taraviricota" with 100% estimated RdRp domain completeness by Tara Table S6. Positions 7 through 400 are shown as aligned to Polarella glacialis protein CAE8586037.1 by BLASTX. Positions 463 through 2,769 are an RdRp-like ORF on the negative strand. Predicted motifs A, B and C according to palmscan are shown in the lower inset. Numbers shown at top are nt coordinates counting from 1 at the beginning of the contig. Predicted motif coordinates by palmscan are aa positions counting from 1 at the start of the ORF.
More than 500 of Tara's putative viral contigs have top hits to cellular proteins with E < 10^-9, many of which have 100% aa identity (my Supplementary Table S1).

Figure 6. Alignments of Tara's predicted structures to CoV RdRp. The CoV structure cartoon is colored to show the six essential motifs FABCDE, with catalytic residues indicated by spheres. Above the cartoon is a schematic showing inferred motif positions in each structure as residue numbers in the pdb files (note that residue numbers in the Tara pdb files are not 1-based). Chain lengths are given at the right-hand side. Phylum names are abbreviated to Pomi="Pomiviricota", Arcti="Arctiviricota" etc. All Tara structures except "Taraviricota" (not shown) are truncated such that one or more motifs are missing. Motifs D and E in solved structures did not align well to the corresponding regions in Pomi and Arcti and I was therefore not able to verify the presence of these motifs (see my Fig. 7).

Figure 7. Predicted structures from Tara's cyverse repository (phylum names without "viricota" for brevity). None of these structures has a complete palm domain. The Pomi structure is truncated such that the entire chain aligns approximately to a region starting after motif E, which implies that all conserved motifs are deleted if in fact they are present. In Arcti, the first three residues of the chain are GLY-ASP-ASP which align approximately to motif C, but the characteristic conformations of motifs C, D and E are not evident. Parexeno and Wamo align to roughly half of the palm domain, but the catalytic core is obviously malformed (my Fig. 8).

Figure 8. Malformed catalytic cores of "Parexenoviricota" and "Wamoviricota" structures. The figure shows the "palmprint", i.e. the palm domain segment from motif A to motif C (Babaian and Edgar, 2021), in Parexeno and Wamo with six representative solved structures for comparison (phylum names without "viricota" for brevity). The alpha helix of motif B should have five complete turns, but in Parexeno there are three and in Wamo only one.
In Parexeno, the three helix turns are followed by a beta sheet which is perpendicular to three helix turns at the corresponding primary sequence positions when aligned to solved structures. In Wamo, the single turn is followed by a loop. In Parexeno, the characteristic antiparallel beta sheet of motif C is replaced by a single-turn helix adjacent to a beta strand.

Figure 9. Conflicting motif identification in Tara structures. The figure shows Tara's supplementary Fig. S5, with my notes added (magenta color). I find 12 motifs marked as present by Tara to be definitively absent due to truncation of the domains. Four motifs are present but obviously malformed.
Figure 10. Essential motif E is annotated as "naturally absent" in PDB:3mmp by Tara, but in fact this motif is present at residue 397.

Figure 11. Tara's "megacluster" workflow and incorrect claim of 97% agreement with ICTV taxonomy. RdRp amino acid sequences were collected from GenBank, Yangshan Deep-Water Harbour (YDWH) and Tara contigs. These were clustered twice: first at 50% aa identity, giving 13,109 centroids. Centroids were clustered by MCL using BLASTP bit scores as a similarity measure, giving 19 "megaclusters" at inflation=1.1. Motivations for design choices including MCL for clustering, BLASTP for similarity, and inflation value of 1.1, are not explained. For each "megacluster", the consensus taxon or taxa of its GenBank sequences (the "megataxon" of the cluster) was identified; taxonomy is then re-defined using megaclusters as a gold standard, assigning one "megataxon" per cluster, where a "megataxon" corresponds approximately to one or more taxa from ICTV at ranks ranging from phylum (Pisuviricota) to subfamily (Sedoreovirinae), and GenBank sequences are re-classified into "megataxa". If the justification is entirely post-hoc, as is apparently the case, this procedure should be expected to over-fit because all labeled data is used in training with no hold-out for validation. "Megataxon" ranks range from phylum to sub-family (my Table 1), and with three exceptions (Chrymotiviricetes, Vidaverviricetes and Allassoviricetes), "megaclusters" do not attempt to classify at ICTV class rank. Tara measured agreement by adjusted Rand index (ARI) between "megaclusters" and "megataxa", not percentage agreement between megataxa against ICTV phylum and class as stated in their claim.
Improving Efficiency of Direct Pro-Neural Reprogramming: Much-Needed Aid for Neuroregeneration in Spinal Cord Injury
Spinal cord injury (SCI) is a medical condition affecting ~2.5–4 million people worldwide. The conventional therapy for SCI fails to restore the lost spinal cord functions; thus, novel therapies are needed. Recent breakthroughs in stem cell biology and cell reprogramming revolutionized the field. Of them, the use of neural progenitor cells (NPCs) directly reprogrammed from non-neuronal somatic cells without transitioning through a pluripotent state is a particularly attractive strategy. This allows to “scale up” NPCs in vitro and, via their transplantation to the lesion area, partially compensate for the limited regenerative plasticity of the adult spinal cord in humans. As recently demonstrated in non-human primates, implanted NPCs contribute to the functional improvement of the spinal cord after injury, and works in other animal models of SCI also confirm their therapeutic value. However, direct reprogramming still remains a challenge in many aspects; one of them is low efficiency, which prevents it from finding its place in clinics yet. In this review, we describe new insights that recent works brought to the field, such as novel targets (mitochondria, nucleoli, G-quadruplexes, and others), tools, and approaches (mechanotransduction and electrical stimulation) for direct pro-neural reprogramming, including potential ones yet to be tested.
Introduction
Acute traumatic damage to the central nervous system (CNS) is one of the major health problems continuing to loom large worldwide and affects a significant number of individuals, many of whom die as a result or remain disabled for the rest of their lives. Such damage includes abrupt or sustained traumatic injuries of the spinal cord (traumatic spinal cord injury; SCI), brain (traumatic brain injury; TBI), and peripheral nerves and can be subdivided into primary and secondary injuries, caused by direct structural damage and the subsequent molecular and cellular response of the tissue, respectively. It is estimated that approximately 2.5-4 million people are affected by SCI worldwide [1]. SCI bears great socio-economic consequences as it often affects young individuals.
At the same time, as a result of global aging, the incidence of SCI among aging people may increase as well. Despite all advancements made in translational neuroscience, the most effective therapeutic approach to SCI that minimizes damage, regains spinal cord conductivity, and replaces injured non-functioning tissue with fully functional tissue has not been found yet. There is a wealth of reviews on the topic of post-SCI neuroregeneration, and the number continues to rise ([2-5], to name but a few). Many of them focus on the use of transplanted cells as a therapeutic approach to SCI. There are three main types of such cells that are commonly used, namely, induced pluripotent stem cells (iPSCs), multipotent mesenchymal SCs (MSCs) and directly reprogrammed neural progenitor cells (drNPC). Of them, iPSCs pose tumorigenic risks, while MSCs, despite promising results and undoubted clinical potential, do not demonstrate truly breakthrough results in clinical trials for SCI, as discussed in detail further in the text. Thus, using significantly less tumorigenic patient-tissue-derived drNPCs that are capable of fast pro-neuronal differentiation is an attractive alternative strategy.
Therefore, in our narrative review, we only focus on directly reprogrammed neural cells (drNPCs) and approaches to increase the efficiency of direct pro-neural reprogramming. We leave other therapeutic approaches to SCI, for example, immunotherapy [6], the use of tissue-engineered scaffolds (TES), and animal models of SCI, recently reviewed elsewhere, beyond the scope of the current review. Our work fills several critical gaps in knowledge, namely, (1) it briefly overviews the most recent literature on the use of reprogrammed cells for SCI, (2) summarizes existing approaches to enhance direct reprogramming to pro-neuronal lineage, and (3) proposes several possible approaches yet to be tested.
Adult Neurogenesis in CNS Post-SCI
Lower vertebrates such as the zebrafish are capable of regenerating the injured CNS and even the spinal cord (though to various extents depending on the species) [7], and some other vertebrates, for example, the amphibian axolotl, can also regenerate anatomically dissected spinal cord tissue [8]. Furthermore, in mammals, the possibility of scar-free spinal cord repair was demonstrated in neonatal animals (mice) [9]. However, the functional recovery of adult human CNS tissues after damage is limited by their very low regenerative ability. Furthermore, while, in the case of TBI, neuronal plasticity may allow for compensation of local damage, SCI generally cannot be repaired or functionally compensated, and the debilitating consequences include complete loss of motor function, paralysis (paraplegia or quadriplegia), and dysautonomia.
Adult neurogenesis in the CNS of laboratory model mammals (mice) is restricted to several regions of the brain and is not sufficient to replace the tissue lost due to neurotrauma [10]. Moreover, modern single-nucleus transcriptomic studies have not confirmed neurogenesis in the neural stem cell niches of human adults at all, based on the absence of robust transcriptomic and histological signatures of neurogenesis [11]. As for SCI, the current concept is that the human spinal cord lacks a capability for neurogenesis, although some works support the view that ependymal cells of the central canal lining may have some neurogenic potential (i.e., the ability to generate neurons) in vitro in some mammals [12]. Also, the presence of cells expressing markers of neuroblasts was reported in the post-SCI lesion site in mice, suggesting the possibility of a cellular shift toward neurogenesis after SCI [13] (notably, according to this work, only spinal NG2 glia cells, but not astrocytes or ependymal cells, have neurogenic potential). A recent study in mice demonstrated that neurons, following SCI, can revert to a somewhat embryonic-like state (as confirmed by their "regenerative transcriptome" indicating a reversal to an "embryonic transcriptional state"), and such a state can be sustained via grafts of neural progenitor cells (NPCs) [14]; this approach is yet to be evaluated in primates. It should also be noted that molecular and cellular responses to SCI, as well as mechanisms of neurogenesis and neuroregeneration in general, in humans and small laboratory animals (such as commonly used rodents or zebrafish) are fundamentally different in some aspects (as reviewed in [15]).
The zebrafish is perhaps the most popular animal model worldwide for the study of post-SCI regeneration [16]. The axolotl is also an attractive model, used not only for spinal cord transection but also for the more clinically relevant blunt contusion injury [17]. As both the zebrafish and the axolotl are so-called regenerative species, they are instrumental in studying the mechanisms of post-SCI neuroregeneration that are "dormant" in non-regenerating species and, in this particular field of research, cannot be replaced by non-human primate models. However, once the molecular mechanisms underlying the aforementioned neuroregeneration are deciphered using these models, attempts can be made to apply this knowledge to human SCI regeneration.
Some large animal models of SCI (such as porcine and, especially, non-human primate models [18]) more closely resemble the pathophysiology of human SCI and, therefore, are considered more predictive and attractive intermediary translational models of SCI (as reviewed in [19]), though no model completely recapitulates all the processes occurring during human SCI.
Thus, despite the wealth of works utilizing animal models of SCI, not all of them may have immediate translational potential or clinical value. Either way, as mentioned, spinal cord damage in humans cannot be fully mitigated through endogenous mechanisms via stem cell differentiation into the neuronal lineage.
Therapeutic Approaches to SCI
SCI is a multi-step disorder. The timeline of SCI and its sequelae can be divided into four phases: immediate (within the first two hours after the trauma), acute or inflammatory (within the first couple of days after the tissue damage; characterized by excitotoxicity, microglia activation, post-traumatic inflammation and infiltration of the lesion area by immune cells, an imbalance of ionic homeostasis, and the loss of neurons and glial cells due to necrosis or programmed cell death), sub-acute (starting within two weeks after the initial damage; mainly characterized by tissue scarring and axon demyelination caused by the loss of oligodendrocytes), and chronic (lasting months and years after the damage; characterized by further scarring, cellular death, demyelination, etc.) (as comprehensively reviewed in [20]).
It is intuitively clear that, depending on the phase of SCI and, therefore, on the different molecular, mechanical, and cellular states of the injured tissue microenvironment, curative strategies must also differ. During the first phase, the common therapeutic strategies are surgical decompression, anti-edema therapy, and anti-inflammatory therapy. During the acute period, the most common therapeutic approaches are anti-inflammatory and immunocorrective therapy, as well as strategies aimed at preventing excessive scar formation. During the subacute phase, the transplantation of autologous cells can be performed, and anti-inflammatory therapy, the activation of regeneration, and the prevention of scar tissue proliferation are still relevant. Finally, during the chronic phase, the prevention of ascending and descending axonal degeneration, the stimulation of neurite growth, enhanced rehabilitation with sensory input, and the activation of spinal neural networks are recommended. Some therapeutic interventions are most efficient if started during the first phase of SCI, thus preventing, delaying, or diminishing subsequent adverse ramifications, whereas others are instrumental during the later phases: for example, the inhibition of apoptosis is most effective within the first hours after the injury, whereas the implantation of stem/progenitor cells within TES is performed at later stages.
While acknowledging the importance of early interventions, in this review we focus on the latter strategy, including cell-based therapies.
Briefly, the main purpose of the implantation of stem/progenitor cells to the lesion area post-SCI is to replenish cells lost as a result of trauma or to help the remaining "host" cells repair the damaged tissue.
Historically, common approaches to manage SCI and its consequences have included surgery (spinal decompression surgery), physical therapy, and pharmacotherapeutic interventions. Santiago Ramón y Cajal, the "founding father" of neurobiology, introduced a dogma (as applied to neurons): "Everything may die, nothing may be regenerated". However, significant strides have since been made. Recent breakthroughs in stem cell (SC) biology and the development of protocols for cell reprogramming, including direct neural reprogramming and the generation of new neurons via in situ cell reprogramming, have revolutionized the field [21]. Such an approach was further reinforced by single-cell transcriptomics, allowing for the identification of novel targets and the delineation of the fine-tuned mechanisms of post-SCI tissue remodeling [22], by advances in biomaterial-based tissue repair in SCI (reviewed in [23]), and by the development of novel drug delivery systems (DDS) based on nanoparticles for post-SCI regeneration [24].
Currently, cutting-edge approaches to neural regeneration and functional restoration post-SCI are multi-pronged and propose using various combinations of supportive TES with cell therapies (Figure 1), including cell therapy utilizing induced pluripotent stem cells (iPSCs) or directly reprogrammed neural precursor/progenitor cells (drNPCs), electrical epidural stimulation, and the application of bio-active compounds modulating molecular pathways critical for tissue regeneration and its normal functioning [25][26][27].
Figure 1. (A) Therapeutic intervention at the SCI lesion site. TES transplanted to the lesion site provides mechanical support to the tissue, can be used as a drug delivery system, might have conductive properties, and guides cell differentiation, proliferation, and migration. NPCs transplanted within TES might either differentiate toward neurons/oligodendrocytes, thus compensating for the cell loss due to the SCI, and/or contribute to neuroregeneration via a pro-regenerative and neurogenic secretome milieu, modification of the extracellular matrix facilitating tissue repair, and the recruitment or modulation of the functionalities of sub-populations essential for post-SCI recovery (for example, M2 macrophages). (B) SCI lesion site. Without any therapeutic interventions, the SCI lesion site is characterized by hemorrhage, edema, pronounced cell death, inflammation, glial and fibrotic scar formation, exacerbated tissue damage, the secretion of pro-inflammatory molecules, and the recruitment, activation, or phenotype switch of multiple sub-populations of cells (macrophages, residential astroglia, and others).
Notably, such TES can also be used as drug delivery systems [28] and can include conductive scaffolds [29], etc.
Finally, neuromodulation devices for artificial neural connections allowing neural data transmission from one undamaged part of the spinal cord to another might complement the aforementioned curative strategies [30].
Stem Cell Therapy for SCI
Cell transplantation for therapeutic applications has been gaining momentum over the last couple of decades. In the case of SCI, there have been attempts to use several types of cells for transplantation therapy: SCs, cord blood cells, olfactory ensheathing cells (OECs), and others [31]. Of particular interest are SCs. Briefly, SCs are, by definition, cells that are capable of self-renewal and have the ability to differentiate into several cell types [32]. Based on their origin, SCs can be subdivided into embryonic SCs, adult or somatic SCs, and induced SCs; and, based on their ability to differentiate, they are subdivided into totipotent, pluripotent, multipotent, and unipotent (here, we refer readers to several recent reviews on this subject, for example, [33]). Regenerative therapy with the use of SCs has proved to be a success for many conditions, allowing at least partial functional and structural tissue restoration. Promising trends have also been demonstrated in the case of using SCs for post-SCI therapy [34], including results both from animal studies and from clinical trials [35]. For example, the transplantation of human neural stem cells into the injured spinal cord of primates had some restorative effects; in particular, the grafted cells survived for several months, host synapse-forming axons regenerated into the graft, and the implanted cells also extended their axons through the recipient tissues and formed synapses [36].
To sum up, transplanted SCs may contribute to post-SCI neuronal regeneration via several mechanisms: by replacing dead cells at the lesion site and differentiating into cells of the neuronal lineage, partially restoring the disrupted neuronal circuitry; by promoting the remyelination of axons and contributing to long-distance axon regeneration; by secreting neurotrophic factors, anti-inflammatory cytokines, pro-angiogenic factors, and exosomes with bio-active cargo; through their neuromodulatory activities; through the potential activation of endogenous neurogenesis; etc.
There are three main strategies for using stem cells for post-SCI tissue regeneration: (1) to stimulate endogenous SCs, (2) to use the SC-derived secretome, or (3) to transplant exogenous SCs into the damaged tissue.
As for the stimulation of endogenous SCs, it is tempting to agree with the recent statement of DeFrates et al. that an attractive, although challenging, therapeutic strategy for SCI may be to evoke endogenous regenerative mechanisms in the damaged tissue of so-called "non-regenerative" species such as humans, which are incapable of epimorphosis. For example, one of the key influencers of mammalian tissue regeneration is the transcription factor hypoxia-inducible factor-1 (HIF-1a), also known to be implicated in stem cell maintenance [37]. Supposedly, deciphering the regulation of the molecular networks orchestrated by HIF-1a in regenerating species following injury may help us to evoke the endogenous neuroregenerative potential in non-regenerating species. It should be noted that the stabilization of HIF-1a post-SCI might be beneficial for many other reasons, as it leads to the inhibition of neural apoptosis and enhances axon regeneration [38]. The activation of dormant ependymal cells post-SCI might also be in line with the aforementioned strategy. It is suggested that the major underlying mechanism of spinal cord regeneration in the axolotl is the Sox2-dependent "awakening" of dormant ependymal cells post-SCI, although the detailed mechanisms are still enigmatic (as reviewed in [39]). In adult humans, the latent ependymal population can be activated by injury, although its stem cell potential is a highly controversial topic, and the possibility of adult neurogenesis in humans (even if induced by external stimuli) remains in doubt. In mice, a population of immature ependymal cells acting as potential spinal cord stem cells was recently identified [40]. Such approaches, of course, are only speculative, given that the identities of the ependymal cells of the adult spinal cord differ between the axolotl and humans (and even between humans and the more closely related mice), and there are other fundamental molecular and cellular differences between the species. Nevertheless, deciphering the molecular mechanisms of spinal cord regeneration in so-called "regenerative" species may provide new insight into this subject, and cell-based screenings or an in silico search for bio-active compounds modulating such activation might be of potential clinical value.
As for the SC-derived secretome, based on animal studies, SC-derived exosomes and their molecular cargo have the potential to become a cell-free therapy for SCI [41]. Apart from exosomes, other types of bio-active molecules within the SCs' secretome might also have therapeutic value (as reviewed in [42]), and their intravenous administration might become one of the treatment strategies for SCI.
As for the third strategy, based on the transplantation of exogenous SCs, there is mounting experimental evidence supporting the anti-inflammatory role of MSCs in post-SCI treatment and their ability to stimulate nerve-regenerative signaling pathways and promote vascular repair [43]. The strategy of cell-based therapy for SCI became particularly attractive after the publication of the seminal work of Takahashi and Yamanaka, describing an approach to reprogram somatic cells into a pluripotent state via the forced expression of several transcription factors (TFs) [44]; this work led to a "gold rush"-like era of extensive efforts in SC-based therapy research and development. Since then, significant progress in the management of neurotrauma, and SCI in particular, has been achieved by applying SCs as a therapeutic tool, including not only the widely used MSCs but also the SC-derived biologically active secretome, induced pluripotent stem cells (iPSCs), and directly reprogrammed multipotent neural stem cells (drNSCs) or neural progenitor cells (drNPCs) capable of giving rise to neurons, astrocytes, and oligodendrocytes [45][46][47]. A detailed comparison of the several types of SCs in the context of SCI cell therapy can be found in a recent encyclopedic review by Shao A. et al. [48], and, hereafter, we focus on drNPCs.
Directly Reprogrammed Pro-Neuronal Cells
From the point of view of a clinical neurologist, the use of drNSCs/drNPCs has some advantages compared to iPSCs or MSCs. First and foremost, it is widely accepted that, in the case of using directly reprogrammed cells, the tumorigenic risk is significantly lower compared to that of iPSCs. Secondly, the procedure of direct reprogramming is much faster and cheaper compared to iPSC generation. Thirdly, direct reprogramming preserves the epigenetic profile of the cell, which is lost in the case of iPSCs [49,50]. Many clinical trials have demonstrated some benefits of MSCs in SCI, perhaps due to the paracrine effect of the MSC-derived secretome. However, despite the encouraging results with MSCs, drNPCs are an even more promising candidate for the cell-based therapy of SCI, given their "pro-neuronal" features. Admittedly, in the case of in vitro direct reprogramming, the population of reprogrammed cells has some degree of heterogeneity; thus, there is always a risk of transplanting a sub-population of non-converted non-neuronal cells. It might be assumed that the transplantation of unmodified non-neuronal cells to the lesion site post-SCI (for example, fibroblasts, the cells most commonly used for the generation of drNPCs) is either not detrimental but does not lead to any functional improvement [51], or perhaps might contribute to fibrotic scar formation [52]. Thus, the transplantation of a mixture of cells, some of them being non-neuronal, might potentially pose some risks. The commonly used strategy to mitigate this issue is the marker-based selection of cells from the heterogeneous population before implantation (for example, cells are captured by specific antibodies conjugated to magnetic beads, and fluorescence-activated cell sorting or magnetic-activated cell sorting is used [53]). This risk is also mitigated in the case of the in situ reprogramming of cells already present in the lesion region. Furthermore, there have been several attempts to perform direct reprogramming in situ, suggesting the potential feasibility of such an approach. Of course, any genetic manipulation involving the integration of exogenous genetic material bears potential risks; thus, the transplantation of non-modified cells like MSCs seems to be safer than that of genetically manipulated iPSCs or drNPCs (assuming that they were generated via the ectopic expression of TFs delivered by viral constructs). Either way, the SC-derived secretome (MSC-, iPSC-, or drNPC-derived) can be used as a safer cell-free therapeutic agent, given its role in pro-regenerative paracrine signaling. For example, as we previously demonstrated in non-human primates, drNPC transplantation partially compensates for the limited regenerative plasticity of the adult spinal cord and contributes to its functional improvement post-SCI (as assessed by commonly used functional tests, namely, the kinematic assay, neurological assessment, and the neurophysiological investigation of the evoked potentials (SSEP and MEP)), supposedly through paracrine trophic support in the areas of active growth cone formation [54].
To date, a wealth of protocols has been established for the generation of iPSCs from somatic cells of any lineage and their subsequent differentiation into cells of the neuronal lineage, for different species including humans. In parallel, a mammoth worldwide effort has been put into developing protocols for direct neuronal reprogramming, allowing for the direct generation of different types of cells of the neuronal lineage from fully differentiated non-neuronal cells while avoiding the pluripotent state (as summarized in our recent review [55] and by others [56]; Table 1).
A number of studies during the last decades have accumulated evidence demonstrating the potential clinical value of NPCs, both non-differentiated and differentiated, including differentiation into region-specific cells. To name but a few, in a recent work by Xu et al., collagen scaffolds were populated in vitro by human NPCs, which were induced to differentiate into different types of dorsal and ventral neuronal cells. Next, the scaffolds were transplanted into animal models of SCI (mice and rhesus monkeys) and demonstrated therapeutic effects that were more prominent compared to the implantation of non-differentiated NPCs within the same type of scaffolds [57]. Furthermore, the direct reprogramming of human astrocytes into early neuroectodermal cells and their subsequent implantation into the SCI lesion area in a mouse model resulted in the differentiation of the implanted cells into region-specific neurons that formed synapses with the neurons of the host [58].
Mounting evidence from other similar studies suggests the potential clinical value of such an approach and advocates for the development and optimization of reprogramming methods. Briefly, in the majority of such reprogramming protocols, several key TFs, the so-called pioneer TFs (a subset of TFs that are capable of binding "silent" ("closed") chromatin and recruiting other TFs to initiate lineage-specific transcription programs), are ectopically expressed in cells, resulting in global and local changes in the epigenome, transcriptome, metabolome, etc., as well as an overall shift of the cell fate.
Such TFs might optionally be supplemented or substituted by particular small-molecule inhibitors/activators or regulatory microRNAs, proteins, etc., as well as supplemented by optional so-called cooperative TFs. In the case of direct reprogramming, there are also protocols that might require the repression of some lineage-specific TFs or "barrier" factors to achieve cell fate conversion (for example, REST1 is one of such "barrier" factors, and its suppression mediates the conversion of fibroblasts to the neuronal lineage [63]).
There are different systems that can be used for the ectopic expression of TFs, for example, the delivery of non-integrating plasmids via electroporation or lipofection, the delivery of genetic material via transduction with lentiviral vectors, the use of Sendai virus (SeV) vectors, and others. There are advantages and disadvantages to the different systems, as discussed elsewhere. Although many protocols claim that it is possible to generate drNPCs using only chemical reprogramming agents, without any TFs, we found that, in many cases, such "reprogramming" of fibroblasts generates cells expressing some markers of the target cells of the neuronal lineage (for example, beta-III-tubulin, commonly and almost exclusively found in neurons) and also resembling the target cells in terms of size/shape, but these cells also retain some markers of fibroblasts and may fail functional tests. One possible explanation of such an inconsistency in the published data is the fact that, in many publications reporting protocols for reprogramming, no functional tests were performed; or, maybe, direct reprogramming via chemical cocktails should only be performed under hypoxic conditions, as reported in the original work by Cheng et al., who introduced this methodology [71]. We insist that functional tests (for example, the commonly used electrophysiology tests, such as whole-cell patch clamp recordings [59]) are necessary, because the presence of some neuronal markers does not guarantee that the cell is a functional neuron.
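As a purely illustrative aside, the kind of marker-based triage described above (cells expressing neuronal markers while retaining fibroblast markers) can be sketched computationally. The marker panels, the expression threshold, and the category labels below are illustrative assumptions, not a validated pipeline, and, as stressed in the text, a marker score is no substitute for functional electrophysiology.

```python
# Illustrative sketch: marker-based triage of putatively reprogrammed cells.
# Marker panels and the expression threshold are hypothetical assumptions;
# a marker score is NOT evidence of functional neurons (see text).

NEURONAL_MARKERS = {"TUBB3", "MAP2", "SYN1"}   # assumed neuronal panel
FIBROBLAST_MARKERS = {"S100A4", "COL1A1"}      # assumed fibroblast panel
THRESHOLD = 1.0                                # assumed "expressed" cutoff

def classify_cell(expression: dict) -> str:
    """Label a cell from its gene -> expression-level mapping."""
    neuronal = any(expression.get(g, 0.0) > THRESHOLD for g in NEURONAL_MARKERS)
    fibroblast = any(expression.get(g, 0.0) > THRESHOLD for g in FIBROBLAST_MARKERS)
    if neuronal and not fibroblast:
        return "putative neuron"        # candidate for functional testing
    if neuronal and fibroblast:
        return "partially converted"    # retains fibroblast markers
    return "non-converted"

cells = [
    {"TUBB3": 3.2, "MAP2": 2.1},        # neuronal markers only
    {"TUBB3": 2.5, "COL1A1": 4.0},      # mixed identity
    {"COL1A1": 5.0, "S100A4": 2.2},     # fibroblast-like
]
labels = [classify_cell(c) for c in cells]
print(labels)  # ['putative neuron', 'partially converted', 'non-converted']
```

Even such a toy scheme makes the text's caveat concrete: the "partially converted" category (neuronal markers present, fibroblast markers retained) would pass a naive single-marker check yet still fail whole-cell patch clamp testing.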
Overall, even though small molecules are undoubtedly very attractive as clinical tools, perhaps using TFs (alone or in combination with other factors) is still the best strategy for direct pro-neuronal reprogramming; thus, using direct reprogramming for SCI therapy remains a knotty problem, given the hurdles of the ectopic expression of reprogramming TFs in situ and the overall risks of genetic intervention (such as the forced expression of TFs) as a therapeutic tool. This also raises a burning question: what evidence is necessary and sufficient to rigorously and convincingly confirm the direct reprogramming of cells to the neuronal lineage? It should be noted that, apart from chemical reprogramming, other methods for the direct reprogramming (without the use of TFs) of somatic cells into the neuronal lineage also exist, for example, via transcriptional and chromatin modulation by the CRISPR activator system [67]. Finally, there are non-chemical and non-genetic routes to reprogram fibroblasts to pro-neuronal cells, for example, based on mechanotransduction signaling and biophysical stimuli, as discussed in detail further in the text.
There are also several reports about the possibility of the in situ direct conversion of non-neuronal somatic cells to neurons, including in the post-SCI tissue microenvironment. Firstly, there is a claim that the conversion of astrocytes was achieved in situ post-SCI via reprogramming by NeuroD1 in mice [68]. Secondly, the in situ reprogramming of NG2 glia toward a neurogenic state in mice was reported [69]. Lastly, it was suggested in a recent work that the pharmacological inhibition of NOTCH1 signaling can also trigger the direct conversion of astrocytes to neurons in situ post-SCI in mice (the pro-neuronal conversion was assumed based on observed changes in the expression of pro-neural TFs, namely, NeuroD1, NeuroD2, Pax6, Lmx1a, and Lhx6) [70]. However, the presumed possibility of the direct conversion to neurons in situ remains a point of controversy, given the possibility of the misinterpretation of the observed results or the flaws of the molecular tools used in the aforementioned studies [72,73].
Strategies to Increase Efficiency of Direct Reprogramming to Neuronal Lineage
The low efficiency of pro-neuronal reprogramming is a hurdle to its clinical application [74]. Thus, methodological approaches that increase the efficiency/speed of direct reprogramming to the neuronal lineage are constantly being developed. Indeed, any molecular, mechanical, or physical manipulation "paving the way" for the preferential and easier conversion of the cell into the particular target cell type facilitates reprogramming (Figure 2).
One option might be manipulations affecting the cyto- and nucleoskeleton. As a proof of principle, targeting the actomyosin contraction of the cytoskeleton in fibroblasts evoked an "intermediate" neuron-like state in cells, making them more prone to subsequent reprogramming into neurons [75]. Based on RNA-seq data, Herdy et al. uncovered several pathways critical for the conversion of fibroblasts to neurons (for example, integrin signaling, HIF-1a signaling, Rho-family GTPase signaling, and others) and identified molecular modulators of these pathways, allowing for a significant increase in the yield of reprogrammed cells [76]. In particular, among these "reprogramming booster" molecules were Pyrintegrin (an integrin activator) and ZM336372 (a Raf-1 activator); potentially, both compounds could promote cytoskeleton reorganization, thus minimizing mechanical stress-induced apoptosis during cell fate conversion.
A very interesting finding was reported in a recent article by Yang J. et al. Using mouse fibroblasts, the authors introduced DNA double-strand breaks in the region encoding ribosomal RNA (the rDNA region) in nucleoli, allowing them to be faithfully repaired, and observed that the cell fate of the treated cells was "primed" toward neurons. In particular, they observed changes in histone modifications (a decrease in the H3K27me3 mark) in the promoter regions of the genes Neurod1 and Nefh, which play key roles in determining "neuronal fate", as well as transcriptome changes in gene ontology categories related to neuronal processes. In such "primed" fibroblasts, direct reprogramming with chemical agents was more efficient compared to non-primed fibroblasts, as assessed by the derepression of Neurod1 and Nefh and the neuron count [77]. This finding is in line with the previously reported observation that, regarding long-range chromosomal interactions, rDNA constitutively interacts with regions related to nervous system development and may play a role in the regulation of their transcriptional activity [78]. Moreover, the direct interaction of rDNA with genes involved in differentiation was reported [79], and the role of nucleoli in orchestrating cell fate is well documented. Finally, during the differentiation of ESCs into NPCs, neural genes located in the regions interacting with rDNA move away from nucleoli to become derepressed [80]. Based on all of the above, we propose that rDNA/nucleoli might be a novel target to "prime" fibroblasts toward direct reprogramming into neurons, which requires further validation.
Furthermore, any shifts in cell fate are associated with, and at least partially initiated by, changes in the epigenome (histone modifications, 3D chromatin organization, and DNA methylation) or, as aptly asserted, "epigenetics: <are> judge, jury and executioner of stem cell fate" [81]. Thus, changes in histone modifications and long-range chromosomal interactions might also prime cells toward pro-neuronal reprogramming. Other ways to increase the efficiency of direct reprogramming include so-called "epigenetic resetting" [82] via the introduction of changes in histone modifications or in DNA methylation.
For example, the temporary inhibition of histone deacetylases and bromodomain proteins enhanced the kinetics of the neuronal reprogramming of adult fibroblasts in a recent study [83]. Another work reported that the "epigenetic resetting" of fibroblasts by DNA demethylation (treatment with 5-azacytidine), followed by culture in neuronal differentiation media, resulted in the upregulation of "stemness" genes (Sox2, Klf4, Nanog, and Oct4) after the demethylation and the expression of neuronal lineage markers after differentiation [84]; however, a significant weakness of this study was the absence of functional tests of the supposedly reprogrammed cells.
Apart from chemical and biochemical cues, physical cues may also affect cell fate. Epigenetic changes leading to a change in the chromatin landscape and, subsequently, transcriptome changes are involved in cell fate regulation, and mechanical stimuli (the stiffness of the extracellular matrix, various external stimuli, etc.) can be transmitted through the cytoskeleton to the nucleoskeleton to elicit such epigenetic changes.
The reorganization of the cytoskeleton and, subsequently, of the nucleoskeleton architecture, as well as epigenetic modulation, can also be achieved via magnetic stimuli. The impact of such cues on neurogenic differentiation was recently comprehensively reviewed [85]. Notably, non-invasive repetitive trans-spinal magnetic stimulation (rTSMS) can also modulate lesion scarring post-SCI in mice by inhibiting demyelination and enhancing neuronal survival and axonal regrowth, in part via stimulating ependymal cells to differentiate into astrocytes and oligodendrocytes [86] (of course, given the fundamental differences in neuroregeneration between mice and humans, the translational significance of such an observation is yet to be assessed).
As for mechanotransduction, it is known that scaffold-free 3D culture conditions enhance cell stemness, at least for some types of cells, via a variety of molecular mechanisms. As early as 2013, it was claimed that it is possible to convert fibroblasts into NPC-like cells by forced growth in 3D spheres [87]. Recently, it was demonstrated that culturing astrocytes in a non-adhesive 3D spherical culture system results in the partial conversion of astrocytes into NPC-like cells, as assessed by the levels of expression of the Sox2, Pax6, Oct4, Nanog, Sox10, and Pax3 genes [88]. Perhaps such pre-conditioning might make cells more prone to pro-neural reprogramming. Mechanotransduction and the impact of mechanical forces also play a role in cell reprogramming, and a recent thought-provoking study by Song et al. demonstrated that transient nuclear deformation can boost reprogramming efficiency (the conversion of fibroblasts into neurons) via the induction of the expression of Ascl1, a bona fide pro-neuronal pioneer TF, and through other mechanisms. Reprogramming was confirmed by elevated levels of neuron-specific markers such as class III beta-tubulin (Tubb3) at early stages of reprogramming, and of markers of mature neurons, microtubule-associated protein 2 (MAP2) and synapsin, at later stages [89]. Somewhat contradictory to the work by Song et al., it was also reported that soft substrates facilitate the direct chemical reprogramming of fibroblasts into neurons [90]. Additionally, the enhanced conversion of fibroblasts to neurons was achieved using tunable electrical stimulation (ES) [91]. There are scarcely any publications on the role of ES in direct pro-neural reprogramming. Moreover, the exact molecular mechanism explaining the impact of ES on pro-neuronal differentiation is still largely unknown. There were several works about the impact of ES on the cell fate of neural stem cells [92] and others published almost a decade ago, as summarized in [93]. Recent work on iPSCs also demonstrated that ES induces robust neuronal fate determination [94]. At the same time, other studies demonstrated that ES stimulates non-neuronal reprogramming as well; for example, it induces the direct reprogramming of human dermal fibroblasts into hyaline chondrogenic cells [95]. As for the direct reprogramming of non-neuronal cells into neuronal progenitors, the role of ES in this process and its exact molecular mechanisms are yet to be deciphered.
Furthermore, manipulations affecting mitochondria can also play a role in the pro-neuronal cell fate switch, given the critical dependence of neurons on mitochondrial function [96], the role of mitochondria-mediated metabolic changes in the regulation of neural differentiation [97], and the differences between the mitochondrial proteomes of neurons and other cells. For example, it was shown that the induction of neuron-enriched mitochondrial proteins stimulates direct glia-to-neuron conversion [98] and that increased mitochondrial activity accelerates neuronal differentiation [99].
It is known that clear metabolic differences exist among fibroblasts, NSCs/NPCs, and fully differentiated neurons in terms of their predominant modes of energy production. During neuronal differentiation, NSCs undergo massive changes in metabolism, including increased OXPHOS [99]; changes of a similar nature, if not a similar scale, might take place in the case of the direct and indirect reprogramming of fibroblasts to NSCs/NPCs. The inhibition or stimulation of glycolysis decreases or enhances, respectively, the efficiency of iPSC generation from differentiated somatic cells [100].
Nowadays, it is assumed that mitochondria and energy metabolism play a starring, or even controlling, role in both neurogenesis and cell fate regulation [99,101]. Thus, it is not unreasonable to suggest that fibroblasts can be "primed" toward direct reprogramming to the neuronal lineage via induced alterations of the major cellular bioenergetic pathways, glycolysis and oxidative phosphorylation (OXPHOS). Such alterations can be induced, for example, via simple changes in the composition of the growth media [102]. The proposed approach is indirectly supported by a recent publication reporting that a glycolytic switch occurs during the direct reprogramming of fibroblasts to endothelial cells, and such reprogramming can be abrogated via the inhibition of this switch [103]. Furthermore, the inhibition of HIF-1a signaling with the compound KC7F2, promoting OXPHOS over glycolysis, resulted in the facilitated conversion of fibroblasts to neurons [76]. Notably, metabolic alterations toward the glycolytic metabotype also occur at the stage of blastema assembly and are necessary for cell fate transition [104], highlighting their role in regeneration. Having said this, the bioenergetics of direct reprogramming and the effects of metabolic manipulation on the efficiency and molecular mechanisms of reprogramming remain largely understudied.
A very recent finding is that the TFs ATF7IP, JUNB, SP7, and ZNF207 oppose the cell fate switch in all tested types of cells in mice. They pose a barrier to reprogramming through the downregulation of the genes required for such a switch and by maintaining, in a closed state, the chromatin loci that can be targeted by reprogramming TFs [105]. Perhaps the pharmacological targeting of these TFs might be instrumental in enhancing the capabilities of direct reprogramming. Finding human-specific TFs with similar functions is a task of obvious priority.
Similarly, knockdown of the transcription-coupled histone chaperone FACT (resulting in "disorganized chromatin"), in combination with the forced expression of TFs known to induce the reprogramming of fibroblasts into neurons, increased the reprogramming rate by up to 1.5-fold; furthermore, the reprogrammed cells were either generated earlier or matured faster [106].
Another guardian of the cell's fate is the nuclear scaffold and its key components, nuclear lamins in particular. Indeed, the 3D organization of the genome and, subsequently, the epigenetic state and transcriptional activity of the genes involved in cell fate decisions at least partially depend on the interaction of chromatin with lamins and the overall nuclear (and genomic) morphology. Moreover, in human fibroblasts, manipulations that affect the nuclear scaffold (such as the transient loss of its core component, Lamin A/C) resulted in the opening of previously closed chromatin domains and thus facilitated cellular reprogramming to pluripotency [107].
Additionally, the modification of TFs used for cell fate conversion is another promising approach to increase the efficiency of cell reprogramming. For example, Ascl1 is one of the transcriptional regulators determining neuronal differentiation, and its transcriptional activity and capability to drive ectopic neurogenesis are modulated by its multi-site phosphorylation status at serine-proline sites [108]. Thus, it is not surprising that a phosphorylation-deficient Ascl1, in which these phosphorylation sites are mutated, causes enhanced neuronal conversion of astrocytes in mice [109]. Perhaps using such "improved" versions of Ascl1 and other TFs commonly used for cell reprogramming might be an approach to facilitate the cell fate switch in humans as well.
Expanding the repertoire of such TFs is also worth a try. Of particular interest is the recently developed computational tool TRANSDIRE for the prediction of TFs that might induce direct reprogramming in several human cell types (in other words, novel pioneering TFs that are potentially more potent in terms of evoking the transcriptional changes prerequisite for the phenotype switch toward a particular cell type) [110]. This tool allowed the authors to predict the TFs that could induce direct reprogramming from fibroblasts, based on combined "omics" data. For neural reprogramming, such novel candidates were MEIS2, ARNT2, PEG3, and others, predicted by TRANSDIRE alongside known TFs (NEUROD1, REST, and others).
G-quadruplexes (G4s), four-stranded nucleic acid secondary structures formed by stacked tetrads of guanosine bases in both DNA and RNA, might also play a role in cell fate regulation. In human DNA, they are predominantly formed in enhancers, promoters, and intron/exon borders. G4s are known to be involved in the regulation of transcription, mRNA processing, and localization, including in the case of "neural" genes, and supposedly play a role in cellular differentiation [111,112]. G4s are present in high numbers in human ESCs, and their levels dramatically decrease following differentiation and cell lineage specification; they are also found in SC regulatory elements [113]. It was shown that targeting G4 stability in NSCs promotes the production of oligodendrocyte progenitors [114]. Thus, the role of G4s in the generation of drNPCs and their differentiation, and the role of modulators of G4 stability in these processes, should be further elucidated.
Next, it was demonstrated in the seminal work by Roy et al. in 2018 that differentiated fibroblasts, if cultured in laterally confined conditions, become less differentiated (SC-like) even in the absence of exogenous reprogramming TFs (the phenomenon of mechanical reprogramming) [115], perhaps through Lef1 activation [116]. A similar approach can also be used to facilitate the cell fate switch: differentiated somatic cells that acquire phenotypic plasticity in this way would, supposedly, be more prone to reprogramming.
Finally, as discussed above, the mechanical, physical, and topological characteristics of the substrate for cell culture dramatically affect some aspects of cell behavior, including the stemness/differentiation potential. This brings to our discussion another tool of post-SCI neuroregenerative therapy: TES. Indeed, in the past few years there seems to have been a steady increase in reports focusing on TES for successful neuroregeneration [117], including SC-incorporating TES for SCI treatment (as comprehensively reviewed in [27]). Such TES should meet several requirements: briefly, they have to be biodegradable (allowing for their substitution with tissue), bio-compatible, and bio-mimetic, i.e., recapitulating key properties of the neural tissue, for example, conductivity [29]. In tissues, the extracellular matrix (ECM) is critically important for the spatiotemporal positioning of regulatory biomolecules, for guiding cell migration and growth, and so on. When transplanted to the lesion site or to the perilesional area, TES should mimic the characteristics of the ECM (to some degree) and compensate for its loss. Apart from simple mechanistic compensation for the tissue loss, TES also play many other roles in post-SCI neuroregeneration, as discussed below. It was shown that characteristics of the substrate, in particular surface topography, may guide SCs toward a particular lineage. For example, in the pioneering work of Ghazali et al., adipose-derived SCs were forced toward neural differentiation with the use of cell-imprinted substrates. Briefly, the authors used polydimethylsiloxane silicone substrates to capture and recapitulate the topology of the target human NPCs and, subsequently, cultured adipose-derived stem cells on these substrates, which led to changes in cell morphology and the upregulation of several markers of neural SCs as well as early neuronal markers [118]. Later, another work confirmed this observation that cell-imprinted patterns may harness SCs toward a particular cell fate [119].
The sustained delivery of neurotrophic factors (brain-derived neurotrophic factor (BDNF), neurotrophin-3 (NT-3), nerve growth factor (NGF), and others) is another approach to SCI management (as reviewed in [120]). Although the concept of using them for post-SCI neuroregeneration is not novel, there have been several recent studies in which they are used in combination with TES as modulators of the differentiation of NSCs, including works in which such factors are immobilized in TES and maintain their neurotrophic functions. For example, in a rat model of SCI, NGF was immobilized in TES based on silk protein nanofiber hydrogels, and it was demonstrated that such NGF retained its ability to modulate the differentiation of NSCs [121]. Many other up-to-date neuroprotective bio-active molecules for post-SCI therapy were summarized in the recent review by Shah et al. [122].
It should be noted that the differentiation of NPCs toward oligodendrocytes is also being extensively investigated [123,124] and has potential therapeutic applications. However, this topic is beyond the scope of our focused review.
Potential Obstacles to Be Resolved
While acknowledging the undoubted translational potential of SCs, here we concede that, in their current state, protocols involving SCs are still not a therapeutic "holy grail" for SCI; for example, multiple human clinical trials on the use of MSCs in SCI showed results that were definitely encouraging but still not a breakthrough [125]. The clinical value of drNPCs and the drNPC-derived secretome is yet to be comprehensively assessed. Non-negligible hurdles of the cell-based therapeutic approach to SCI therapy are the relatively low survival of progenitor cells transplanted to the damaged post-SCI tissue and their differentiation toward astroglia in the inflamed tissue micro-environment. Indeed, while some authors provided evidence of pro-neuronal differentiation and differentiation toward oligodendrocytes of NPCs in the post-SCI lesion, others insisted that transplanted NPCs tend to differentiate toward astroglia (summarized in [126]). As for the survival of transplanted NPCs in the post-SCI lesion, in some experiments it was estimated at ∼25% [126]. This can be mitigated by transplanting cells to the lesion site within TES in combination with bio-active molecules modulating cell fate and survival.
Finally, there are obviously many fundamental differences between conditions in vitro and in vivo, especially in the case of the post-SCI in vivo micro-environment. For this reason, not all approaches to enhance the efficiency of direct reprogramming in vitro are applicable to in situ cell fate conversion. The protocols for reprogramming and differentiation developed in vitro under normoxia conditions might not be compatible with physioxia conditions. Indeed, normoxia (an atmospheric O2 concentration of ~20-21%, commonly used in cell culture experiments) is significantly higher than the ~1-11% O2 observed in vivo in tissues (physioxia) [127]. The same can be said about the substrate/micro-environment for cell attachment in vitro and in vivo, the "inflamed" micro-environment of the post-SCI tissue, and so on.
Furthermore, several approaches to SCI therapy have failed on the path to translation [128]. Notably, the cell-based therapies described in the aforementioned review failed in translation from animal models to clinical use. As for the failure to translate from in vitro to in vivo, several classical examples of the poor predictability of in vitro models come from the field of the preclinical development of CNS-targeted therapies [129]. Thus, unless the linchpin characteristics of the in vivo post-SCI conditions are recapitulated in the culture systems used to establish the protocols of cell reprogramming and subsequent differentiation, there is a risk that their clinical applicability will be limited.
Conclusions
Directly reprogrammed pro-neuronal cells hold clinical potential, whether used as a standalone therapy or in combination with other therapeutic tools such as TES, small molecules, and others. These cells can also serve as a source of biologically active, pro-regenerative secretomes. Enhancing the efficacy of direct pro-neuronal cell fate conversion could further bolster the translational applications of drNPCs, including their use in SCI neuroregeneration. In this concise review, we provided an overview of various strategies to meet this challenge, including some that have yet to be tested. Encouraging data, including our own, support the continued investigation of neuronal progenitor cells for SCI treatment.
Figure 1. Combination therapy approaches to SCI. (A) Application of TES and cell therapies to SCI. TES transplanted to the lesion site provide mechanical support to the tissue, can be used as a drug delivery system, might have conductive properties, and guide cell differentiation, proliferation, and migration. NPCs transplanted within TES might either differentiate toward neurons/oligodendrocytes, thus compensating for the cell loss due to the SCI, and/or contribute to neuroregeneration via a pro-regenerative and neurogenic secretome milieu, modification of the extracellular matrix facilitating tissue repair, and recruitment or modulation of the functionalities of sub-populations essential for post-SCI recovery (for example, M2 macrophages). (B) SCI lesion site. Without any therapeutic interventions, the SCI lesion site is characterized by hemorrhage, edema, pronounced cell death, inflammation, glial and fibrotic scar formation, exacerbated tissue damage, secretion of pro-inflammatory molecules, and recruitment, activation, or phenotype switch of multiple sub-populations of cells (macrophages, residential astroglia, and others).
Figure 2. Schematic representation of approaches to improve the efficiency of direct pro-neural reprogramming. Reprogramming efficiency can be enhanced via modulation of mechanotransduction and the use of particular mechanical stimuli; modulation of epigenetic barriers to reprogramming (DNA methylation and/or histone modifications, globally or at particular loci); modulating chromatin organization, the nuclear scaffold, and the cytoskeleton; targeting rDNA and nucleoli; altering bioenergetic pathways and mitochondria; expanding the repertoire of reprogramming TFs and using genetically engineered ones; removing "molecular barriers" to reprogramming (silencing of particular TFs or regulatory proteins); electrical stimulation of cells; and targeting G-quadruplexes or other regulatory DNA/RNA structures. Applied in vitro at the stage of direct reprogramming of non-neuronal somatic cells toward the pro-neural cell fate, these approaches precede the transplantation of reprogrammed cells within TES to the SCI lesion site, as a novel cell-based therapy complementing conventional therapies.
Table 1. Current methodological approaches to direct neuronal reprogramming. * In different combinations, with optional addition of small molecules, overexpression of regulatory proteins, and use of other molecular tools.
Race and sex differences in ROS production and SOD activity in HUVECs
Black individuals and men are predisposed to an earlier onset and higher prevalence of hypertension, compared with White individuals and women, respectively. Therefore, the influence of race and sex on reactive oxygen species (ROS) production and superoxide dismutase (SOD) activity following induced inflammation was evaluated in female and male human umbilical vein endothelial cells (HUVECs) from Black and White individuals. It was hypothesized that HUVECs from Black individuals and male HUVECs would exhibit greater ROS production and impaired SOD activity. Inflammation was induced in HUVEC cell lines (n = 4/group) using tumor necrosis factor-alpha (TNF-α, 50 ng/ml). There were no between-group differences in ROS production or SOD activity in HUVECs from Black and White individuals, and HUVECs from Black individuals exhibited similar SOD activity at 24hr compared with 4hr of TNF-α treatment (p>0.05). However, HUVECs from White individuals exhibited significantly greater SOD activity (p<0.05) at 24hr as compared to 4hr in the control condition but not with TNF-α treatment (p>0.05). Female HUVECs exhibited significantly lower ROS production than male HUVECs in the control condition and following TNF-α induced inflammation (p<0.05). Only female HUVECs exhibited significant increases in SOD activity with increased exposure time to TNF-α induced inflammation (p<0.05). HUVECs from White individuals alone exhibit blunted SOD activity when comparing control and TNF-α conditions. Further, compared to female HUVECs, male HUVECs exhibit a pro-inflammatory state.
Introduction
Cardiovascular disease (CVD) is the leading cause of death in the United States, and race and sex are well-established mediators of CVD risk [1]. CVD is thought to be a culmination of chronic inflammation, oxidative stress, and vascular dysfunction, termed the vascular health triad [2][3][4][5]. Vascular function describes the vasodilatory capacity of a blood vessel. Vasodilation is largely attributed to endothelial cells, which compose the innermost layer of blood vessels and release vasodilatory substances, such as nitric oxide (NO) [6][7][8]. Endothelial cells are also impacted by inflammation and mediate oxidative stress via NO release and subsequent NO-mediated reactive oxygen species (ROS) scavenging [3,6,7,9,10]. Because endothelial cells are implicated in vascular dysfunction, inflammation, and oxidative stress, they are often used as a model to investigate cardiovascular disease mechanisms. Human umbilical vein endothelial cells (HUVECs) are harvested from the umbilical cord following birth, with the sex of the baby and the mother's age and health status recorded. While not directly possible in many in vivo models, investigation of mechanisms of cardiovascular disease via treatment with inflammatory cytokines [11], vaccines [12], or differing shear stresses [13] is possible in HUVECs.
Importantly, compared with HUVECs from White individuals, HUVECs from Black individuals have exhibited greater inflammation, as indicated by higher NO levels and higher interleukin-6 (IL-6) concentrations [14]. HUVECs from Black individuals have also exhibited greater expression of ROS-producing proteins, suggesting higher ROS production [14][15][16]. Interestingly, HUVECs from Black individuals have exhibited higher total antioxidant capacity (TAC) but lower superoxide dismutase (SOD) activity, suggesting a greater overall ability to clear or remove excess ROS [11,13,14]. Taken together, basally or at rest, HUVECs from Black individuals exhibit greater inflammation, greater ROS production, and higher ROS clearance capacity compared with HUVECs from White individuals. However, to the best of our knowledge, there is limited data regarding the role of race in oxidative stress (ROS production and clearance) following induced inflammation in HUVECs.
Sex differences in the inflammatory response following induced inflammation have also been noted in HUVECs [17]. In umbilical cord blood, there is no difference in estrogen exposure between male and female babies; however, male babies are exposed to greater testosterone concentrations than female babies [18]. Interestingly, when primed with testosterone, but not estradiol, for 48 hours, HUVECs from both male and female individuals exhibit greater inflammatory responses to tumor necrosis factor alpha (TNF-α) [17]. However, when primed with testosterone derivatives for 1 hour before TNF-α stimulation, HUVECs exhibit a reduced inflammatory response to TNF-α [19]. Thus, exposure time to sex hormones may play an important role in inflammatory responses. However, there is limited data regarding the role of sex in oxidative stress (ROS production and clearance) following induced inflammation.
Therefore, the purpose of the current study was to investigate the influence of race and sex on ROS production and SOD activity in HUVECs following induced inflammation. We hypothesized that HUVECs from Black individuals would exhibit greater ROS production and lower SOD activity than HUVECs from White individuals following induced inflammation. Further, we hypothesized that female HUVECs would exhibit lower ROS production and greater SOD activity than male HUVECs following induced inflammation. TNF-α was utilized to induce inflammation due to its role in inflammation and cytokine production in the human body [20] and its efficacy in previous studies [11,17].
Ethical approval
All procedures were approved by the Institutional Review Board at the University of Maryland-College Park (IRB# 1713185) and conformed to standards set by the Declaration of Helsinki. All protocols were performed on anonymized, commercially obtained HUVEC samples, making the protocol exempt from direct human subjects informed consent, as determined by the IRB.
Human umbilical vein endothelial cell lines
HUVECs were obtained from a commercial company, Lonza, from young, healthy donors free of overt disease or pregnancy complications. Eight total cell lines were obtained: two cell lines of each race and each sex (n = 4 for each racial group and sex group). For analyses of HUVECs from Black and White individuals (Figs 1 and 2), each racial group includes two female and two male cell lines (n = 9-12 per time point and condition). For analyses of female and male HUVECs (Figs 3 and 4), each sex group includes two cell lines from each racial group (Black and White; n = 9-12 per time point and condition). The inclusion of both sexes in the by-race analyses and both races in the by-sex analyses was done to balance the impact of sex or race, respectively. For SOD activity, one Black female cell line and one White male cell line were not analyzed due to poor cell growth, leading to n = 9 for all SOD activity data. For female CellROX/Hoechst data, one of the triplicate data points for one of the Black female cell lines had poor Hoechst absorbances, leading to n = 11.
Inflammatory stimulus
TNF-α, a pro-inflammatory cytokine, was used to induce inflammation in the HUVECs. First, optimization experiments were performed to determine the TNF-α concentration needed to elicit inflammation without unnecessary cell death. Pooled HUVECs (separate from those used in experiments) in endothelial basal media and 2% fetal bovine serum were exposed to increasing concentrations of TNF-α for 24 hours, and cells were visually analyzed for cell death and detachment. ROS production was measured via CellROX green assay and cell count was assessed via Hoechst assay (Thermo Scientific-ThermoFisher Scientific; Waltham, Massachusetts). Based on the results, the experimental TNF-α concentration was set at 50 ng/ml, as this concentration produced the greatest ROS production per viable cell.
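The dose-selection logic described above can be sketched as follows; the fluorescence values, the dose range, and the helper names are hypothetical illustrations, not data from the optimization experiments.

```python
# Sketch of the dose-optimization logic above: pick the TNF-alpha dose with
# the greatest ROS per viable cell (CellROX/Hoechst ratio). All fluorescence
# values below are invented for illustration.

def ros_per_viable_cell(cellrox, hoechst):
    """ROS signal normalized to cell count (CellROX/Hoechst ratio)."""
    return cellrox / hoechst

def pick_dose(doses, cellrox_signals, hoechst_signals):
    """Return the dose (and its ratio) maximizing ROS per viable cell."""
    ratios = [ros_per_viable_cell(c, h)
              for c, h in zip(cellrox_signals, hoechst_signals)]
    best = max(range(len(doses)), key=lambda i: ratios[i])
    return doses[best], ratios[best]

# Hypothetical readings across a TNF-alpha dose range (ng/ml)
doses = [0, 10, 25, 50, 100]
cellrox = [1200, 1500, 1900, 2600, 1400]  # ROS fluorescence (a.u.)
hoechst = [900, 880, 860, 800, 550]       # cell-count fluorescence (a.u.)

dose, ratio = pick_dose(doses, cellrox, hoechst)
print(dose)  # -> 50
```

With these example numbers, the highest raw CellROX signal and the highest per-cell ratio happen to coincide at 50 ng/ml; normalizing to Hoechst matters because higher doses kill cells and deflate the denominator.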
HUVECs experimental overview
All experiments were performed in triplicate on Black, White, female, and male HUVECs treated identically and cultured in parallel. Based on estimated between-group means and standard deviations of n = 6/group from TNF-α stimulation in Brown et al. [11], a power calculation for the present study determined the effect size for SOD activity was 1.09 with α = 0.05 and 1-β = 0.80. HUVECs were received at passage 1 (P1) and stored in liquid nitrogen. HUVECs were thawed, grown to ~80% confluence, and stored in P2 aliquots. All experiments were performed in P3-5 HUVECs. HUVECs were grown to 80% confluence before being cultured in either (1) endothelial growth medium (Lonza; Basel, Switzerland) and 2% fetal bovine serum (time-matched control) or (2) endothelial growth medium, 2% fetal bovine serum, and the experimental TNF-α concentration (50 ng/ml) in 96-well (10,000 cells/well seeded at least 18 hours prior to time 0 of experimentation) or six-well plates (60,000 cells/well seeded at least 18 hours prior to time 0 of experimentation). Six-well plate cell lysate samples were collected at 4 hours and 24 hours following time-matched control or TNF-α stimulation. For cell lysate collection, six-well plates were placed on ice and 20mM HEPES buffer containing 1mM EGTA, 210mM mannitol, and 70mM sucrose was pipetted onto the cells. After scraping with a sterile bent pipette tip, cell lysate samples were collected in Eppendorf tubes and rotated for 20 minutes at 4˚C. The samples were then centrifuged at 1500xg for 20 minutes at 4˚C. The supernatant was removed and stored at -80˚C for future assay. To determine protein concentration, Pierce BCA protein assays (Thermo Scientific-ThermoFisher Scientific; Waltham, Massachusetts) were performed on cell lysate samples according to the manufacturer's instructions. Briefly, a series of known bovine serum albumin (BSA) standards was prepared to achieve BSA concentrations from 2000 ug/ml to 25 ug/ml. Next, the BCA working reagent was prepared by mixing 50 parts
of BCA Reagent A with 1 part of BCA Reagent B. Then, 25 ul of each BSA standard and unknown sample were pipetted in duplicate into a 96-well plate, followed by 200 ul of the BCA working reagent. Following adequate shaking of the microplate, the microplate was incubated at 37˚C for 30 minutes prior to absorbance readings at 562 nm. BCA protein concentrations were then determined from the standard curve generated from the BSA standards, and protein concentration values were averaged for unknown samples.
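The standard-curve arithmetic described above amounts to a least-squares line through absorbance versus known BSA concentration, inverted for unknowns. A minimal sketch follows; the absorbance values are hypothetical, not assay data, and `fit_line`/`concentration` are illustrative helper names.

```python
# Minimal sketch of a BCA standard-curve calculation: fit a line to
# absorbance (A562) vs. known BSA concentration, then invert it to read
# off the concentration of an unknown. Values are invented for illustration.

def fit_line(x, y):
    """Ordinary least-squares slope and intercept for y = m*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    m = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return m, my - m * mx

def concentration(absorbance, m, b):
    """Invert the standard curve: c = (A - b) / m."""
    return (absorbance - b) / m

# Hypothetical duplicate-averaged standards (ug/ml vs. A562)
bsa_ug_ml = [25, 125, 250, 500, 1000, 2000]
a562 = [0.08, 0.15, 0.24, 0.42, 0.78, 1.50]

m, b = fit_line(bsa_ug_ml, a562)
unknown = concentration(0.60, m, b)  # protein conc. of an unknown sample
```

In practice the duplicate absorbances would be averaged before fitting, exactly as described in the protocol.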
ROS production
To determine ROS production at 4 hours and 24 hours following time-matched control or TNF-α stimulation, CellROX green assays (Invitrogen-ThermoFisher Scientific; Waltham, Massachusetts) were performed according to the manufacturer's instructions. Cells were passed into 96-well plates with EGM-2 and 2% fetal bovine serum media and allowed to grow to 80% confluence prior to experimentation. On the day of experimentation, media with or without 50 ng/ml TNF-α was placed on the cells and allowed to incubate for 4 or 24 hours (separate plates used for each time condition). For the fluorescence assay, 5 μM of CellROX Reagent and 1 μM Hoechst in EGM-2 and 2% fetal bovine serum was placed on the cells (master mix contained 4.8 ml of EGM-2, 4.8 μl of Hoechst, and 7.2 μl of CellROX). Following a 30-minute incubation, the medium was removed and cells were washed three times with PBS. Fluorescence was immediately read via a micro-plate reader (Synergy H1 Hybrid Reader; BioTek, Winooski, VT) with excitation at 485 nm and emission at 520 nm for CellROX and excitation at 361 nm and emission at 486 nm for Hoechst.
Cell count was assessed via Hoechst assay (Thermo Scientific-ThermoFisher Scientific; Waltham, Massachusetts) performed according to the manufacturer's instructions. To determine the ROS produced per viable cell, a CellROX/Hoechst ratio was calculated and used to compare ROS production between race and sex groups.
SOD activity
SOD activity was subsequently determined in cell lysate samples via a commercially available SOD activity assay kit according to the manufacturer's instructions (Cayman Chemical; Ann Arbor, Michigan). Through xanthine oxidase-dependent superoxide production, this assay indexes the activity of all three SODs: SOD1 (cytoplasmic), SOD2 (mitochondrial), and SOD3 (extracellular). One unit of SOD is the amount of SOD needed to elicit 50% dismutation of superoxide [21].
On the day of assay, 200 μL of the diluted radical detector was added to each well in the 96-well plate. Then, 10 μL of serially diluted standard or diluted sample was added to each well. Next, 20 μL of xanthine oxidase was added to each well as quickly as possible. Following a 30-minute incubation on a plate shaker, absorbance was read between 440 and 460 nm using a plate reader. The absorbance of the known standards was used to create the absorbance-concentration equation. The determined equation was then used to calculate the concentration of each sample based on absorbance. Standards and samples were analyzed in duplicate and averaged across duplicates. SOD activity is normalized to protein content (U/mg).
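The last two steps, inverting the standards' absorbance-activity equation and normalizing to BCA protein content, reduce to the following arithmetic. The slope, intercept, absorbance, and protein values are invented for illustration and are not the kit's actual calibration.

```python
# Hedged sketch of the SOD-activity arithmetic: convert a sample absorbance
# into U/ml via the standards' linear fit, then divide by protein content
# (mg/ml) to report U/mg. All numeric values are illustrative only.

def activity_from_curve(absorbance, slope, intercept):
    """Invert the standards' linear absorbance-activity fit: U/ml = (A - b) / m."""
    return (absorbance - intercept) / slope

def normalize_to_protein(u_per_ml, protein_mg_per_ml):
    """Express SOD activity per mg protein, as reported (U/mg)."""
    return u_per_ml / protein_mg_per_ml

# Illustrative values: absorbance falls as SOD activity rises (negative slope)
u_ml = activity_from_curve(0.35, slope=-0.08, intercept=0.50)
print(normalize_to_protein(u_ml, 2.5))  # -> 0.75 (U/mg)
```

Dividing U/ml by the BCA-derived mg/ml is what puts all cell lines on a common U/mg scale despite differing lysate yields.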
Statistical analysis
The present study evaluated the impact of race and sex on inflammatory responses and ROS clearance capacity in HUVECs following induced inflammation. All data were assessed for normality and outliers via the Shapiro-Wilk test for normality and the ROUT 1% method (1% = the false discovery rate of outliers) [22], respectively. Two-way ANOVAs with factors of race and time (4 hours and 24 hours) or sex and time (4 hours and 24 hours) were performed separately for the time-matched control and TNF-α conditions. Post-hoc t-tests were performed on any significant model effects. Data are presented as mean (SD), and effect sizes (Hedges's g_s; formula: Hedges's g_s = (mean of group 1 − mean of group 2) / pooled standard deviation) of significant findings were calculated using a supplemental effect size spreadsheet [23]. Effect sizes represent the magnitude of difference between group means in terms of standard deviations, whereby an effect size of g_s = 1 represents a 1 standard deviation difference in means between two compared groups. For Hedges's g_s, an effect size of 0.2 is considered small, 0.5 medium, and 0.8 large. All statistical analyses were performed in GraphPad Prism v9 (San Diego, CA).
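The effect-size formula above can be sketched in code. The small-sample correction factor 1 − 3/(4(n1+n2) − 9) below is the standard approximation for converting Cohen's d_s to Hedges's g_s; whether the study's spreadsheet uses exactly this approximation is an assumption:

```python
import math


def hedges_gs(group1, group2):
    """Hedges's g_s: difference in means divided by the pooled SD,
    multiplied by the usual small-sample correction factor."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    # Unbiased (n-1) sample variances for each group
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    sd_pooled = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    cohens_ds = (m1 - m2) / sd_pooled
    correction = 1 - 3 / (4 * (n1 + n2) - 9)  # small-sample bias correction
    return cohens_ds * correction
```

A returned value of 1 means the group means differ by roughly one pooled standard deviation, matching the interpretation given in the text.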
No race differences in ROS production in HUVECs
For ROS production in the control condition, there were no significant model effects, suggesting similar ROS production within and between races during the time-matched control (Fig 1A). For ROS production in the TNF-α condition, there was a significant main effect of time alone (p = 0.0014; Fig 1B). In HUVECs from Black individuals, the effect size of ROS production at 24hr as compared with 4hr of TNF-α stimulation was Hedges's g_s = 0.73 (p = 0.079). In HUVECs from White individuals, the effect size of ROS production at 24hr as compared with 4hr of TNF-α stimulation was Hedges's g_s = 1.37 (p = 0.0025).
Race differences in SOD activity in HUVECs
For SOD activity normalized to protein content (U/mg), there was a significant effect of time for both the control (Fig 2A) and TNF-α conditions (Fig 2B) (p = 0.0022 and p = 0.027, respectively). However, further analyses revealed that SOD activity normalized to protein content in HUVECs from Black individuals was similar between time points for both conditions (SOD activity (U/mg), Black: control 4hr vs 24hr: p>0.05; TNF-α 4hr vs 24hr: p>0.05). SOD activity normalized to protein content in HUVECs from White individuals was significantly greater at 24hr as compared to 4hr in the control condition alone (SOD activity (U/mg), White: control 4hr vs 24hr: p = 0.0034, Hedges's g_s = 1.54; TNF-α 4hr vs 24hr: p>0.05).
Sex differences in ROS production in HUVECs
For ROS production in the control condition, there was a significant main effect of sex (p = 0.0020; time effect p = 0.069; Fig 3A). Further investigation revealed female HUVECs exhibited significantly lower ROS production than male HUVECs at 24hr in the control condition (p = 0.0060, Hedges's g_s = 1.20). For ROS production in the TNF-α condition, there were significant effects of time and sex (p = 0.0004 and p = 0.0009, respectively; Fig 3B). Both female and male HUVECs exhibited greater ROS production at 24hr of TNF-α stimulation when compared to 4hr of TNF-α stimulation (female: p = 0.050, Hedges's g_s = 0.81; male: p = 0.0030, Hedges's g_s = 1.34). Interestingly, female HUVECs exhibited significantly lower ROS production than male HUVECs at 24hr in the TNF-α condition (p = 0.0040, Hedges's g_s = 1.29).
Sex differences in SOD activity in HUVECs
For SOD activity normalized to protein content in the control condition, there was a significant effect of time and a significant interaction (p = 0.0014 and p = 0.045, respectively; Fig 4A). For SOD activity normalized to protein content in the TNF-α condition, there was a significant effect of time (p = 0.022; Fig 4B). Further within-sex analysis revealed female HUVECs exhibited significantly greater SOD activity normalized to protein content at 24hr as compared with 4hr for both conditions (control: p = 0.0035, Hedges's g_s = 1.52; TNF-α: p = 0.0098, Hedges's g_s = 1.31), whereas male HUVECs exhibited similar SOD activity (U/mg) between time points for both conditions (p>0.05).
Discussion
The novel findings of the study are: 1) HUVECs from White individuals alone experienced an increase in SOD activity with increased growth time that was abolished with TNF-α treatment; 2) female HUVECs exhibited significantly lower ROS production than male HUVECs in the control and TNF-α conditions; and 3) female HUVECs exhibited significantly greater SOD activity with increased exposure time to TNF-α.
No racial differences in ROS production
Previous studies have shown that HUVECs from Black individuals have significantly higher basal protein expression of various superoxide-producing NADPH oxidases, suggesting higher basal ROS production [13][14][15]. Interestingly, as compared with HUVECs from White individuals, HUVECs from Black individuals also exhibit greater endothelial nitric oxide synthase (eNOS) expression. Increased eNOS expression alongside greater ROS production, as seen in previous literature, may indicate decreased NO bioavailability, increased NO-related clearance of superoxide, or increased ROS production in HUVECs from Black individuals [13][14][15]. However, the findings from the current study are not in agreement with previous literature, potentially due to the magnitude of the inflammatory stimulus used, priming of HUVECs from Black individuals by chronic stressors to the mother, or differential inflammatory pathway activity or activation between HUVECs from White and Black individuals. In the present study, HUVECs from both races exhibited increases in ROS production at 4hr and 24hr of TNF-α treatment with no differences between races basally. Plausibly, the magnitude of ROS production, based on the effect sizes, was greater in HUVECs from White individuals (Hedges's g_s = 1.37) as compared to HUVECs from Black individuals (Hedges's g_s = 0.73).
A plausible explanation for the discrepancies between the present study and the previous literature is differential inflammatory pathways in HUVECs from Black and White individuals. Specifically, recent studies suggest HUVECs from Black individuals have greater inflammatory responses (greater metalloproteinase-2 and endothelial microparticle release) to TNF-α stimulation as compared to HUVECs from White individuals [11,24]. HUVECs from Black individuals have also exhibited greater C-reactive protein (CRP) receptor expression than HUVECs from White individuals before and after induced inflammation, potentially suggesting a greater ability to respond to CRP binding and inflammation generally [25]. Taken together, the current and previous findings suggest differential inflammatory pathways in HUVECs from Black and White individuals, with HUVECs from Black individuals potentially exhibiting greater inflammation and potentially causing greater counteractive changes in oxidative stress (e.g., greater responsiveness in terms of greater NO concentrations, greater eNOS expression, greater CRP receptor expression, and greater responses to induced inflammation and laminar shear stress) [13-15, 24, 25].
As race is a social construct, a second potential explanation for the discrepancies in response to stressors in the present study versus previous studies could be social factors (beyond the scope of this study): socially, Black individuals experience more chronic stress (for example, racism, socio-economic stress, and housing-related stress) than White individuals [26], and thus may be more "primed" and less responsive to added stressors [27]. While these previous findings are in humans, perhaps a similar phenomenon occurs physiologically, whereby, when HUVECs are treated with similar concentrations of TNF-α, the impact of those concentrations may differ at the cellular level between Black and White individuals.
Racial differences in SOD activity
SOD is an enzyme that primarily attenuates ROS levels by clearing superoxide. In the present study, when comparing the 4hr and 24hr control conditions, HUVECs from White individuals, but not HUVECs from Black individuals, exhibited greater SOD activity normalized to protein content. This may be due to the previously mentioned differential impact of the magnitude of the inflammatory stimulus used (i.e., HUVECs from Black individuals may be primed to respond to the inflammatory stimulus used while HUVECs from White individuals may not be as primed). The increase in SOD activity in HUVECs from White individuals in the control condition was absent in the TNF-α condition, suggesting a reduction or attenuation of the normally increased SOD activity when exposed to TNF-α in HUVECs from White individuals alone. In the present study, SOD activity in HUVECs from Black individuals did not significantly change with time in the control or TNF-α condition. Indeed, HUVECs from Black individuals have exhibited significantly lower SOD activity normalized to protein content basally and following 4 hours of TNF-α exposure when compared with HUVECs from White individuals [11,13,14]. Yet, HUVECs from Black individuals have also exhibited significantly greater total antioxidant capacity than HUVECs from White individuals basally [14]. Taken together, it is plausible that HUVECs from Black individuals may be primed to respond to inflammatory stimuli due to a higher capacity to clear ROS from other, non-SOD1 and non-SOD2 antioxidant sources (glutathione peroxidase, uric acid, SOD3 [extracellular]). The similar ROS production and SOD activity seen in HUVECs from Black individuals in the current study is plausibly explained by this greater total antioxidant capacity. However, young, healthy Black individuals also exhibit greater plasma and circulating immune cell oxidative stress, total antioxidant capacity, and SOD activity [14,28,29].
Sex differences in ROS production and SOD activity
Sex differences in ROS production and SOD activity may be due to 1) male HUVECs being primed to exhibit a pro-inflammatory state due to higher androgen exposure in the umbilical cord [18], 2) male HUVECs having a dampened antioxidant response to TNF-α stimulation that may be sex chromosome related [30], and/or 3) male HUVECs having augmented cellular stress responses [31]. In the present study, male HUVECs exhibited significantly greater ROS production than female HUVECs basally and following induced inflammation. With SOD activity normalized to protein content, only female HUVECs exhibited significantly greater SOD activity (U/mg) at 24hr when compared with 4hr for both conditions, further supporting better ROS clearance capabilities via increased SOD activity in female HUVECs.
Sex differences in ROS production and SOD activity may be due to differences in androgen actions, total antioxidant capacity, or cellular stress responses. The concentration of estrogen in umbilical cord blood does not differ by fetal sex, and, thus, male and female HUVECs likely experience similar estrogen concentrations [18]. Lower androgen actions in female HUVECs as compared to male HUVECs could explain the lower ROS production and higher SOD activity in female HUVECs. In a previous study, androgen exposure primed the inflammatory impacts of TNF-α exposure in female and male HUVECs, and umbilical cord blood has shown sex differences in testosterone levels, with male umbilical cords exhibiting higher testosterone concentrations [17,18]. Specifically, androgens have been shown to increase vascular adhesion molecule expression and monocyte adhesion, molecules and events implicated in CVD progression [17]. It is plausible that the greater androgen concentrations in the umbilical cord blood of male HUVECs 'prime' a more pro-inflammatory state in male HUVECs as compared with female HUVECs, whereby exposure to a subsequent inflammatory stimulus, such as TNF-α, results in greater oxidative stress (heightened ROS production) and impaired SOD activity in male HUVECs as compared with female HUVECs. However, there are noted sex-specific effects of circulating androgens in men and women [32]; some evidence suggests testosterone is vasodilatory and antioxidative in men [33] while other evidence suggests no cardiovascular protection from testosterone in men [34]. The effects of testosterone in men and women are still being studied. Thus, the present findings and the role of sex in the actions of testosterone warrant further investigation.
The present study's results suggest a greater ability to clear ROS via greater SOD activity following induced inflammation in female HUVECs, as indicated by lower ROS levels as compared to male HUVECs together with increased SOD activity in female HUVECs alone. Indeed, previous literature suggests about a quarter of endothelial cell gene transcripts are influenced by sex [30]. Thus, the present findings may be due to sex-based differences in the expression of SOD or other antioxidants, but further research is warranted to explore this potential relationship. Further, following cellular stress in a previous study, male HUVECs exhibited greater ROS production, lower cell viability, lower angiogenesis, and lower NF-κB pathway activation than female HUVECs, suggesting a greater oxidative response and lower inflammatory response in male HUVECs [17,31]. Taken together, the current results and previous literature suggest male HUVECs may exhibit a more pro-inflammatory state (greater ROS production, lower SOD activity, greater apoptosis, impaired angiogenesis) than female HUVECs.
Considerations
These disparate physiological responses, especially in the HUVECs, could "stem from disparities in socio-economic status, educational status, as well as other social determinants of health which a growing body of research has shown to be linked to structural racism" faced by the mothers (beyond the scope of the current study) (p. H2372) [12]. The purpose of this study is not to suggest one race has inherently lower physiological function. On the contrary, it is recognized that 'race' is a social construct, one with real, material consequences. The purpose of the current study is to present potential explanations of the observed results, which may be due to social determinants of health [12]. Therefore, one major limitation is that the data do not take into consideration how various social determinants of health, including systemic racism faced by the mothers, could prime HUVEC responses in the present study [12]. Further, the maternal health environment (physical activity, social determinants of health, smoking status, social stressors, etc.) impacts the vascular health of the mother and, thus, the environment of HUVECs. It is therefore important to consider the maternal health environment as a potential source of variation and differential responses in HUVECs [35].
Limitations
The current study includes some limitations. First, only SOD activity was measured, and, thus, a complete picture of ROS clearance capacity was not fully captured. Second, HUVECs are subject to venous circulation as opposed to arterial circulation, meaning HUVECs undergo different hemodynamics and are in a different environment than arterial endothelial cells. Importantly, HUVECs are a commonly used endothelial cell model [11,12,15,25,31], arterial endothelial cells can be hard to harvest from healthy individuals, and adequately harvested arterial endothelial cells often come from diseased or older individuals. Third, the current study did not measure levels of specific ROS or reactive nitrogen species, limiting the assumptions regarding alterations in oxidative stress. Lastly, because of the observed sex differences and within-race divergences, sex and race likely confounded one another and require further research.
Conclusions
HUVECs from Black and White individuals exhibit divergent responses to TNF-α-induced inflammation, with HUVECs from White individuals, but not HUVECs from Black individuals, exhibiting a blunted increase in SOD activity with increased exposure time to TNF-α. Interestingly, female and male HUVECs exhibited sex differences in ROS production and SOD activity, with female HUVECs exhibiting significantly lower ROS production and significantly higher SOD activity than male HUVECs following TNF-α exposure, suggesting sex differences in susceptibility to induced inflammation in HUVECs. The current findings underscore the importance of noting the race and sex (or indicating that race and sex are pooled if using pooled HUVECs) of HUVECs used in in vitro research.
"year": 2023,
"sha1": "fcf336aea388cf272f018ef3934ceb1fba9644d6",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0292112&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "fcf336aea388cf272f018ef3934ceb1fba9644d6",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
251641173 | pes2o/s2orc | v3-fos-license | :
In this study, the antioxidative effect of melatonin was investigated in aging retinal pigment epithelial cells in vitro. The objective of this study is to explore the administration of pharmacological doses of melatonin for the prevention of aging symptoms. Specific concentrations of melatonin (20 µM, 40 µM and 80 µM) were used to treat hydrogen peroxide-induced aging retinal pigment epithelial cells, and flow cytometry was employed to examine retinal pigment epithelial cell apoptosis. A specific probe was utilized to detect the intracellular reactive oxygen species concentration, and apoptosis-associated proteins were detected by real-time polymerase chain reaction. Commercially available assay kits detected the expression of the oxidative biomarkers superoxide dismutase, malondialdehyde and glutathione. In comparison to normal cells, the aging retinal pigment epithelial cell model had lower cell viability, higher apoptosis rates and a more severe oxidation status. Melatonin increased cell viability while lowering apoptosis and oxidative stress. It influenced the expression of apoptosis-related proteins as well as oxidative stress indicators. Finally, treatment with melatonin was able to regulate proliferation, oxidative stress and apoptosis in aging retinal pigment epithelial cells. The application of melatonin may be a novel strategy to protect against age-related changes in age-related macular degeneration. Melatonin has protective effects against hydrogen peroxide-induced retinal cell death. It inhibits hydrogen peroxide-induced retinal pigment epithelial cell damage and decreases the apoptosis rate.
Age-related Macular Degeneration (AMD) is one of the most prevalent and significant vision issues among the elderly, and oxidative stress complicates its etiology. A prominent risk factor for AMD is oxidative stress in Retinal Pigment Epithelial (RPE) cells [3], and the disease's treatments are still limited. Patients with neovascular AMD can benefit from photodynamic therapy and anti-vascular endothelial growth factor agents, but there are currently no cures available [4]. As a result, the only option is to monitor and prevent progression from early to late AMD. Clinical trials focusing on the treatment of early AMD have yielded no conclusive results based on previous research. As a result, elucidating the pathogenesis of AMD is critical. AMD is thought to be linked to both advancing age and tobacco smoking [5]. Naturally, it is possible that age-related structural changes in the retina play a role in AMD pathogenesis. RPE cells are found in the outer layer of the retina [6] and are involved in photoreceptor cell regeneration and repair. In the normal state, these cells have unique characteristics and their loss may result in retinal degeneration and possibly blindness [7]. Degenerative and progressive conditions of the RPE cells are also recognized as important pathogenetic mechanisms in AMD. Clinically, RPE modification is one of the most important markers for AMD diagnosis [8]. Protection of the RPE cells from age-related changes could be crucial in preventing progression from early AMD to neovascular AMD. The aging of retinal tissues has been associated with oxidative stress, which has been demonstrated to have a substantial impact on a number of ocular illnesses. A reactive or pathogenic process marked by the formation of Reactive Oxygen Species (ROS) or a decline in antioxidant capacity is referred to as oxidative stress. Superoxide anion (O2•−), hydroxyl radical (•OH) and Hydrogen Peroxide (H2O2) are examples of ROS. The uncontrolled
production of ROS can result in serious cellular damage. Since abnormal proteins and lipids accumulate with age, oxidative stress results in less cell repair and regeneration. Furthermore, given its high metabolic activity, the retinal pigment epithelium has a large number of mitochondria to generate a sufficient amount of Adenosine Triphosphate (ATP) to carry out all of its physiological functions [9]. Age-related mitochondrial dysfunction can thus lead to an increase in oxidative stress in the RPE, resulting in AMD [10]. RPE cells are extremely important to photoreceptor function, as they are not only responsible for the recycling of the visual pigments but also key in the phagocytosis of photoreceptor outer segments. The retinal pigment epithelium is vulnerable to oxidative harm because of its placement in a highly oxygenated and illuminated environment, which can lead to cellular malfunction, inflammation and finally cell death [11]. Melatonin (N-acetyl-5-methoxytryptamine) is a free radical scavenger and the primary antioxidant defender against reactive hydroxyl radicals. Melatonin has several beneficial effects on physiological functions associated with age, including metabolic sensing, mitochondrial modulation and proliferation, anti-oxidative protection of biomolecules and anti-inflammatory actions [12]. Nocturnal production of melatonin decreases as age progresses in animals of different species, including humans [13]. The grafting of a pineal gland from young donors, either into the thymus of old syngeneic mice or in situ into pinealectomized old mice, prolongs the life of the recipients [14]. Melatonin's potential to produce life-span extensions is a popular subject for discussion; numerous studies show melatonin's anticarcinogenic and anticancer capabilities [14].
Cell culture:
The Aging Retinal Pigment Epithelial (ARPE-19) cells were sourced from the National Cell Bank of Iran (Pasteur Institute, Iran). DMEM/Ham's F-12 (1:1), supplemented with 5 mM glucose, 10% fetal bovine serum and 1% penicillin/streptomycin, was used to grow and passage the cells. For each experiment, ARPE-19 cells were employed at 80% confluence and passaged every 3 days after being placed into culture medium. The ARPE-19 cells employed were in the 3rd to 6th passages. The cells were cultured at 37° in a humidified incubator containing 5% Carbon Dioxide (CO2).
Aging cell model:
The method described by Zhuge et al. was used to create a pulsed H2O2 exposure aging cell model [15]. In general, passaged ARPE-19 cells were transplanted to a 10 cm culture dish at a controlled cell density of 50%. On the first day, the cultivated cells were incubated for 2 h at 37° in complete medium containing 800 µM H2O2. The cells were then rinsed three times with phosphate buffered saline after the growth medium was removed, and fresh medium was introduced for cell culture. The H2O2 exposures were repeated on days 2-5 and the cell culture medium was changed to complete medium on day 6. On day 8, the treated RPE cells were passaged and replated on 10 cm cell culture dishes after being digested with trypsin-ethylenediaminetetraacetic acid.
MTT assay:
The MTT assay was performed to test the viability of the cells. The 96-well plate was filled with MTT dye (Sigma Aldrich) and incubated overnight at 37°. After adding 150 µl of dimethyl sulfoxide to each well, the 96-well plates were shaken thoroughly for 10 min until the crystals dissolved completely. A BioTek ELx808 microplate reader was used to measure the absorbance of the wells at 540 nm.
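Converting the raw 540 nm absorbance readings into percent viability relative to the untreated control is typically done as below. The blank-well subtraction is a common convention and an assumption here, since the study does not describe its exact calculation:

```python
def percent_viability(sample_od, control_od, blank_od=0.0):
    """Viability as a percentage of the untreated control, after subtracting
    the blank (medium-only) absorbance from both readings."""
    denominator = control_od - blank_od
    if denominator <= 0:
        raise ValueError("control OD must exceed the blank OD")
    return 100.0 * (sample_od - blank_od) / denominator
```

For example, a treated well reading 0.6 against a control of 0.8 (blank 0.1) corresponds to roughly 71% viability.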
Flow cytometry:
Apoptosis was measured using the Annexin V-fluorescein isothiocyanate apoptosis detection kit (eBioscience). All of the cultivated ARPE-19 cells in each group were trypsinized and centrifuged. The cells were treated with ice-cold 70% ethanol after two washes with phosphate buffered saline. The cells were then transferred to a new Eppendorf tube after being resuspended in 1 ml of phosphate buffered saline. Propidium Iodide (PI) solution and Annexin V-fluorescein isothiocyanate were added and the cells were incubated in the dark for 20 min. Flow cytometry was used to examine the cells. All of the detections were carried out in duplicate.
Measurement of ROS:
The 2',7'-Dichlorofluorescin Diacetate (DCFH-DA) used in the ROS assay kit (Bioquochem, Spain) detects ROS generated under stress conditions such as exposure to oxidants or drugs. At a density of 1×10^6 cells/ml, the cultivated ARPE-19 cells were transferred to a 12-well plate. DCFH-DA, at a working concentration of 50 µM, was added to each well. The cells were then cultured in the dark at 37° for 15-30 min before being washed twice in phosphate buffered saline and used for flow cytometry analysis.
Detection of oxidative biomarkers:
The levels of oxidative stress in the different groups were measured using three typical oxidative biomarkers: SOD, MDA and GSH. Aging RPE cells were treated with low (20 µM), medium (40 µM) or high (80 µM) doses of melatonin for 24 h, and the oxidative biomarkers in these three treatment groups were then assessed. In general, commercially available assay kits were used to determine SOD, MDA and GSH activity levels. All procedures were carried out according to the manufacturer's instructions and each sample was tested three times.
Quantitative analysis of gene expression:
Total RNA was extracted from the different groups using a Pars Tous kit (Pars Tous Co., Iran) according to the manufacturer's recommendations. RNA concentrations were determined by Ultraviolet (UV) spectrophotometry (Eppendorf, Germany). Complementary DNAs (cDNAs) were synthesized from 500 ng of Deoxyribonuclease (DNase)-treated RNA samples with a QuantiTect reverse transcription kit using oligo(dT) primers. The specific primers used for Polymerase Chain Reaction (PCR) reactions are listed in Table 1. These primers were synthesized by Pishgam Company (Tehran, Iran). PCRs were performed using Master Mix and EvaGreen in an Applied Biosystems StepOne™ thermal cycler (Applied Biosystems, USA). The PCR program commenced with an initial melting cycle for 5 min at 95° to activate the polymerase, followed by 40 cycles of melting (30 s at 95°), annealing (30 s at 58°) and extension (30 s at 72°). Melting curve analyses were used to verify the quality of the PCR reactions. A standard curve (logarithmic dilution series of cDNA from the testes) was used to determine the amplification efficiency of each gene. For each sample, the reference gene (Glyceraldehyde-3-Phosphate Dehydrogenase (GAPDH)) and target gene were amplified in the same run. Reference gene efficiencies were approximately equal. The target genes were normalized to the reference gene and expressed relative to a calibrator.
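Normalizing a target gene to GAPDH and expressing it relative to a calibrator is often computed with the Livak 2^(−ΔΔCt) method. This sketch assumes roughly 100% amplification efficiency for both genes, whereas the study verified efficiency with standard curves:

```python
def relative_expression(ct_target_sample, ct_ref_sample,
                        ct_target_calibrator, ct_ref_calibrator):
    """Livak 2^(-ddCt) relative quantification.

    Normalizes the target gene Ct to the reference gene (e.g. GAPDH) in
    both the sample and the calibrator, then returns the fold change of
    the sample relative to the calibrator.
    """
    d_ct_sample = ct_target_sample - ct_ref_sample
    d_ct_calibrator = ct_target_calibrator - ct_ref_calibrator
    dd_ct = d_ct_sample - d_ct_calibrator
    return 2.0 ** (-dd_ct)
```

A returned value of 4.0, for instance, indicates four-fold higher target expression in the sample than in the calibrator after GAPDH normalization.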
Statistical analysis:
All the data are presented as the mean±standard deviation of three independent experiments. GraphPad Prism was used to conduct all statistical analyses. A one-way Analysis of Variance (ANOVA) test was employed to assess differences between groups, and p<0.01 was considered statistically significant.
RESULTS AND DISCUSSION
Melatonin's effect on cell viability is described below.
In vitro, the effect of different doses of melatonin on ARPE-19 cell viability in both normal and senescent cells was assessed using the MTT assay. Under normal conditions, melatonin produced a substantial decrease in cell viability only at 100 µM (fig. 1). In senescent cells, melatonin at 10 µM and 20 µM concentrations showed no significant difference in cell viability when compared to the control group (although a protective effect was apparent), whereas melatonin at 40 µM, 60 µM and 80 µM concentrations significantly increased cell viability (fig. 2). Overall, in the aging cells, when compared to the control group, melatonin dramatically improved cell viability (fig. 3).
Melatonin's protective effect on aging RPE cells was thus dose-dependent.
Antioxidant effect of melatonin in aging RPE cells is shown below. In cultivated ARPE-19 cells treated with different concentrations of melatonin, the levels of the oxidative stress indicators MDA, SOD and GSH were assessed. The results revealed that the non-treated aging ARPE-19 cells had greater MDA levels and reduced SOD and GSH levels (p<0.001). In the aged ARPE-19 cells, treatment with melatonin resulted in significantly higher levels of SOD and GSH and much lower levels of MDA when compared to the non-treated group. In this study, the MDA level was downregulated as melatonin concentrations increased, while SOD and GSH levels were elevated (fig. 4-fig. 6). The effects of melatonin on the expression of the apoptosis-associated genes B-cell lymphoma 2 (Bcl-2), Bcl-2-associated X protein (Bax) and caspase-3 in aging-induced apoptosis in cultured ARPE-19 cells were measured using quantitative gene expression analysis. The aged cells had decreased levels of anti-apoptotic Bcl-2 and higher amounts of apoptotic Bax and caspase-3 (fig. 7). When cells were treated with melatonin at different concentrations (20 µM, 40 µM and 60 µM), the expression of Bax and caspase-3 was reduced compared to non-treated cells. The level of Bcl-2, an anti-apoptotic protein, was also elevated as the quantity of melatonin was increased; Bcl-2 was enhanced by different amounts of melatonin, while Bax and caspase-3 were lowered. The same tendency was seen when oxidative-associated proteins were examined at varying melatonin doses. The levels of Bcl-2 and caspase-3 in the 20 µM and 40 µM melatonin-treated groups were similar in general. However, as compared to the 20 µM and 40 µM groups, the increase in Bcl-2 in the melatonin 80 µM group was not significant. The increase of each protein in the melatonin-treated groups is illustrated in fig. 8.
In the current study, intracellular ROS and cell death were also assessed. In the apoptosis groups, the results revealed a non-significant difference in apoptotic cell percentages. In the necrosis group, there was a significant increase in aging cells (33.63±1.76), with the highest value in the 80 µM group (33.99±2.14), when compared with the control group (11.88±2.21), while there was no significant change in the 40 µM group (14.59±1.867).
According to growing evidence, the oxidative stress of RPE cells is a critical aspect of the pathophysiology of AMD [16]. As a therapy option for AMD, research has concentrated on finding techniques to protect RPE cells from oxidative stress. In this regard, H2O2 exposure is a common model for conveying RPE cell oxidative stress susceptibility and antioxidant activity [17][18][19].
As previously mentioned, the circulating melatonin level is lowered with aging and in patients with AMD [20]. Melatonin has been shown to protect RPE cells against blue light and H2O2 damage in vitro, as well as against oxidative stress [17,18,21]. It is an antioxidant and endogenous ROS scavenger with a higher antioxidant capacity than other antioxidants such as vitamin E [22,23]. Different types of retinal cells, such as RPE cells and photoreceptors, may be protected by melatonin [23].
According to a diabetes mellitus study, endogenous and exogenous melatonin may influence metabolic abnormalities not only by modulating insulin secretion but also by providing protection against ROS [24] .
According to the findings of the present study, melatonin has numerous antioxidative properties in RPE cells. Its administration resulted in higher cell viability, lower apoptosis rates and a less severe oxidative state in aging RPE cell cultures.
Melatonin has strong antioxidative properties, making it a promising candidate for the prevention of oxidative stress-related diseases such as premature aging and degenerative diseases in humans. It could thus play a role in the pathogenesis of AMD, a disease that affects photoreceptors and the retinal pigment epithelium and has been linked to oxidative stress. A study has shown that melatonin can protect RPE cells from damage caused by ROS. Melatonin behaves similarly to synthetic mitochondria-targeted antioxidants, which concentrate at relatively high levels in mitochondria; thus, it may prevent mitochondrial damage in AMD [25].
A number of blinding retinal diseases, such as diabetic retinopathy, retinitis pigmentosa and AMD, are linked to increased oxidative stress. Antioxidant vitamins and minerals have been demonstrated to lower the risk of AMD. The apoptotic death of RPE cells, followed by photoreceptor cell death, is a major factor contributing to the pathogenesis of the dry form of AMD [26]. MDA, in turn, reflects oxidative damage in RPE cells. Therefore, to slow the development and progression of early AMD resulting from oxidative damage, it is critical to protect RPE cells by reducing ROS and MDA formation [27]. The current study indicated that exposure of ARPE-19 cells to H2O2 resulted in increased ROS and MDA generation, but these effects were significantly ameliorated by treatment with melatonin.
In this study, we used a pulsed H2O2 exposure paradigm as an in vitro aging model, which is more feasible than a single exposure; furthermore, the results from this in vitro model should be more accurate than those from a single H2O2 exposure.
According to this study, melatonin has antioxidant and anti-apoptotic properties in aged RPE cells that have been exposed to oxidative stress.
The findings also provide evidence for melatonin's protective role; the results suggest that melatonin may play a role in protecting RPE cells from oxidative stress. In conclusion, melatonin has protective effects against H2O2-induced retinal cell death; it inhibits H2O2-induced RPE cell damage, decreases the apoptosis rate, increases mitochondrial membrane potential and decreases caspase activation.

TABLE 1: SEQUENCES OF THE PARS TOUS PRIMERS AND PROBES
Note: F represents Forward primer and R represents Reverse primer.
Prior Knowledge-Based Event Network for Chinese Text
Text representation is a basic issue in text information processing, and events play an important role in text understanding; both attract the attention of scholars. The event network conceals lexical relations inside events, and its edges express the logical relations between events in a document. However, the events and relations are extracted from event-annotated text, which makes large-scale automatic text processing difficult. In this paper, with an expanded CEC (Chinese Event Corpus) as the data source and prior knowledge of the manifestation rules of events and relations as the guide, we propose an event extraction method based on knowledge-based rules of event manifestation, to achieve automatic construction and improve the text processing performance of the event network.
Introduction
Text representation is an important issue in natural language processing tasks such as information retrieval and text classification. An appropriate representation not only reflects text semantics, theme, and structure but also improves computational efficiency. In recent years, there has been a tendency in text information processing to use richer text representations than keyword-based and concept-based ones.
The notion of event originated in cognitive science and often appears in the literature of philosophy, cognitive science, linguistics, and artificial intelligence. It has been widely used in computational linguistics as well as in information retrieval and various NLP applications, and it plays a special and important role in understanding text semantics. It not only contains specific correlations among a group of text elements but also indicates logical dependencies between things, and it attracts more and more attention from researchers. Cognitive scientists believe that the event is not only the basic unit by which humans cognize and understand the objective world but also the storage cell of propositional memory [1]. Most current natural language processing technologies lay particular stress on theories of grammatical structure while ignoring the importance of semantic understanding, especially event semantic understanding [2]. Event-based text representation conforms to the rules of human cognition and natural language understanding.
From the literature on event-based text representation that we have consulted, the main open problems are the following: (1) research on event-based text representation is still in its infancy, and the idea of the event network is just beginning to blossom, so it needs to be explored further; (2) the operations and applications on event networks need to be defined and further researched.
Against the shortcomings of current traditional text representations, this paper takes the event as the feature item of text and proposes an event-based text representation method. An event is regarded as a semantic unit of text, events are connected by certain types of relations in the text, and these events imply correlations between linguistic units in the text by making the linguistic units (word, concept, sentence, etc.) elements of events. Text is no longer regarded as an aggregation of independent words; consequently, the "bag of words" problem of classic text representation is solved. The event network not only keeps the semantic information of text and presents events and the relations between events but also reflects the importance and dynamic behavior of events. Compared to traditional text representations, the event network can express semantics at a higher granularity, closer to reality and easier for computers to use when simulating human text understanding and memorization. It will provide new technologies and methods for semantics-based text information processing.
The paper is organized as follows: Section 2 introduces related work. Section 3 constructs event networks for Chinese texts in the field of emergencies. Section 4 evaluates the representation quality of event networks. In Section 5, the formal definition of the event network model for Chinese text is generalized by inducting and abstracting instances of event networks, and the advantages of the model are analyzed. Finally, we summarize the paper and give an outlook on future work.
Related Work

The semantic information of text is composed of two parts: text component terms (words, concepts, sentences, etc.) and the relationships between terms. Traditional text representations ignore the value of the order and relationships of the component terms for expressing semantics and assume that the terms are independent. In fact, the semantic meaning of a text is related not only to the component terms and their frequencies but also to the assembly rules and the order of the terms, which means that word-to-word and sentence-to-sentence relationships have an effect on text semantics. The same terms with the same frequencies may express different semantics, as in the two following text snippets: "Tom gave Mary a book as birthday gift" and "Mary gave Tom a book as birthday gift"; traditional text representation cannot express the difference between them [11]. Text representation based on word units or concept units misses the information carried by the relationships between terms, which loses semantic meaning and fails to reflect higher levels of semantic information. From the viewpoint of event semantic understanding, the above two text snippets express two different events.
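The point can be illustrated with a few lines of hypothetical code (not from the paper): a bag-of-words representation maps both snippets to the same vector, while an event tuple that keeps role information distinguishes them.

```python
from collections import Counter

def bag_of_words(text):
    # Order-free representation: only terms and their frequencies survive.
    return Counter(text.lower().split())

s1 = "Tom gave Mary a book as birthday gift"
s2 = "Mary gave Tom a book as birthday gift"
print(bag_of_words(s1) == bag_of_words(s2))  # True: indistinguishable

# An event keeps role information, so the two snippets differ.
event1 = ("give", "Tom", "Mary", "book")   # (trigger, agent, recipient, object)
event2 = ("give", "Mary", "Tom", "book")
print(event1 == event2)  # False
```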
In various kinds of texts that contain many events, such as novels, operas, biographies, and news reports, traditional text representation does not pay enough attention to events or represent events and relations appropriately. From the perspective of semantic understanding, linguists think that a text is not only a group of attributes and concepts but also a description of a series of events at a higher granularity; under this view, these texts should be regarded as a group of events linked by relations, which is much closer to the laws of human cognition and understanding. From the perspective of text formation, elementary language units (words, concepts, sentences, etc.) form sentences by certain linguistic rules, sentences form sequences of sentences or paragraphs, and these in turn form the text and express semantic meaning and theme. Taking the event as the semantic unit of text and the text component terms as event elements not only solves the "bag of words" problem but also expresses a higher level of semantic information.
Event-Oriented Text Representation.
(Although the definitions of event are not unified across applications, most of them emphasize two kinds of event attributes: an action (verb or gerund) and the characteristics of the action (participant, location, time, etc.), so most research is centered on verbs and the attributes of verbs. In this paper, an attribute of an event is called an event element, or element for short.) From the current literature we have consulted, little research has been done on event-oriented text representation; the related work mainly includes the following.
Feng [12] proposed incident threading to represent English news reports at the sentence level. The texts that describe the occurrence of a real-world happening are merged into a news incident, and incidents are organized in an incident threading by dependencies of predefined types. However, it does not do well in representing Chinese texts.
Glavaš and Šnajder [13] proposed an event-based text representation; however, it only models the temporal relation. Zhao-Man and Zong-Tian [14] proposed the event lattice to represent narrative texts based on concept lattices. In the lattice, texts are the objects, events are the attributes, and a binary relation is used to judge whether an event belongs to a text. Although the lattice has precise mathematical properties, its descriptive power is weak, lacking the ability to express rich relations. Obviously, an event lattice is meaningless for a single text: it is more suitable for representing inclusion between a group of texts and events than the relations between events within a text.
Jian-fang and Yun-yu [15] expounded the idea of event-based text representation in the paper "The Research on Event-Oriented Text Representation." That paper discussed the feasibility and adaptability of event-based text representation for Chinese news reports in terms of genre and text arrangement. However, it oversimplified the relations between events, so its representational power is weak. Thus many issues still need further study.
Extracting events is the most important part of event-based text representation. The three main approaches to extracting events are data-driven [16], knowledge-driven [17], and hybrid [18]. The accuracy is about 70 percent according to ACE (Automatic Content Extraction). This paper uses a prior-knowledge-guided approach.
Constructing Event Network of Chinese Text in the Field of Emergencies
Our experimental corpus, CEC (Chinese Event Corpus), is collected from the Internet; its texts can be divided into five categories, namely earthquake, fire, traffic accident, terrorist attack, and food poisoning, according to the classification system of news reports about emergency events [19]. Up to now, there are 500 texts in CEC, and 300 of them have human-annotated events and relations. Some rules have been discovered from the annotations using mining technology, and KBR-EM (a knowledge base of rules of event manifestation) has been constructed on CEC.
The verb plays an important role in semantic understanding; it is also the core of an event. Wherever there is a verb, it involves the maker and/or receiver of an action, and certain regular collocation relationships are established between the action and the involved entities; on this basis, language forms various basic syntactic configurations, which then explain the construction of statements, the relationships of vocabulary, and so forth. By annotating events in CEC, we find that an event corresponds to a verb or gerund, that 83% of these verbs or gerunds involve one or two entities, and that different text typologies affect the layout of events. The relations between events in text appear as follows: some are contained in the verb of a sentence; some are expressed by conjunctions (many conjunctions virtually signal non-taxonomic relations between events, e.g., "because" and "therefore" indicate causation); and some are implied by the order of events (such as the following relation), since experiments show that two events will appear successively in text with high probability if there is a relation between them in reality. Our experiments show that events and relations meeting the above findings can cover 85% of the entire text. Furthermore, following and causation are the most numerous relations, accounting for 81% of all relations. Statistics of the annotation are displayed in Table 1, where coverage of text is the ratio of event-containing sentences to total sentences. Thus it can be seen that event-based text representation can express text information appropriately.
Guided by the KBR-EM, we modified existing NLP tools (such as a tokenizer, part-of-speech tagger, syntactic analyzer, and HowNet); all of the programs are implemented in Java. A text is processed by word segmentation and POS tagging, syntactic analysis and grammatical component tagging, identification of sentences and sentence components, and mapping of sentences or sentence components to events or event elements, with verbs and gerunds regarded as event triggers and stop-use verbs, such as high-frequency verbs (be, do, have, etc.) and subjective verbs (feel, believe, etc.), removed. Events triggered by stop-use verbs are stop-use events; stop-use events also include future events and negative events, triggered by future-tense and negative-form verbs, respectively. Stop-use events should not be included in the event network of a text. The major components associated with the trigger's action become the other elements (time, place, subject/predicate participant, etc.) of the event.
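The trigger-filtering step can be sketched as follows. This is a simplified Python illustration, not the authors' Java implementation: it assumes a pre-tagged sentence, a hypothetical stop-verb list, and skips the syntactic analysis that assigns roles to the elements.

```python
STOP_VERBS = {"be", "do", "have", "feel", "believe"}  # hypothetical stop-use verbs

def extract_events(tagged_sentence):
    """Extract candidate events from a POS-tagged sentence.

    tagged_sentence: list of (token, pos) pairs, where pos 'V' marks
    verbs/gerunds. Each non-stop verb triggers one event whose elements
    are, crudely, the non-verb tokens of the sentence.
    """
    events = []
    for token, pos in tagged_sentence:
        if pos == "V" and token not in STOP_VERBS:
            elements = [t for t, p in tagged_sentence if p != "V"]
            events.append({"trigger": token, "elements": elements})
    return events

sent = [("fire", "N"), ("broke_out", "V"), ("in", "P"), ("warehouse", "N")]
print(extract_events(sent))
# [{'trigger': 'broke_out', 'elements': ['fire', 'in', 'warehouse']}]
```

In the real pipeline the elements would be labeled (time, place, participant) using the parse, rather than collected as a flat list.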
For the identified events, an electronic dictionary and ontology are used: the event trigger is mapped to a concept and concept climbing is performed. Events are then clustered and an event hierarchy is generated based on the climbed concepts, from which taxonomic relations between events are identified. According to the conjunctions and other syntactic components of the sentences from which events are extracted, and consulting the findings on relations mentioned above, non-taxonomic relations between events are identified.
After identifying events and relations, the event network is constructed as follows. The events of the text are arranged in a special directed graph. A named edge from event A to event B means that there is a relation between them in the text, either taxonomic (A is a B, forming a multiple-inheritance diagram) or non-taxonomic, such as causation (A leads to the happening of B), following (A precedes B in time), or composition (A is a part of B). If there is more than one relation between event A and event B, each relation is linked to its own edge.
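Such a network is a directed multigraph with labeled edges. The following minimal sketch (event names and relations invented for illustration) shows one straightforward way to hold it in memory:

```python
class EventNetwork:
    """Directed multigraph: multiple named edges allowed between two events."""

    def __init__(self):
        self.events = set()
        self.edges = []  # list of (source, relation, target) triples

    def add_relation(self, src, relation, dst):
        self.events.update((src, dst))
        self.edges.append((src, relation, dst))

    def relations_between(self, src, dst):
        return [r for s, r, d in self.edges if s == src and d == dst]

net = EventNetwork()
net.add_relation("short_circuit", "causation", "fire")
net.add_relation("fire", "following", "evacuation")
net.add_relation("fire", "composition", "warehouse_accident")
print(net.relations_between("short_circuit", "fire"))  # ['causation']
```

Storing edges as (source, relation, target) triples keeps parallel edges distinct, matching the rule above that each relation between two events gets its own edge.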
Experiment and Evaluation of Representation Quality
Representation quality measures whether a text representation method can represent the information of the original text appropriately and properly. The paper evaluates the representation quality of the event network with the event recall rate (ER), event precision rate (EP), relation recall rate (RR), and relation precision rate (RP).
To compare events and relations, the paper specifies the following rules: (1) Two events are identical if and only if the corresponding event elements contained in each event are identical.
Evaluating the event sets of the event networks of texts in the field of emergencies, as shown in Figure 1, the average recall and precision rates are 82% and 88%, respectively. The evaluation of the relation sets is shown in Figure 2; the average recall and precision rates are 76% and 85%, respectively. The previous method [15] constructs the event network from a corpus tagged with events, causation, and following relations, and its resulting event network adds adjacency and event-element-sharing relations. Its event recall and precision rates will be higher, and the events contained in its event network can be viewed as complete and correct in theory. According to the findings described in Section 3, its non-taxonomic relation recall rate should be at least 81%. However, there is a large amount of redundancy and error in the adjacency and event-element-sharing relations; for example, an adjacency relation could actually be a following relation or no meaningful relation at all, and the event-element-sharing relation is too general to specify a relation. So the relation precision rate of that method is far inferior to that of this paper.
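The four measures can be computed by set comparison against the gold annotation. The sketch below uses exact tuple equality as the match criterion, a simplification of the element-wise identity rules above; the event tuples are invented for illustration.

```python
def recall_precision(extracted, gold):
    """Recall and precision of an extracted set against a gold set."""
    extracted, gold = set(extracted), set(gold)
    hits = len(extracted & gold)
    recall = hits / len(gold) if gold else 0.0
    precision = hits / len(extracted) if extracted else 0.0
    return recall, precision

gold_events = {("collapse", "building"), ("rescue", "team"), ("injure", "people")}
found_events = {("collapse", "building"), ("rescue", "team"), ("report", "media")}
er, ep = recall_precision(found_events, gold_events)
print(er, ep)  # 0.666..., 0.666...
```

The same function applied to relation triples yields RR and RP.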
Incident threading [12] does well in representing preprocessed, grouped English news texts; however, it is less suitable for Chinese text than the event network. An evaluation of the two representation methods is shown in Table 2.
Event Network Model for Chinese Text
An event network contains one or more events connected by relations. The events of the network are arranged in a graph, and two events are directly connected by one or more directed/undirected edges (the number of edges depends on the number of relations between the two events). This text representation method is called the event network. By constructing the event networks of a large number of texts, we observe that the event network differs from a general directed graph: there is information on each node and each edge, and multiple edges may exist between two nodes. The formal event network model is defined below by generalizing and abstracting instances of event networks.
The event network can be seen as a directed graph. It not only keeps the semantic information of the text and represents events and the relations between events but also reflects the importance, dynamic behavior, and state changes of events. Compared with traditional text representations such as VSM, the salient advantage of the event network is that it implies the correlations among the linguistic units of the text within its events, which not only solves the "bag of words" problem but also reflects semantics at a higher granularity. Meanwhile, the relations link events together, expressing the logical dependencies of things and reflecting the occurrence and development process of events.
The event network is a directed graph with information on its nodes and edges. Using all of this information, various calculations can be performed on it by considering properties of directed graphs; for example, an event network can be clustered according to the similarity of events, partitioned into a hierarchical structure with different threshold values, or reduced according to the importance of events while preserving other properties. The similarity of texts can be calculated by matching their individual event networks, and knowledge can be obtained by mining frequent co-occurring event elements across multiple event networks. These calculations must respect not only the properties of graphs but also the meaning of the information on the nodes and edges of the event network, so the unique properties and a special computation model of the event network need to be researched. By establishing abstract operations on event networks, some problems can be solved by mathematical methods, which provides a good foundation for semantic calculation and will support event-based text information processing.
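For instance, a crude text similarity along these lines can be obtained from a weighted Jaccard overlap of two networks' events and labeled edges. This is an illustrative stand-in, not the matching procedure the authors propose; the event names, edge triples, and weights below are all invented.

```python
def jaccard(a, b):
    # Convention: two empty sets are considered fully similar.
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def network_similarity(net1, net2, w_events=0.5, w_edges=0.5):
    """Weighted Jaccard similarity over events and (src, relation, dst) edges."""
    return (w_events * jaccard(net1["events"], net2["events"])
            + w_edges * jaccard(net1["edges"], net2["edges"]))

n1 = {"events": {"fire", "evacuation", "rescue"},
      "edges": {("fire", "following", "evacuation")}}
n2 = {"events": {"fire", "evacuation"},
      "edges": {("fire", "following", "evacuation")}}
print(network_similarity(n1, n2))  # 0.5 * 2/3 + 0.5 * 1 = 0.833...
```

A full matching procedure would also weight events by importance and allow partial (element-level) event matches rather than exact equality.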
Conclusions and Future Work
This paper introduced the need for event-based text representation. The formal event network model for Chinese text was defined by abstracting instances of event networks on CEC texts. The difference between the event network and traditional text representations is that the event network keeps the semantic information of the text, no longer regards the text as an aggregation of independent words, and solves the "bag of words" problem. In addition, it reflects the relations among events and the importance and dynamic behavior of events. Our experiments demonstrate the feasibility, adaptability, and advantages of the event network as a text representation method.
In future work, we will study the event network using graph theory, clustering, formal concept analysis, granular computing, and so forth, taking the particularities of the model into account. In this way, various text applications can be addressed by mathematical methods, providing theoretical models and method support for semantics-based text information processing.
Table 1: Statistics of annotated texts.
(2) Two relations are identical if and only if the corresponding items contained in each relation tuple are identical. For a taxonomic relation (e_1, e_2), e_1 is the super-event (upper event) and e_2 is the sub-event (lower event). For a directed non-taxonomic relation r⟨e_1, e_2⟩ and an undirected non-taxonomic relation r(e_1, e_2), r is the name of the relation, and e_1 and e_2 are the two events connected by the relation.
Table 2: Comparison of the event network and incident threading.
Coexisting pseudogap, charge transfer gap, and Mott gap energy scales in the resonant inelastic x-ray scattering spectra of electron-doped cuprates
We present a computation of Cu K-edge resonant inelastic x-ray scattering (RIXS) spectra for electron-doped cuprates which includes coupling to bosonic fluctuations. Comparison with experiment over a wide range of energy and momentum transfers allows us to identify the signatures of three key normal-state energy scales: the pseudogap, charge transfer gap, and Mott gap. The calculations involve a three-band Hubbard Hamiltonian based on Cu $d_{x^2-y^2}$ and O $p_x$, $p_y$ orbitals, with a self-energy correction which arises due to spin and charge fluctuations. Our theory reproduces characteristic features, e.g., gap collapse, large spectral weight broadening, and spectral weight transfer as a function of doping, as seen in experiments.
Cuprates are widely believed to be charge-transfer insulators [1], with a Mott gap between Cu d_{x^2-y^2} orbitals much larger than the charge transfer gap between Cu-d and O-p orbitals. The upper (UHB) and lower (LHB) Hubbard bands of the Cu orbitals are intimately related to the antibonding and bonding bands of the three-band model, and it is important to understand how the strong correlations of Mott physics modify these bands relative to the conventional LDA-based picture. Meanwhile, a third energy scale, the pseudogap scale, has been found experimentally, and its origin and relation to the other two scales continue to be a matter of intense debate. Here we model electron-doped Nd_{2-x}Ce_xCuO_{4±δ} (NCCO), for which the pseudogap is well described as a competing antiferromagnetic (AF) order.
Experimental access to the LHB and/or the bonding band has proven difficult, and the corresponding optical interband transitions have not been observed. Moreover, while the antibonding d_{x^2-y^2} band lies at the top of the d-bands, the bonding d_{x^2-y^2} band is found in LDA to lie at the bottom of a veritable 'spaghetti' of d-bands and their associated oxygen orbitals, nearly 6 eV below the Fermi level, as seen in Fig. 1. It is therefore difficult to extract this band from ARPES data. On the other hand, RIXS is a local probe that directly rearranges the Cu and O orbitals, and as such can provide selective access to the bonding bands. Indeed, RIXS experiments report a strong feature in most cuprates in the 6-8 eV range which has been associated with this band [2-7]. In this article we show that by incorporating the strong renormalization of the near-Fermi-energy bands by magnon fluctuations [8,9], the high-energy RIXS features are indeed consistent with transitions from the LHB to the UHB. This resolves a puzzling discrepancy in earlier calculations [10], which were unable to fit both the low- and high-energy parts of the spectrum. We also capture another important feature of the spectrum, the realistic broadening, which arises from the strong coupling to bosonic quasiparticles.
Remarkably, we find that all three energy scales are strongly influenced by the Hubbard U. The three energy scales are the following: (1) the Mott gap scale, which results from transitions from the LHB to the UHB; (2) the charge transfer gap scale, which persists as a residual feature into the overdoped regime; and (3) the pseudogap or AF gap scale, which collapses at a quantum critical point near optimal doping. For convenience, we label the AF-split subbands of the antibonding band as the lower (LMB) and upper (UMB) magnetic bands. Cuprate magnetism naturally separates into two regimes: at high energies, Mott physics produces localized spin singlets on each copper site, splitting the Cu dispersion by an energy ∼U into upper and lower Hubbard bands. In the presence of hybridization with oxygens, the LHB [UHB] becomes identified with the bonding [antibonding] band of the three-band model. At lower energies, these singlets interact on different sites, leading to magnetic gaps in both the UHB and LHB of magnitude ∼m_d U via more conventional Slater physics associated with AF order, where m_d is the magnetization on Cu. The Mott physics arises as an emergent phenomenon. When the AF gaps open at half filling, hybridization between Cu and O is mostly lost. For instance, in the antibonding band, electrons in the UMB have mainly Cu character, while the opposite happens in the bonding band [10]. Consequently, the states near the top of the lower magnetic band are of nearly pure oxygen character [10]. Thus, due to strong correlations, the 'charge-transfer' gap at half filling coincides with the AF gap. At finite doping, these two features separate in energy: the AF gap collapses rapidly [11], while a residual charge-transfer gap persists in optical spectra at high energies due to strong magnetic fluctuations, closely related to the high energy kink (HEK), or 'waterfall' effect, seen in photoemission [8,12]. Here we show that this residual charge-transfer gap is also present in RIXS.
arXiv:1202.1599v1 [cond-mat.str-el] 8 Feb 2012

In K-edge RIXS the incident x-ray excites a Cu 1s → 4p transition with an intermediate-state shakeup involving mainly Cu d_{x^2-y^2} and O p states. Within the RPA framework, the RIXS cross section for this process takes the form given in Refs. [10, 13, 14]. There, ω_i is the initial photon energy (taken to be −5 eV) and ω and q are, respectively, the energy and momentum transferred in the RIXS process; w(ω, ω_i) contains all the matrix-element information of the initial- and final-state transition probabilities [10]; N is the total number of Cu atoms; and R_μ is the position of the μ-th orbital present in the intermediate state.
The function Y is built from the charge operator. It is straightforward to show that Y can be calculated as the convolution between the spectral weights (A) of the filled and empty states [13,15], where f(ω) is the Fermi function and σ is the spin index. The prime on the k-summation means that the summation is restricted to the AF zone. RIXS spectra are calculated using Eq. 2, in which the spectral weight A is computed from a three-band Hubbard model with the Hamiltonian of Eq. 3, where ∆_{d0} is the (bare) difference between the onsite energy levels of Cu d_{x^2-y^2} and O p_σ, t_CuO is the copper-d to oxygen-p hopping parameter, t_OO is the oxygen-oxygen hopping parameter, and U (U_p) is the Hubbard interaction parameter on Cu (O). n_j = d†_j d_j and n_i = p†_i p_i are the number operators for Cu-d and O-p electrons, respectively. The equations were solved at the Hartree-Fock (HF) level to obtain a self-consistent mean-field solution.
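Consistent with the symbol glossary above, Eq. 2 (the convolution for Y) and Eq. 3 (the three-band Hamiltonian) take the standard forms below; this is a hedged reconstruction from the stated definitions, not copied from the published article, and should be checked against the original.

```latex
Y(\mathbf{q},\omega) = {\sum_{\mathbf{k},\sigma}}' \int d\omega'\,
  f(\omega')\,\bigl[1 - f(\omega' + \omega)\bigr]\,
  A_\sigma(\mathbf{k}, \omega')\, A_\sigma(\mathbf{k}+\mathbf{q}, \omega' + \omega)
  \tag{2}

H = \Delta_{d0} \sum_{j,\sigma} n_{j\sigma}
  + t_{CuO} \sum_{\langle ij \rangle, \sigma}
      \bigl( d^\dagger_{j\sigma} p_{i\sigma} + \mathrm{h.c.} \bigr)
  + t_{OO} \sum_{\langle ii' \rangle, \sigma}
      \bigl( p^\dagger_{i\sigma} p_{i'\sigma} + \mathrm{h.c.} \bigr)
  + U \sum_j n_{j\uparrow} n_{j\downarrow}
  + U_p \sum_i n_{i\uparrow} n_{i\downarrow}
  \tag{3}
```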
Hartree corrections lead to a renormalized Cu-O splitting parameter ∆ = ∆_{d0} + U n_d/2 − U_p n_p/2, where n_d (n_p) is the average electron density on Cu (O) [16]. The AF order splits the three bands into six bands, as seen in Fig. 1(a). Since self-energy corrections are explicitly included, we use bare LDA-like dispersions [16] in the three-band model rather than the dressed, experimental dispersions [10]. Thus the hopping parameters are taken from LDA, while the interaction parameters U and ∆_{d0} are adjusted to optimize agreement between the antibonding band splitting and earlier one-band results [10, 16-19]. When this is done, we find that ∆_{d0} is small and negative while U has a very weak doping dependence [20].
The renormalization of the antibonding band due to bosonic fluctuations is calculated via a self-energy based on the QP-GW formalism [21]. Here Γ is the vertex correction and W = U²χ is the interaction term, which includes both spin and charge fluctuations. AF order is included along the lines of Ref. 12, where the effective AF gap is kept the same as in the one-band model. Finally, the self-energy Σ is incorporated into the three-band dispersion via Dyson's equation G⁻¹ = G₀⁻¹ − Σ, and A is computed from the dressed G. Our calculation includes only fluctuations associated with the band closest to the Fermi level, which produces negligible broadening for ω > 4 eV; therefore, for higher-energy bands we include a constant broadening, Σ = 0.5 eV, consistent with the ARPES data [22].

Figure 1 shows how self-energy effects modify the dispersion of the various bands of the three-band AF model for x = 0.14, comparing bare (a) and dressed (c) bands. The imaginary part of the self-energy, plotted in Fig. 1(b), attains a maximum around 1.7 to 2 eV, which leads to a strong broadening of the spectral weight in this energy range, both below and above the Fermi level (denoted by the black line), producing a characteristic kink or 'waterfall' effect in the dispersion. We will see in connection with Fig. 2 below that this 'waterfall' effect leads to a significant broadening in the RIXS spectrum, since the spectrum of Eq. 2 involves a convolution of the filled and empty states. Fig. 1(c) also shows that the self-energy softens the low-energy bands nearest the Fermi level. This renormalization should also show up in the lowest branch of the RIXS spectrum, but it is restricted to very low energies and does not appear prominently in Fig. 2.

Figure 2 shows the calculated RIXS spectra of NCCO for x = 0 and x = 0.14, reflecting the modulation of the spectral intensities of Fig. 1 via matrix element effects, which are well known to be important in various highly resolved spectroscopies.
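As a toy illustration of the last step (not the paper's QP-GW calculation), the dressed spectral function follows from Dyson's equation as A(k, ω) = −(1/π) Im[1/(ω − ε_k − Σ(ω))]. The band energy and the constant self-energy below are invented for illustration only:

```python
import numpy as np

def spectral_function(omega, eps_k, sigma):
    """A(k, w) = -(1/pi) Im G, with G = 1 / (w - eps_k - Sigma(w))."""
    g = 1.0 / (omega - eps_k - sigma)
    return -g.imag / np.pi

omega = np.linspace(-3.0, 3.0, 601)       # energy grid (eV)
eps_k = 0.5                               # toy band energy at some k (eV)
sigma = -0.1j * np.ones_like(omega)       # constant broadening, Im(Sigma) = -0.1 eV
a = spectral_function(omega, eps_k, sigma)

# The resulting Lorentzian peak sits at the quasiparticle energy eps_k:
print(omega[np.argmax(a)])  # ~0.5
```

With an energy-dependent Im Σ peaking near 1.7 to 2 eV, as in Fig. 1(b), the same construction broadens the spectral weight strongly in that range, which is the origin of the 'waterfall' broadening discussed above.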
[23][24][25] Frames (a) and (d) include AF order but without self-energy corrections, whereas the calculations in frames (b) and (e) include the self-energy. The high intensities at energies around 5.6 eV involve transitions from the lower Hubbard band to the unoccupied states of the antibonding band, reflecting the Mott gap feature. This '6 eV' feature is present for all dopings. At half filling, in frames (a) and (b), the high intensities around 2 eV arise from transitions within the antibonding Cu-O band across the AF gap. This gap collapses with doping, and as a result we find a smaller AF gap at 14% electron doping in frames (d) and (e), close to the QCP, consistent with earlier results [14].
A key result is that the self-energy produces a realistic broadening comparable to that observed experimentally. In the RIXS calculations, the 6 eV feature is the most intense in the spectrum, consistent with most early experiments on a variety of cuprates [2,[26][27][28][29][30], but more recent experiments [31], including those of Figs. 2(c,f) [32], employ a range of ω_i where the 6 eV feature is suppressed and the lower-energy features can be more easily probed. Except for this feature, most features in the calculated RIXS intensities follow the experimental trends. In the undoped cuprate in Fig. 2(b), we observe a broad peak at Γ around 2.5 eV, with the intensity decreasing around the zone corner (π, π) while it remains strong around (π, 0). A similar level of agreement is found for 14% electron doping in Fig. 2, panels (d)-(f) [33]. The black dots in panels (b) and (e) represent the peaks of the experimental spectra, reproducing the blue, black, and purple dots of panels (c) and (f). The agreement is remarkable for both dopings, x = 0 in frames (b) and (c) and x = 0.14 in frames (e) and (f) [34]. Results for x = 0.09 are similar, and are omitted for brevity.
We comment here on the three energy scales. While the dispersions which follow from Eq. 3 are rather complicated, we find numerically that the Mott gap is approximately equal to U and the AF gap to U m_d, as illustrated by arrows in Fig. 2(a). Also, the charge-transfer energy is the difference between the average oxygen energy and the upper Cu band [1], which we find to be ∼ U/2. Thus all three energy scales are controlled by U. In our calculation the 6 eV feature represents transitions across the true Mott gap, and the good agreement with experiment indicates that RIXS can be used to probe this important scale and how it is modified by hybridization with oxygens; for instance, is the bonding band split, as our calculations suggest? This feature will be discussed further below when we describe fits to individual q-cuts of the RIXS spectra. In optical spectra at half filling, the ∼2 eV charge-transfer gap is indistinguishable from the AF gap [9]. At finite doping these two features separate, with the AF gap reflected as a mid-infrared peak which collapses rapidly with doping, while a residual charge-transfer gap persists as a weak feature near 2 eV in the strongly doped regime. A similar evolution is found in RIXS. The RIXS leading edge follows the doping dependence of the AF gap [10,14], while in Fig. 3 we show that a residual charge-transfer gap feature can be seen in the RIXS spectra near the Γ point. Our three-band calculation successfully reproduces the experimental finding that the magnetization scales with the AF gap [35,36]. Our calculated three-band RIXS spectrum, modified by the self-energy, clearly displays the broad feature around 2 eV at Γ in panel (b), in good agreement with the experimental results in panel (a). Panels (c) and (d) display RIXS intensities obtained from theory (blue) as well as experiment (red dots) as a function of energy at Γ, compared with the optical spectra [37] (green dashes) for x = 0.10 and x = 0, respectively.
The peak of experimental intensity is shifted towards slightly higher energy than the theoretical intensity, but the broadening is comparable in both cases.
For more quantitative estimates of the broadening, Fig. 4 compares theoretical (blue solid) and experimental (red dashed line) RIXS intensities as a function of ω for several constant q-cuts. There is an overall good agreement in peak positions as well as lineshapes and broadening for all momenta. In particular, panel (b) shows the high energy RIXS feature from Ref. [2]. The good agreement with theory strongly suggests the identification of this feature with the LHB in NCCO. A similar peak is found in all cuprates, as would be expected for Mott physics.
In conclusion, we find that RIXS is a suitable probe across all energy scales, including the AF gap, charge-transfer, and Mott physics. We provide a three-band model that is capable of explaining the experimental RIXS spectra over the entire energy and doping range. We find a good correspondence between the RIXS spectra at Γ and the optical spectrum, but RIXS has the additional advantage of full momentum-space resolution. While we have concentrated on the electron-doped cuprates, our model should apply equally well to the hole-doped case.
We thank A. Lanzara and A. Tremblay for very useful conversations. This work is supported by the U.S. Department of Energy, Office of Science, Division of Materials Science and Engineering grants DE-FG02-07ER46352 and DE-AC03-76SF00098, and benefited from the collaboration supported by the Computational Materials Science Network (CMSN) program under grant number DE-FG02-08ER46540, and from the allocation of supercom-
Predicting multiple sclerosis severity with multimodal deep neural networks
Multiple Sclerosis (MS) is a chronic disease developed in the human brain and spinal cord, which can cause permanent damage or deterioration of the nerves. The severity of MS disease is monitored by the Expanded Disability Status Scale, composed of several functional sub-scores. Early and accurate classification of MS disease severity is critical for slowing down or preventing disease progression via applying early therapeutic intervention strategies. Recent advances in deep learning and the wide use of Electronic Health Records (EHR) create opportunities to apply data-driven and predictive modeling tools for this goal. Previous studies focusing on using single-modal machine learning and deep learning algorithms were limited in terms of prediction accuracy due to data insufficiency or model simplicity. In this paper, we propose using patients' multimodal, longitudinal EHR data to predict multiple sclerosis disease severity in the future. Our contribution has two main facets. First, we describe a pioneering effort to integrate structured EHR data, neuroimaging data, and clinical notes to build a multimodal deep learning framework to predict a patient's MS severity. The proposed pipeline demonstrates up to a 19% increase in the Area Under the Receiver Operating Characteristic curve (AUROC) compared to models using single-modal data. Second, the study also provides valuable insights regarding the amount of useful signal embedded in each data modality with respect to MS disease prediction, which may improve data collection processes. Supplementary Information The online version contains supplementary material available at 10.1186/s12911-023-02354-6.
Introduction
Multiple sclerosis (MS) is a neurodegenerative condition characterized by potential disability, affecting the central nervous system comprising the brain and spinal cord. Estimations based on a ten-year accumulation up until 2010 reveal a prevalence of over 700,000 cases of MS in adult individuals within the United States [1]. Recent advancements in MS research have unveiled a significant neuron count loss of up to 39% in patients who succumbed to MS compared to those unaffected by the disease [2]. Although the human brain possesses inherent self-repair mechanisms and regenerative potential capable of addressing brain plaques [3], the extent of such abilities remains notably limited. Hence, timely intervention to prevent or decelerate brain damage assumes critical importance in MS treatment [4]. Accurate grading of MS severity plays a vital role in determining effective treatment approaches, with scoring systems widely employed for this purpose. One such commonly employed ordinal scoring system is the EDSS [5], frequently utilized by healthcare providers to assess clinical disability in MS. This comprehensive scale encompasses diverse functional systems, including pyramidal functions (muscle strength, tone, and reflexes), cerebellar functions (coordination and balance), brainstem functions (eye movements, speech, and swallowing), sensory functions (light touch, pain, and vibratory sense), bowel and bladder functions, visual functions, cerebral functions (cognition), and ambulation. Building upon the EDSS, Roxburgh et al.
proposed the Multiple Sclerosis Severity Score, facilitating the determination of MS disease progression using single-assessment data, particularly in cases where only one evaluation is available throughout the course of the disease [6]. Several milestones defined within the EDSS score have commonly been adopted to delineate different stages of the MS disease course. EDSS 4 (significant disability but capable of walking without aid or rest for 500 m), EDSS 6 (requires unilateral assistance to walk approximately 100 m with or without resting), and EDSS 7 (ability to walk no more than 10 m without rest while relying on support from a wall or furniture) serve as notable milestones frequently employed in the study of MS disease severity.
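As a minimal sketch (our own helper, not the authors' code; key names are made up), the milestones above can be encoded as binary classification labels of the kind used later in the paper:

```python
# Map an EDSS score to the binary milestone labels discussed above.
# Strict ">" thresholds follow the paper's targets (EDSS > 4, > 6, > 7).

def edss_milestones(edss: float) -> dict:
    """Return binary indicators for the commonly used EDSS milestones."""
    if not 0.0 <= edss <= 10.0:
        raise ValueError("EDSS is an ordinal scale ranging from 0 to 10")
    return {
        "edss_gt_4": edss > 4.0,  # significant disability, walks unaided ~500 m
        "edss_gt_6": edss > 6.0,  # needs unilateral assistance for ~100 m
        "edss_gt_7": edss > 7.0,  # walks no more than ~10 m with support
    }
```

For example, `edss_milestones(4.5)` flags only the first milestone, matching the ordering of the scale.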
The evaluation of a patient's EDSS score requires the expertise of a well-trained specialist to ensure accurate assessment, which limits its applicability to clinics specialized in MS disease. Several research studies have endeavored to tackle this challenge by employing machine learning or deep learning models. For instance, Pinto et al. proposed the utilization of machine learning models to predict MS progression based on the clinical characteristics observed during the initial five years of the disease [7]. Zhao et al. employed a support vector machine (SVM) classifier along with demographic, clinical, and MRI data from the first two years to forecast patients' EDSS scores at five-year follow-ups [8]. Sacca et al. explored various machine learning models, such as Random Forest, Support Vector Machine, Naive Bayes, K-nearest-neighbor, and Artificial Neural Network, and employed functional MRI-derived features to classify MS disease severity [9]. Narayana et al. proposed the adoption of the VGG-16 convolutional neural network (CNN) to predict enhancing lesions in MS patients using non-contrast MRIs [10]. D'Costa et al. introduced a transformer model named MS-BERT to predict EDSS scores based on patients' neurological consultation notes [11]. Ciotti devised a clinical instrument to retrospectively capture EDSS levels, achieving a Kappa score of 0.80 when comparing captured EDSS scores with actual values [12]. Chase et al. also utilized neurological consultation notes, employing simpler models (Naive Bayes classification) and features (word frequency) [13]. Dekker et al. employed multiple linear regression models on patient brain lesion volumes and their variations over time to predict physical disability [14].
The above studies explored the application of machine learning and deep learning methods; however, they predominantly focused on limited single-modality patient information (such as clinical notes, basic lesion volume information extracted from MRI, or patient clinical characteristics). In recent years, the field of multimodal deep learning has witnessed significant advancements. These advancements primarily revolve around three key research questions: addressing modality heterogeneity, identifying interconnections between modalities, and representing their interactions effectively [15]. Based upon these recent advancements, it is reasonable to posit that leveraging multimodal deep learning approaches can integrate fragmented information from diverse modalities, leading to more accurate predictions of MS disease severity. Hence, this study endeavors to address the question of whether harmonizing the available EHR data modalities collected during patient clinic visits and leveraging longitudinal data can enable more precise prediction of MS severity. We investigate the potential of utilizing patients' MRI images, clinical notes, and structured EHR data, encompassing laboratory tests, vital sign observations, medication prescriptions, and patient demographics, collected during clinic visits, to predict MS disease severity three years ahead.
We propose a multimodal deep neural network architecture capable of leveraging diverse modalities within MS patient EHR data. This includes MRI images, such as pre- and post-contrast T1-weighted images, T2-weighted images, fluid-attenuated inversion recovery images, and proton density images. By harnessing this comprehensive set of modalities, our approach aims to achieve accurate prediction of MS disease severity. In addition, we propose the utilization of patients' longitudinal data for predicting EDSS milestones. This approach acknowledges that evidence regarding patients' MS disease severity is not solely confined to the most recent EHR data but is also abundantly present in the data from previous clinic visits. By incorporating both the current clinic visit and historical EHR data, our proposed multimodal deep neural network surpasses the limitations of using solely cross-sectional data (e.g., utilizing clinical notes from the current visit to predict EDSS scores [11]). Longitudinal data encompasses a wealth of MS disease progression information, surpassing that of cross-sectional data, thereby enhancing the model's ability to generate more accurate predictions of the patient's future status.
This study makes four key contributions.
Materials and methods
In this section, we provide an overview of the patients' data, our neural network architecture, and our techniques for addressing common issues in multimodal data modeling, such as missing data, irregular sampling, and data fusion.
Data description
Our database comprises a comprehensive dataset of 300 patients diagnosed with MS. Table 1 provides a summary of the demographic information of these patients. Each patient's data encompasses three distinct modalities: (1) neuroimaging data, (2) structured EHR data, and (3) clinical notes. The neuroimaging data is stored in NIFTI format and captures the patients' brain images. Most patients have undergone multiple clinical visits, and during each visit, a range of information is recorded in the structured EHR data. This includes laboratory test results, vital signs, prescribed medications, diagnoses, medical procedures, and treatments, which are stored in separate tables.
The clinical notes consist of detailed descriptions provided by physicians during each clinic visit, offering valuable insights into the patient's condition. Our proposed neural network architecture is specifically designed to handle the heterogeneous structure of these databases by learning representations from each modality.
The prediction objective of this research is a classification problem: predicting whether a patient will reach specific milestones on the EDSS within a specified time frame, specifically three years in advance. For all 300 patients, the EDSS was evaluated and recorded by physicians at the end of each clinic visit, and these scores serve as the ground truth labels.
For patients with a follow-up time (i.e., the time interval between the first and last clinic visit) of less than three years, we utilize their data from the first clinic visit to predict the score at the last visit.
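The pairing of an input visit with a label visit roughly three years later, including the short-follow-up fallback just described, can be sketched as follows (an illustrative helper; the paper does not publish its exact pairing logic):

```python
from datetime import date, timedelta

THREE_YEARS = timedelta(days=3 * 365)

def build_example(visits):
    """visits: list of (visit_date, edss) tuples sorted by date.
    Returns (input_visit, label_visit): predict ~3 years ahead; if the total
    follow-up is shorter than 3 years, fall back to using the first visit to
    predict the score at the last visit, as described above."""
    first_date, last_date = visits[0][0], visits[-1][0]
    if last_date - first_date < THREE_YEARS:
        return visits[0], visits[-1]
    # Otherwise pair the first visit with the earliest visit >= 3 years later.
    anchor = visits[0]
    for v in visits[1:]:
        if v[0] - anchor[0] >= THREE_YEARS:
            return anchor, v
    return anchor, visits[-1]
```

The label visit's EDSS would then be thresholded against a milestone (e.g., EDSS > 4) to obtain the binary target.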
Figure 1 illustrates the distributions of patients' ages and EDSS scores. Additionally, Fig. 2 presents the EDSS historical scores of all patients over the course of their disease, offering insights into the progression of their condition.
Brain MRI
A total of 360 MRI images were obtained for the 300 patients included in the study. The imaging studies were conducted using a Philips 3.0T Ingenia scanner (Philips Medical Systems, Best, Netherlands). Multiple MRIs were available for some patients, collected from different clinic visits. The MRI dataset encompasses five distinct sequences: pre-contrast and post-contrast T1-weighted sequences (T1-pre, T1-post), T2-weighted sequences, proton density-weighted sequences (PD), and fluid-attenuated inversion recovery sequences (FLAIR).

Fig. 1 (caption): The histograms of all patients by age; baseline EDSS (at the initial hospital visit); EDSS at the last hospital visit; total hospital visits; years between the first and the last hospital visit; number of hospital visits during which a brain MRI scan was performed.

Fig. 2 (caption): The MS disease progression of all patients. For clear illustration, patients were sorted by the total EDSS increase in their disease course; the trajectory of the top 10% cohort with the largest increase is marked in red, and the remaining 90% cohort is marked in a different color.
All MRI sequences were acquired with a field of view of 256 mm × 256 mm × 44 mm and in the axial plane. To ensure consistency and facilitate analysis, the MRI images underwent several preprocessing steps. First, they were skull-stripped using the Simple Skull Stripping (S3) method [16] and the SRI24 template [17]. Next, a bias correction technique known as N4 Bias Field Correction was applied to adjust the low-frequency intensity variations [18]. Finally, the images were co-registered to a common template (SRI24) using FreeSurfer [19]. A representative example of the MRI sequences for a sample patient is displayed in Fig. 3. These processed MRI images serve as a crucial component of the multimodal dataset, contributing valuable information for the subsequent analysis and prediction tasks.
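The pipeline above ends with intensity normalization (cited as [27] later in the paper). A generic stand-in for that final step, z-scoring within the nonzero, skull-stripped region (the exact method used by the authors may differ), looks like:

```python
import numpy as np

# Sketch of a common last preprocessing step: z-score intensity normalization
# restricted to the nonzero (brain) voxels of a skull-stripped volume.
# This is a generic stand-in, not necessarily the method of Ref. [27].

def normalize_intensity(volume: np.ndarray) -> np.ndarray:
    mask = volume > 0                      # skull-stripped background is 0
    out = np.zeros_like(volume, dtype=np.float64)
    vals = volume[mask]
    out[mask] = (vals - vals.mean()) / (vals.std() + 1e-8)
    return out

# Tiny synthetic "volume": an 8-voxel bright cube inside a zero background
vol = np.zeros((4, 4, 4))
vol[1:3, 1:3, 1:3] = np.random.default_rng(0).uniform(50, 200, size=(2, 2, 2))
norm = normalize_intensity(vol)
```

Restricting the statistics to the brain mask keeps the (uninformative) background from dominating the mean and standard deviation.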
Clinical notes
The patient's clinical notes are documented in unstructured free-text format and provide a comprehensive account of the patient's health status. These notes encompass a range of vital information, including the physician's observations, patient demographics (such as weight, height, and body mass index (BMI)), physiological condition, medical diagnoses, prescribed medications, and administered treatments. To ensure privacy and confidentiality, all clinical notes underwent a rigorous de-identification process, in which any personally identifiable information of both patients and physicians was removed from the dataset. This approach adheres to stringent privacy regulations and safeguards the anonymity of the individuals involved, allowing for secure and ethical analysis of clinical data.
Structured EHR
The patient's structured EHR consists of organized tabular data that encompasses various types of information, including laboratory test measurements (floating-point values), vital sign observations (floating-point values), medication administrations (binary indicators: 0 for not taken, 1 for taken), and demographic information (age: floating-point value; race/ethnicity/gender: binary indicators). The EHR tables are constructed in a standardized format, where each row represents an observational time stamp and the columns represent specific features. It is important to note that the features within each table remain consistent for all patients, while the number of rows may vary depending on the number of observational time points for each patient.
To streamline the EHR tables and facilitate effective neural network training, we apply a time granularity of 4 h for the laboratory test, vital sign, and medication tables. During each 4-hour window, we calculate the average value for each feature if multiple observations are available. This approach serves to reduce table dimensions, eliminate observational noise, and prevent the creation of large and sparse tables that could hinder neural network training. When certain features lack observations within the 4-hour window, the corresponding entry is set to zero.
It is important to maintain the integrity of the data within clinic encounters, ensuring that each 4-hour window falls within a single encounter. This prevents the averaging of feature values from different encounters. For example, if a patient has two clinic encounters, one from 2014-05-05 1:15:00 PM to 2014-05-05 6:00:00 PM, and another from 2015-09-20 9:12:00 AM to 2015-09-20 1:00:00 PM, there would be four rows in each table representing the observations from specific time intervals within each encounter. Rows containing all zeros (indicating no observations for any feature) are deleted. The demographic data of all patients is structured as a fixed-size vector, providing a standardized representation of the demographic variables. Table 2 presents the variables utilized in our dataset.
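The 4-hour windowing and averaging described above can be sketched in a few lines (an illustrative helper, not the authors' implementation; windows here are aligned within the day, and the paper's exact alignment is not specified):

```python
from collections import defaultdict
from datetime import datetime

def aggregate_4h(observations):
    """observations: (timestamp, feature, value) triples from ONE encounter.
    Groups rows into 4-hour windows and averages repeated observations of a
    feature within a window; windows with no observations never appear,
    mirroring the deletion of all-zero rows described above."""
    buckets = defaultdict(lambda: defaultdict(list))
    for ts, feat, val in observations:
        key = (ts.date(), ts.hour // 4)          # 4-hour window index in the day
        buckets[key][feat].append(val)
    return {k: {f: sum(v) / len(v) for f, v in feats.items()}
            for k, feats in sorted(buckets.items())}

obs = [
    (datetime(2014, 5, 5, 13, 15), "heart_rate", 80.0),
    (datetime(2014, 5, 5, 14, 0), "heart_rate", 90.0),   # same window -> averaged
    (datetime(2014, 5, 5, 18, 30), "heart_rate", 100.0), # later window -> own row
]
windows = aggregate_4h(obs)
```

Because the function is applied per encounter, no window can straddle two encounters, matching the constraint in the text.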
Model architecture
We propose a novel multimodal neural network designed to predict a patient's EDSS score.The proposed neural network architecture follows an encoder-decoder schema implemented in a sequential structure, augmented with a self-attention module for improved performance and feature extraction capabilities.
The objective of the encoder network is to effectively process data from various modalities and map them into dense embeddings within a shared high-dimensional latent space. Distinct encoder neural network architectures are employed for each modality, tailored to their respective learning tasks. For instance, CNNs are utilized for image processing and structured EHR data, while language models are employed for handling clinical notes (see Fig. 5). In the following, we introduce the details of each encoder architecture for each modality.
Structured EHR
The encoder network for structured EHR consists of multiple parallel 1-dimensional CNN channels. Each channel within the network follows a homogeneous network structure but incorporates distinct hyperparameters to accommodate EHR tables of varying sizes specific to each patient. This design allows for efficient processing and extraction of meaningful features from the structured EHR data (see Fig. 6).
The structured EHR data of patients comprises multiple tables containing information such as lab tests, vital signs, and medications. These tables are formatted with rows representing observation time points and columns representing specific features. However, the number of rows can vary for different patients and for different tables of the same patient, resulting in irregular spacing along the time axis. This irregular spacing introduces heterogeneity in sampling intervals, posing challenges for analysis.

Fig. 5 (caption): The detailed architecture of one of the encoder channels for processing structured EHR data. The figure shows the lab test channel as an example.

Fig. 6 (caption): The encoder network for our proposed deep neural network.
The irregular sampling issue is a typical challenge in processing structured EHR data with multiple longitudinal features [20]. Traditional methods such as multiple imputation [21] or Gaussian process-based imputation [22] address this issue by performing imputation: the essential idea is to establish a common, regularly spaced time axis for all the features and then impute missing values at these shared time points. Recent advancements have demonstrated that attention networks offer a more effective solution to this problem, yielding superior performance [23]. Such an attention module enables the neural network to adaptively assign distinct attention weights to different time points in a patient's history, thereby effectively handling the irregularity in the sampling intervals. Specifically, the attention weights are computed for each row (representing a time point) through the application of multiple layers of 1D CNNs on the feature dimension. This process results in the generation of a single attention weight for each time stamp.
The computed attention weights collectively form an attention vector, which represents the relative importance assigned to different time stamps. By applying this attention vector to the original input data, the network is able to generate a fixed-dimensional embedding that remains consistent across all patients. This approach ensures that the neural network is able to capture and leverage relevant temporal patterns and dependencies in the data, enabling more accurate and robust predictions.
In each channel, there are stacked 1D convolution layers, followed by a ReLU activation layer and dropout layers. The number of layers varies depending on the number of features in each table (lab, vital, medication, etc.). For the i-th patient, the k-th data table D^i_k of dimension t^i_k × f_k is fed into the k-th channel, where the t^i_k rows represent the time stamps of clinic visits and the f_k columns represent variables. Note that different EHR tables (laboratory tests, vital signs, medications, etc.) have different f_k, and different patients have different numbers of clinic visits t^i_k. Each row of the table is processed through a stack of multiple 1D CNNs (see Fig. 6) and is reduced to a single value (attention weight). The entire table generates an attention weight vector α^i_k of size t^i_k × 1. The attention weights can be viewed as the weight factors of all f_k features at different time points. In the following, we omit the patient index i.
We multiply the attention vector α_k with the input matrix D_k to get the feature map e_k = α_k^⊤ D_k for each table, where e_k is the embedding vector of the k-th table for a certain patient. Specifically, each element of e_k is calculated as e_k[j] = Σ_{t=1}^{t_k} α_k[t] D_k[t, j], and e_k is of size 1 × f_k.
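The attention-weighted collapse of a variable-length table into a fixed-size embedding can be checked numerically (random scores stand in for the 1D-CNN outputs, and the softmax normalization is our assumption, not stated in the text):

```python
import numpy as np

# Sketch of the table-embedding step: per-time-step attention weights alpha_k
# collapse a variable-length table D_k (t_k x f_k) into a fixed-size
# embedding e_k = alpha_k^T D_k of size 1 x f_k.

rng = np.random.default_rng(42)
t_k, f_k = 7, 5                       # 7 time stamps, 5 lab-test features
D_k = rng.normal(size=(t_k, f_k))     # one patient's (toy) lab-test table
scores = rng.normal(size=(t_k, 1))    # stand-in for the stacked 1D-CNN output
alpha_k = np.exp(scores) / np.exp(scores).sum()   # normalize over time stamps

e_k = alpha_k.T @ D_k                 # (1, f_k) fixed-size embedding
```

Whatever the number of visits t_k, the result always has shape (1, f_k), which is what lets patients with different visit counts share one downstream network.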
Image embedding
For the encoder channel dedicated to patient MRI images, we employ a different network structure compared to the structured EHR channels. Specifically, we utilize the ResNet architecture [24] to process the MRI images. Each MRI sequence, namely T1-pre, T1-post, T2, PD, and FLAIR, is individually fed into a corresponding ResNet model. The output of each ResNet model is a fixed-length embedding vector.
Clinical notes embedding
The encoder channel dedicated to processing patient clinical notes employs a Graph Attention Convolution Model (GACN), which takes textual input and generates an embedding for each document [25]. For medical word embeddings, we utilize a pre-trained database trained on PubMed + MIMIC-III [26].
In GACN, the entire document is treated as a word co-occurrence network, where words in the corpus of all patients' documents serve as graph nodes. Additionally, an extra "document node" is introduced, representing the entire document and connected to all other nodes. To capture word co-occurrences, a sliding window mechanism is employed, and the resulting co-occurrences are represented as weighted, directed edges in the graph. This ensures that the word order is preserved within the sliding window, while maintaining meaningful semantics and word co-occurrence counts.
The training process of GACN is based on message passing. Specifically, we define G(V, E) as the graphical network, where V represents the set of nodes and E represents the set of edges. Each node v ∈ V constructs a broadcasting message by aggregating the embeddings of its neighboring nodes (using a multi-layer perceptron). This can proceed in a parallel manner in matrix format, M^t = MLP(A H^t), where H^t ∈ R^{n×d} holds the d-dimensional node features of the n nodes, A ∈ R^{n×n} is the adjacency matrix, and MLP is a multi-layer perceptron network. All nodes update themselves from their own embeddings and all messages from their neighbors using a Gated Recurrent Unit (GRU) network, again in matrix format, H^{t+1} = GRU(H^t, M^t). After T steps, a final self-attention read-out layer is used to aggregate all node embeddings and output a latent vector representing the entire document, u_T = softmax(H^T W_A)^⊤ H^T, where H^T ∈ R^{(n−1)×d} is the final node representation of the n − 1 nodes (with the document node removed) after T time steps, and W_A is a network parameter (a dense layer).

Therefore, u_T ∈ R^d is the final representation of the document, i.e., an aggregation of all node features, which is fed into a classification layer for document classification.
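A toy version of this message-passing scheme can be written in a few lines (purely for illustration: the MLP is reduced to one ReLU layer and the GRU is replaced by a single update gate; all sizes and weights are made up):

```python
import numpy as np

# Toy sketch of T-step message passing with an attention read-out:
# message M = "MLP"(A H), gated update standing in for the GRU, then a
# softmax-attention aggregation over the word nodes.

rng = np.random.default_rng(0)
n, d, T = 6, 8, 3                        # 5 word nodes + 1 document node
A = (rng.random((n, n)) < 0.4).astype(float)   # toy word co-occurrence adjacency
A[-1, :] = A[:, -1] = 1.0                # document node connects to all nodes
H = rng.normal(size=(n, d))              # initial node embeddings
W_m = rng.normal(size=(d, d)) / np.sqrt(d)
W_z = rng.normal(size=(d, d)) / np.sqrt(d)
W_a = rng.normal(size=(d, 1))

for _ in range(T):
    M = np.maximum(A @ H @ W_m, 0.0)     # message: one-layer "MLP"(A H)
    z = 1.0 / (1.0 + np.exp(-(M @ W_z))) # update gate (simplified GRU stand-in)
    H = (1.0 - z) * H + z * np.tanh(M)

H_words = H[:-1]                          # drop the document node for read-out
attn = np.exp(H_words @ W_a)
attn /= attn.sum()                        # softmax over nodes
u_T = (attn * H_words).sum(axis=0)        # document representation, shape (d,)
```

The read-out vector u_T plays the role of the document embedding that is passed to the classification layer.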
Multi-modality data fusion
Multimodal medical data often exhibit inherent logical relationships. For instance, vital signs and laboratory tests contribute to the diagnosis, which in turn determines the appropriate procedures and medications. Some information remains constant over time, such as demographics, while other information evolves dynamically.
To take advantage of the intricate interplay among various types of medical information, we have designed a data fusion pipeline that leverages the causal relationships between variables: vital signs, laboratory tests, and MRI scans lead to the diagnosis, which further influences prescription and procedure decisions, ultimately resulting in medication administration. This pipeline is implemented using a bidirectional GRU-based decoder, facilitating the integration of time-varying information. The latent representation vectors obtained from each encoder network channel are combined, in the above order, into a structured matrix E (illustrated in the left part of Fig. 7). If the lengths of the vectors differ, zero-padding is applied to ensure a consistent matrix format. Each row of the matrix represents a specific modality.
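The zero-padding and stacking step can be sketched as follows (the embedding sizes and modality order here are illustrative stand-ins, not the paper's actual dimensions):

```python
import numpy as np

# Sketch of the fusion step: zero-pad each modality embedding to a common
# length d and stack them into the K x d matrix E consumed by the decoder.

def fuse(embeddings):
    d = max(e.size for e in embeddings)
    return np.stack([np.pad(e, (0, d - e.size)) for e in embeddings])

e_vital = np.ones(4)      # vital signs embedding (toy size)
e_lab = np.ones(6)        # laboratory tests
e_mri = np.ones(8)        # MRI
e_note = np.ones(5)       # clinical notes
e_med = np.ones(3)        # medications
E = fuse([e_vital, e_lab, e_mri, e_note, e_med])   # shape (K=5, d=8)
```

Padding with zeros (rather than truncating) preserves every modality's full embedding while giving the decoder a rectangular input.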
Decoder network
The decoder network structure is composed of a stacked bidirectional GRU (Bi-GRU) network with a self-attention module. It takes the feature matrix E as input. The self-attention module serves to learn importance weights on the state vectors from the different data modalities. The Bi-GRU network takes K as the sequence length and d as the input size. We use C to denote the stack of hidden states over all time points, which is of dimension K × h, with h = 2 × hidden size (the factor 2 comes from the bi-directional network).
Each state of the bidirectional GRU network is fed into an attention module, which is a 1D convolution layer with multiple output channels. The attention module outputs a vector of attention weights γ of length g (a hyper-parameter depending on the output channels of the convolution layer). Stacking these vectors over the K steps yields the attention matrix B of dimension K × g. The attention matrix is multiplied with the GRU output, O = B^⊤ C, where O is of dimension g × h. Note that the purpose of this attention layer is to enforce a feature reduction from the high-dimensional GRU outputs to a smaller and more informative lower-dimensional embedding, not only reducing noise but also increasing the efficiency of neural network training.
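The dimension bookkeeping of this reduction can be verified numerically (random tensors stand in for the learned Bi-GRU states and attention weights; the sizes are made up):

```python
import numpy as np

# Numerical check: Bi-GRU hidden states C (K x h) are reduced by the
# attention matrix B (K x g) to O = B^T C of size g x h, which is then
# flattened before concatenation with the demographics vector.

rng = np.random.default_rng(1)
K, hidden = 5, 16
h = 2 * hidden                      # bi-directional => factor of 2
g = 3                               # attention output channels (hyper-parameter)

C = rng.normal(size=(K, h))         # stacked Bi-GRU states
B = rng.normal(size=(K, g))         # attention weights from the 1D conv module
O = B.T @ C                         # (g, h) reduced representation

flat = O.ravel()                    # flattened for the final FC layer
```

With g much smaller than K × h after flattening, the attention layer acts as the feature-reduction bottleneck described above.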
The output matrix O is flattened, concatenated with the patient demographic data vector d, and fed into a fully-connected (FC) layer for prediction (see Fig. 7). The fused input matrix is assembled as E = [ZeroPadding(e_1)^⊤, ..., ZeroPadding(e_K)^⊤]^⊤.
Results
The prediction model is implemented using Python and PyTorch, and the training process is conducted on a Tesla A100 graphics card. Before feeding the data into the model, a comprehensive quality check is performed on all modalities. Any low-quality data, such as empty clinical notes or meaningless lab test results, is carefully excluded from the dataset.
To evaluate the model's performance, a 5-fold cross-validation approach is employed. The dataset of 300 patients is randomly divided into five folds, with each fold used iteratively as the hold-out test set (20%) while the remaining folds serve as the training set (80%). This cross-validation strategy allows for a robust assessment of the model's predictive capabilities.
The performance of the prediction model in identifying patients with an EDSS score greater than 4.0 is summarized in Table 3. Multiple data modalities are considered, and their individual and combined contributions to the prediction task are evaluated. It is observed that the utilization of multimodal data inputs generally yields superior performance compared to single-modal inputs. Specifically, the top three performances in terms of AUROC are achieved when utilizing all data modalities (0.8380), combining EHR and clinical notes (0.8078), or combining MRI and clinical notes (0.7988).
Moreover, the degradation in prediction performance resulting from excluding either MRI or EHR data from the input is minimal. This is reflected in the performance of the MRI & Notes and EHR & Notes combinations in Table 3, which place four of five and three of five metrics, respectively, among the highest values. This suggests that these modalities provide limited additional information beyond clinical notes when predicting the severity of MS. Notably, if clinical notes are entirely omitted from the input data, the prediction performance drops to 0.7836. Additional information on the model's performance in predicting other EDSS milestones, such as EDSS > 6 and EDSS > 7, using all data modalities, can be found in Table 4.
MRI Images
We introduce five channels to process the MRI sequences, each employing a ResNet structure. The five channels are independent, and each is trained to learn from one sequence (T1-pre, T1-post, T2, FLAIR, and PD). All MRI images are bias-corrected, skull-stripped, registered, and intensity-normalized [27].
The following data augmentation is applied during model training. Image intensity normalization is applied, and random horizontal and vertical flips are each performed with a probability of 0.5. Random rotation is performed with a probability of 0.5, by a maximum of ±0.02 degrees on all three dimensions. Random zoom-out (followed by resizing) is applied to prevent the network from taking shortcuts by memorizing pixel locations instead of learning characteristic lesion areas. If a patient had MRI scans in more than one clinic visit, we use the last scan, as it represents the patient's most recent disease status. Due to the relatively high imbalance between positive and negative samples, we perform 10-fold resampling of the negative training samples during model training.
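Two of these steps, the random flips and the class resampling, can be sketched in plain Python (the image is represented as a list of rows; rotation, zoom-out, and the actual tensor pipeline are omitted from this illustration):

```python
import random

def augment(image, rng=random):
    """Random horizontal and vertical flips, each with probability 0.5, on a
    2-D image given as a list of rows."""
    if rng.random() < 0.5:
        image = [row[::-1] for row in image]  # horizontal flip
    if rng.random() < 0.5:
        image = image[::-1]                   # vertical flip
    return image

def oversample(samples, labels, target_label, factor=10):
    """Replicate samples of the under-represented class so that they appear
    `factor` times in total, countering class imbalance."""
    pairs = list(zip(samples, labels))
    extra = [(s, l) for s, l in pairs if l == target_label]
    return pairs + extra * (factor - 1)
```

In the real pipeline these operations act on MRI volumes; `oversample` is a simple duplication scheme standing in for whatever resampling procedure the authors used.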
For each channel, a separate ResNet model is trained on the training dataset, and we select the trained model with the best performance on the validation dataset. Our goal at this stage is to learn a latent vector representation of the MRI image rather than to perform disease classification; therefore, the training process is formulated as a metric learning task in which each channel's ResNet learns an embedding for one MRI sequence of a patient. The triplet margin loss [28] operates directly on embedding distances, pulling the matching point (positive) toward the reference point (anchor) and pushing the non-matching point (negative) away from it. The network is thus trained to learn well-separated embedding vectors for positive and negative patients, which the downstream decoding network uses for classification. The triplet margin loss is defined as

L(a_i, p_i, n_i) = max{ d(a_i, p_i) − d(a_i, n_i) + margin, 0 },

where a_i, p_i, and n_i are the anchor, positive, and negative samples in the batch, respectively, and d(·, ·) is the distance between embeddings. We set the anchor point as a fixed point in the embedding space; the distance from positive samples to the anchor is therefore minimized, and the distance from negative samples to the anchor is maximized.
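PyTorch ships this loss as `nn.TripletMarginLoss`; a pure-Python sketch of the per-triplet quantity being minimized:

```python
import math

def euclidean(u, v):
    """Euclidean distance between two embedding vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def triplet_margin_loss(anchor, positive, negative, margin=1.5):
    """max(d(a, p) - d(a, n) + margin, 0): the loss is zero once the negative
    is at least `margin` farther from the anchor than the positive."""
    return max(euclidean(anchor, positive) - euclidean(anchor, negative) + margin, 0.0)
```

The default margin of 1.5 matches the value reported for this model.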
The margin in the triplet margin loss is chosen to be 1.5. The learning rate is set to 10^−5 and the batch size is 10. The ResNet in each encoder channel is trained for 500 epochs, with early stopping after 50 consecutive epochs without improvement on the validation dataset.
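The early-stopping rule can be sketched as follows (a simplified illustration operating on a precomputed list of per-epoch validation scores rather than a live training loop):

```python
def run_with_early_stopping(val_scores, max_epochs=500, patience=50):
    """Scan per-epoch validation scores (higher is better) and return
    (stop_epoch, best_score): training halts at max_epochs, or as soon as
    `patience` consecutive epochs pass without improvement."""
    best, best_epoch = float("-inf"), 0
    for epoch, score in enumerate(val_scores[:max_epochs]):
        if score > best:
            best, best_epoch = score, epoch
        elif epoch - best_epoch >= patience:
            return epoch, best
    return min(len(val_scores), max_epochs) - 1, best
```

The model checkpoint kept is the one from `best_epoch`, i.e. the best validation performance seen before the stop.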
We leverage gradient-weighted class activation mapping (Grad-CAM) [29] to locate and visualize the regions the ResNet relies on when predicting the target. Grad-CAM uses the gradients of the prediction target flowing into the last convolutional layer of the ResNet to produce a heatmap of regions weighted by their contribution to the prediction; see Fig. 8.
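The core Grad-CAM computation can be sketched in plain Python on nested lists (the real implementation hooks into PyTorch gradients; this only shows the weighting step):

```python
def grad_cam_heatmap(activations, gradients):
    """Grad-CAM weighting: each channel's activation map is scaled by the
    spatial mean of its gradient map; the weighted maps are summed over
    channels and passed through a ReLU."""
    h, w = len(activations[0]), len(activations[0][0])
    heat = [[0.0] * w for _ in range(h)]
    for act, grad in zip(activations, gradients):
        alpha = sum(sum(row) for row in grad) / (h * w)  # channel weight
        for i in range(h):
            for j in range(w):
                heat[i][j] += alpha * act[i][j]
    return [[max(v, 0.0) for v in row] for row in heat]  # ReLU
```

The resulting map is then upsampled to the input resolution and overlaid on the MRI slice, as in Fig. 8.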
Clinical notes
We preprocess patients' clinical notes by identifying and removing all sensitive patient health information that is irrelevant to our prediction task, including patient and physician names, addresses, phone numbers, and email addresses. As with the MRI image data, we formulate embedding generation from clinical notes as a metric learning problem, in which the message-passing graph neural network is trained to learn meaningful embeddings that separate positive and negative samples; hence, the same loss function (14) is used for this encoder channel. We set the window size to 10 (covering 10 consecutive words) and use 2 message-passing layers. The hidden size of the GRU network is 64. We train the graph network for 500 epochs with a batch size of 128, a learning rate of 10^−3, and early stopping after 50 epochs without improvement on the validation dataset. We choose the best-performing model on the validation dataset and run it on the test dataset to obtain the model's final performance.
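The construction of the note graph is not detailed here; one common construction consistent with the stated window size, sketched below as an assumption, is to connect words that co-occur within the sliding window:

```python
def window_edges(tokens, window=10):
    """Build the edge set of a word co-occurrence graph: connect every pair
    of distinct words that appear within the same sliding window of
    `window` consecutive tokens."""
    edges = set()
    for i, w in enumerate(tokens):
        for j in range(i + 1, min(i + window, len(tokens))):
            if tokens[j] != w:
                edges.add(tuple(sorted((w, tokens[j]))))
    return edges
```

The message-passing layers would then propagate word features along these edges before pooling into a note-level embedding.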
Structured EHR
The patient's structured EHR consists of tables in 4 categories: laboratory tests, vital signs, medications, and demographics. The first 3 categories are in the format number of timestamps × number of features, containing laboratory test results (float), vital sign measurements (float), and medications (0/1 indicators), respectively. All numerical (non-categorical) data is standardized to the range 0 to 1 using min-max scaling. Table 2 shows the subset of variables from these 3 categories pre-selected for our model based on their observation frequency; features (lab tests, medications) recorded for less than 10% of patients were discarded. The categorical features in the demographics table are race (0/1, one-hot encoded), ethnicity (0/1, one-hot encoded), sex (0/1, male/female), and age (float, min-max normalized). The encoder network consists of 3 channels, one for each of the first 3 categories, with network parameters described in Table 5.
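The two preprocessing steps, min-max scaling and the 10% observation-frequency filter, can be sketched as follows (the feature names used in the example are hypothetical):

```python
def min_max_scale(values):
    """Scale a numerical feature column to [0, 1] via (x - min) / (max - min)."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

def frequent_features(observed_counts, n_patients, min_frac=0.10):
    """Keep only features observed in at least `min_frac` of patients."""
    return {f for f, c in observed_counts.items() if c / n_patients >= min_frac}
```

Scaling each column independently keeps heterogeneous lab units (e.g. counts vs. concentrations) on a comparable numeric range for the encoder.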
A patient's three structured EHR embeddings produced by the encoder network are concatenated with the five MRI image embeddings produced by the ResNets and with the clinical note embedding, and fed into the decoder network. For the small number of patients without MRI or clinical notes, the corresponding embedding is set to an all-zero vector. In the decoder network, the bidirectional GRU has 4 layers and a hidden size of 512. The self-attention modules in the encoder channels for laboratory tests, vital signs, and medications provide insights into feature importance, i.e., how much the model relies on each feature to make correct predictions. Figure 9 illustrates the importance of laboratory features evaluated on the test set of patients; larger values indicate higher importance. From the figure, we observe that the top three important features across patients are "Absolute Neutrophils", "Absolute Lymphocytes", and "Platelet".
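The fusion step described above, concatenation with zero-vectors for missing modalities, can be sketched as follows (the per-modality embedding size of 64 is illustrative, not taken from the paper):

```python
def fuse_embeddings(mri_embs, note_emb, ehr_embs, dim=64):
    """Concatenate five MRI embeddings, one clinical-note embedding, and
    three structured-EHR embeddings into the decoder input; any missing
    modality is replaced by all-zero vectors of the same size."""
    zero = [0.0] * dim
    parts = (mri_embs or [zero] * 5) + [note_emb or zero] + (ehr_embs or [zero] * 3)
    return [x for part in parts for x in part]
```

Zero-filling keeps the decoder input dimension fixed regardless of which modalities a given patient has on record.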
Similarly, Fig. 10 depicts the feature importance for vital signs and medications. Our algorithm identifies medications such as "Baclofen 10 MG Oral Tablet", "Gabapentin 300 MG Oral Capsule", and "predniSONE 50 MG Oral Tablet" as highly important, consistent with their common use in treating MS symptoms. Regarding vital signs, features such as "Temperature", "Respiration", "Pulse Quality", and "Respiration Quality" receive low importance, in line with the clinical consensus that they are less critical indicators of MS severity.
These findings provide valuable insights into the relevance of specific features for the prediction of MS severity, aiding in understanding the underlying factors and potential treatment options.
Discussion
In this study, we propose a multimodal deep neural network approach that combines EHR and neuroimaging data to predict MS disease severity. By leveraging diverse sources of information such as laboratory tests, vital signs, medications, neuroimaging data, and clinical notes, our model aims to provide accurate predictions of the EDSS score, a widely used metric for evaluating MS disease severity. The study focuses on three EDSS milestones (4.0, 6.0, and 7.0), since they are widely accepted as critical transition points between MS stages. For example, Confavreux et al. used these milestones to study the effect of relapses on the progression of irreversible disability [30]. The same milestones have also been used to study the contribution of relapses to worsening disability and to evaluate the effect of MS therapies on delaying disability accumulation [31]. A Swedish research group studied whether the risk of reaching these disability milestones has changed over the last decade [32]. Rzepiński et al. used the EDSS milestones to explore early clinical features of MS and how they affect patients' long-term disability progression [33]. The same milestones were also used to study how these factors affect the time to transition from relapsing-remitting MS (RRMS) to secondary progressive MS (SPMS).
While MRI images and clinical notes have been recognized as valuable sources of diagnostic information for MS, the role of laboratory test results in predicting disease severity remains uncertain; this study aims to contribute to the understanding of this matter from an engineering perspective. Previous research has indicated that both MRI data and certain laboratory test results can provide meaningful insights into MS disease severity. Notably, studies have demonstrated a strong correlation between the thickness of cortical and deep grey matter in MRI images and the severity of MS, underscoring the informative nature of MRI data in predicting disease progression [34,35]. Some laboratory tests have also been documented as playing an important role in this regard, such as cerebrospinal fluid (CSF) [36,37] and serum neurofilament light chain (NfL) [38].
The results show that, despite the many publications, conventional MRI contains relatively little information about MS severity compared to the other data modalities, although the T2 and FLAIR sequences performed better than the other MRI sequences. Clinical notes are well documented as useful for predicting EDSS, and our experiments re-verify this: the comparatively weak performance of MRIs, EHR, or MRIs & EHR improved in every case when clinical notes were added to the input. A re-examination of the data suggests a reasonable explanation: the clinical notes contain rich general disease information, including patient status, medical procedures, and treatments, which implicitly and partially embeds information from the EHR data and MRI images.
For MRI image processing, other variants of ResNet [39] could alternatively be utilized as embedding learning networks in our task. However, our experimental findings indicate that employing different network structures for the MRI sequences leads only to marginal improvements in prediction performance. This can be attributed to two reasons. Firstly, the inherent capabilities of the ResNet model enable it to effectively capture essential features within the MRI images, thereby generating well-separated embeddings for positive and negative patients. Secondly, since the MRI data represents only a subset of the overall multimodal input, the impact of ResNet variations on the final prediction is diluted by the presence of the other data modalities.
There are a few future research directions for this study. First, an equally interesting research question is to predict a patient's MS disease progression rate: having an EDSS of 4.0 at the age of 65 after a disease duration of 40 years indicates a relatively benign disease, whereas reaching an EDSS of 4.0 only 5 years after MS diagnosis is considered "aggressive" MS. Moreover, the severity of MS can be seen as a relative concept rather than an absolute one: it should be studied against an understanding of the "natural" disease progression, and it varies with many factors (e.g. sex, disease duration, lesion load, atrophy). Limited by the data size and the lack of commonly agreed criteria to distinguish "aggressive" cases from the rest, we focus for now on developing a tool to predict EDSS milestones and leave the judgment of MS severity to MS specialists, who can jointly consider all the above factors. This question could also be studied using survival analysis methods, and the results would have a high impact on preventing rapid disease progression through early intervention.
The second direction concerns a limitation of the imaging data. While random rotation of MRI scans (a data augmentation technique used to train the ResNets on the MRI sequences) helps generalizability, the use of a single scanner for the whole dataset makes it difficult to infer whether the model would behave the same way on images from a different scanner; our work therefore serves as a proof of concept in this respect. Ideally, more data, especially from external sources, should be collaboratively collected to verify that including MRI has a positive impact on a multimodal model.
Thirdly, the study was conducted on a cohort of 300 MS patients from a local academic medical center, so an important future direction is to evaluate the generalizability of the proposed model to other institutions. Replicability should be checked from two perspectives: first, the prediction accuracy with or without model re-training, i.e., model generalizability; and second, whether the ranking of importance across data modalities holds in general, for example, that MRI images and clinical notes contain more signal than the structured EHR. If the results of this study are verified, it may serve as a cost-effective guide to which electronic health information should be collected to reach maximum prediction accuracy. To address the limited dataset size, collaborative studies that pool datasets from multiple sites are encouraged; such an approach could leverage federated learning with secure data-sharing mechanisms to facilitate joint investigations. This not only has the potential to enrich our dataset but also aligns with the emerging field of multimodal federated learning, offering an exciting avenue for future research.
Another compelling research question from a technical standpoint revolves around the use of time windows for averaging observations. As discussed, this technique proves valuable in reducing the size of longitudinal data while retaining essential temporal information. However, there exist more advanced methods for handling long sequences of temporal data. Although not the primary focus of this study, notable techniques include data resampling (subsampling) and deep neural networks capable of handling longer sequences without issues like the vanishing gradient problem, such as transformer models.
Conclusion
The study focuses on predicting patients' MS severity three years into the future using current and historical multimodal medical information, with the goal of developing an AI-based patient disease status evaluation tool that exceeds human capabilities.
This research represents an initial exploration of integrating multiple data modalities to predict MS severity, while also assessing the effectiveness of each modality for this prediction task. Our experimental results highlight brain MRI images and clinical notes as the most informative modalities for predicting MS severity, while structured EHR data demonstrates relatively limited relevance to this specific prediction objective. By integrating and analyzing multimodal data, our approach aims to improve the understanding of MS disease progression and provide valuable insights for clinical decision-making and treatment planning.
Fig. 3 The MRI sequences of a patient as an example

Figure 4 demonstrates an example patient's three clinic encounters. Note that not all data modalities were observed in each encounter.

Fig. 4 Example: the clinic visits of an example patient and the information (data modality) recorded during each visit

Fig. 7 The decoder network for our proposed deep neural network

Fig. 8 Attention maps for MRI sequences of a sample patient

Fig. 9 The attention weights for laboratory tests

Fig. 10 The attention weights for (a) medications and (b) vital signs
Table 1 An overview of patient statistics in the dataset (SD: standard deviation)

Table 2 The features from the structured EHR data tables, including laboratory tests, vital signs, and medications
Table 3 Prediction accuracy performance of using different data modalities for predicting EDSS > 4. In each evaluation metric, the top three highest scores are highlighted

Table 4 Prediction accuracy performance at different EDSS milestones

Table 5 Encoder network parameters (I: input channel size, O: output channel size, K: kernel size, S: stride size, P: padding size, R: (dropout) rate)
Disrupted classes, undisrupted learning during COVID-19 outbreak in China: application of open educational practices and resources
With the coronavirus (COVID-19) outbreak in China, the Chinese government decided to ban any type of face-to-face teaching, disrupting classes and resulting in over 270 million students being unable to return to their universities/schools. Therefore, the Ministry of Education (MoE) launched an initiative titled ‘Ensuring learning undisrupted when classes are disrupted’ by reforming the entire educational system and including an online education component. However, this quick reform in this unexpected critical situation of widespread COVID-19 cases harbours several challenges, such as the lack of time and teacher/student isolation. This paper discusses the possibility of using open educational resources (OER) and open educational practices (OEP) as an effective educational solution to overcome these challenges. Particularly, this study presents a generic OEP framework built on existing open-practice definitions. It then presents, based on this framework and based on the challenges reported by several Chinese education specialists during two national online seminars, a set of guidelines for the effective use of OER and OEP for both teaching and learning. Finally, this study presents some recommendations for the better adoption of OER and OEP in the future. The findings of this study can help researchers and educators apply OER and OEP for better learning experiences and outcomes during the COVID-19 outbreak.
Introduction
A pneumonia outbreak was first reported in the city of Wuhan, the capital of Central China's Hubei province, in December 2019. Experts have attributed the outbreak to a novel coronavirus (COVID-19) whose first four cases are linked to the Huanan (Southern China) Seafood Wholesale Market (Li et al. 2020). Since then, the COVID-19 has spread across China and worldwide, causing over 70,000 deaths. The World Health Organization (WHO 2020) defined coronavirus as 'a large family of viruses that cause illness, ranging from the common cold to more severe diseases, such as Middle East [respiratory syndrome] (MERS-CoV) and [severe acute respiratory syndrome] (SARS-CoV). A novel coronavirus (nCoV) is a new strain that has not been previously identified in humans'.
To contain the COVID-19 spread, the Chinese government issued a notice for all people, including students, to remain home for quarantine until further notice, resulting in almost 276 million students being unable to return to their schools and universities. UNESCO (2020) highlighted that over 1.5 billion learners around the world were not able to attend school or university due to the COVID-19 outbreak as of April 4th, 2020. In this context, the Chinese Ministry of Education (MoE) and several education specialists and universities have started discussing the use of information and communication technology (ICT) to reform the entire educational system in the midst of this pandemic and provide online and distance learning instead, even with disrupted classes. Generalised online and distance education in China started in the 1960s, when Chinese radio and TVs started offering distance education to remote areas, while several Chinese universities have adopted online education since 1998 (Ting et al. 2018). While online and distance education is not new in China, several challenges have arisen regarding this type of system in this unexpected and critical situation, namely, the following:

1. Lack of preparation time: Teachers have not prepared their learning content to adapt to online learning, and preparing such content will take time. Similarly, several universities and schools have not improved their online learning environments to support this kind of learning experience.

2. Teacher/student isolation: In this first-ever application of pure long-term online learning (without face-to-face or blended learning), both teachers and students should not feel that they are left alone during the teaching and learning processes.

3. Need for effective pedagogical approaches: New effective pedagogical approaches are needed to keep students motivated and engaged during this long period of online learning, especially since drop-out rates in distance learning are generally higher than in campus-based learning.
To help overcome the problem of limited time to prepare online learning content, teachers should make use of the thousands of open educational resources (OER) published by the MoE and available in other national and international repositories as well as public online tools, platforms, and enabling technologies. The term open educational resources was first coined at UNESCO's 2002 Forum on Open Courseware, and it was defined in the recent UNESCO recommendation on OER as 'learning, teaching, and research materials in any format and medium that reside in the public domain or are under copyright that have been released under an open licence that permit no-cost access, [reuse], [repurpose], adaptation, and redistribution by others' (UNESCO 2019). Blackall and Hegarty (2011) also mentioned that using OER can save time in preparing learning materials.
To solve the problems related to teacher/student isolation as well as the need for effective pedagogical approaches to keep students active and engaged, teachers should build their courses around OER and ask their students to find content to solve problems, write reports, or research topics. Specifically, open educational practices (OEP), including open pedagogy, open collaboration, and open assessment, should be implemented to keep the students motivated and engaged during this long period of online learning. Ehlers (2011) defined OEP as 'practices which support the (re) use and production of OER through institutional policies, promote innovative pedagogical models, and respect and empower learners as co-producers on their lifelong learning paths'. The recently approved UNESCO (2019) OER recommendation also stated that 'the judicious application of OER, in combination with appropriate pedagogical methodologies, well-designed learning objects, and the diversity of learning activities, can provide a broader range of innovative pedagogical options to engage both educators and learners to become more active participants in educational processes and creators of content as members of diverse and inclusive [knowledge societies]'. Chiappe and Adame (2018) stated that OEP have become a growing educational trend based on ICT.
This study discusses how to apply OER and OEP in the Chinese educational system during this COVID-19 outbreak. Particularly, based on a review of the literature and with the help of several government departments and education specialists, this study offers several guidelines for Chinese educators regarding OER and OEP application in education. Additionally, this study discusses strategies urgently applied by the Chinese government as well as Chinese companies and universities to support open and online learning. Finally, based on the challenges identified by Chinese education specialists and the authors of this study (themselves educators and researchers), several recommendations are presented to enhance the future adoption and application of OER and OEP in China and in other contexts. The suggested enabling tools and technologies were identified for both the Chinese and international contexts; examples of alternative international tools and technologies are given in parentheses so that international readers can apply the suggested OEP in their respective contexts. For instance, instead of using Sina Weibo (a Chinese social network), these readers can use Twitter or Facebook.
The rest of the paper is structured as follows. Section 2 presents a literature review about OER and OEP in China as well as a generic OEP framework for open education based on several definitions of OEP in the literature. Section 3 presents guidelines on using OER and OEP during this COVID-19 outbreak. Section 4 presents a set of urgent applied strategies to support open and online learning by the Chinese government as well as Chinese universities/schools and companies. Finally, Section 5 offers several recommendations to enhance OER and OEP adoption and application in China, concludes the paper, and presents future directions based on this research.
Literature review
This section starts by conducting a review on OER and OEP in China. It then presents an OEP framework for open education.
OER in China
The concept of OER was introduced in China following the MIT OpenCourseWare conference in Beijing in 2003. Since then, the Chinese government has launched several initiatives to enhance OER adoption. For instance, the MoE provided 5 years' worth of funds to support the Chinese Quality Course (CQC) project, which aims to provide open learning materials to the public for free. Particularly, Chinese universities provide bonus funding for teachers who contribute to the CQC project by publishing their courses within this project. The Chinese Ministry of Culture and Finance also funded the National Cultural Information Resources Sharing Project (NCIRSP), which focuses on sharing cultural resources that aid in the construction of the public service system of culture in China. Finally, the Chinese Ministry of Science and Technology funded the Science Data Sharing Project (SDSP), which focuses on providing open data to the public as well as guidelines and standards on open data, including data collection and publication.
Several institutional initiatives were also launched to support OER adoption. For instance, in 2012, the Open University of China launched the five-minute course initiative, which aims to build 30,000 five-minute courses involving 100 subjects in several fields. Another initiative is XuetangX, launched by Tsinghua University and MOOC-CN Information Technology in 2013, which provides access to over 1000 free courses from Tsinghua, Fudan, MITx, HarvardX, and other universities. Today, open teaching resources in China can be grouped into three categories, where some of them are not in compliance with all OER conditions as specified in the UNESCO OER definition (Tlili, Huang, Chang, Nascimbeni, & Burgos, 2019), namely: resources which are made publicly available by Chinese universities and libraries for free but without any open licences; resources which are under open licences or protected by Chinese copyright laws that allow their free use and/or reuse; and resources which are not under open licences and do not reside in the public domain yet are made available for free public use by government policies.
With the rapid evolution of the open education concept, researchers have shifted their focus from content-centred approaches, which focus on educational resources (creation, sharing, etc.), to more practice-centred ones that foster collaboration between learners and teachers for creating and sharing knowledge (Cronin, 2017). In other words, researchers and educators have shifted their focus from creating and publishing OER to practices that can be implemented using OER for education, referred to as Open Educational Practices (OEP). In line with these developments, this study focuses on OEP that could be used to provide active and engaging learning experiences for learners during this COVID-19 outbreak. To do so, OEP must be fully understood since, as noted by Cronin (2017), their scope is rapidly evolving, and researchers tend to focus on different OEP perspectives. Therefore, the next section aims to draw an OEP framework based on several reported OEP definitions in the literature. The authors will then refer to this framework when presenting guidelines about OEP application in education during the COVID-19 outbreak.
OEP framework for open education
To better understand OEP, a comprehensive review was conducted about the reported OEP definitions in the literature, as shown in Table 1. Several keywords were then identified from each definition (see Table 1). Finally, based on these keywords, five conditions are identified and discussed below, which have been used to create the OEP framework of this study.
OER:
Teaching materials used within OEP should be openly licenced, and the resources produced during the course (e.g. reports, presentations, videos) should also be released as OER. The Open eLearning Content Observatory Services (OLCOS) project defined OEP as 'practices that involve students in active, constructive engagement with content, tools, and services in the learning process and promote learners' self-management, creativity, and working in teams' (Geser, 2007).
Open teaching: Educators should implement teaching methodologies that can help students to construct their own learning pathways (self-regulated) and to actively contribute to knowledge building, both individually and collaboratively.
Open collaboration: Teachers should build open communities, for instance by using social networks, to help students to work in teams to carry out particular learning tasks (e.g. editing a blog, creating a Wikipedia page) as well as to exchange ideas and discussions related to those specific learning tasks. Other teachers and stakeholders can participate in these discussions as well to further assist learners.
Open assessment: Teachers should allow learners to evaluate one another (peer assessment). This can emphasise reflective practices and improve learning outcomes.
Since OEP, like all teaching activities, are complex and multipronged practices, the conditions above are interrelated. Specifically, four relations, discussed below, can be formed among the five aforementioned conditions. All these relations are mediated by technology, which is conceived as an enabling condition for OEP rather than the central aspect of the practice. It should be noted that numbers are used as indices of the relations and do not reflect their order of importance. It should also be noted that the five conditions and four relations presented in Fig. 1 can lead to new conditions and relations; however, these were not considered, as they are not directly related to the focus of this study, namely helping to provide effective educational experiences using OER and OEP during the COVID-19 outbreak. For instance, all these relations result in a large amount of learning interaction data that can be released as "open data" to further enrich the educational offer (Atenas et al. 2015).
Challenges and guidelines on using OER and OEP during this COVID-19 outbreak
Two national online seminars on the '[first-line] demand and implementation of online and open education during the COVID-19 outbreak' were organised by the National Engineering Laboratory for Cyberlearning and Intelligent Technology on February 19 and 27, 2020. More than 140 individuals attended these two seminars, 40 of whom were experts and education specialists from several universities and schools (primary, middle, and high school) from different areas and provinces, including remote areas and Wuhan, the city with the most COVID-19 cases. The goal of these seminars was to gather experts with varied educational experiences in different contexts (urban vs. remote, universities vs. schools, and totally closed areas like Wuhan vs. less closed areas) to discuss the new educational system reform towards open and distance education and the potential challenges that teachers or learners might face. Five researchers from the Smart Learning Institute of Beijing Normal University (SLIBNU) summarised the challenges reported during these seminars and then grouped them via a card sorting method. This method is used to organise and improve the architecture of information and has been used in various fields, such as educational robots (Cheng, Sun, & Chen, 2018), to create groups of collected information. The grouping was applied by the five researchers based on the definition of each challenge; where groupings differed, agreement was reached through discussion. Based on the obtained challenges (specifically those related to our study), as well as on other challenges reported during this COVID-19 outbreak by Chinese scholars and international experts in the literature, several guidelines on using OER and OEP for learning and teaching are discussed below.
Guidelines for teachers
To ensure an active and engaging teaching experience using OER and OEP for better learning outcomes, teachers should refer to the following guidelines: Yang (2020) mentioned that copyright is one of the challenges of using online resources. Indeed, during OEP development, teachers should pay attention to the attributed open licence of each OER to ensure its legal use in their context. Professor ID-012 from the online seminar mentioned that teachers might not be familiar with the process of choosing the most suitable resources to use in their teaching. In this context, Ozdemir and Bonk (2017) pointed out that searching for high-quality OER among the thousands that are published is a difficult task. Therefore, teachers should consider the quality of the OER they use by referring to well-known national and international OER repositories, such as the Massachusetts Institute of Technology (MIT), Commonwealth of Learning-OAsis OER, and Open Knowledge Repository (all these OER repositories are accessible anywhere). Additionally, assessing and selecting high-quality OER is one of the most challenging tasks. Therefore, OER can be selected based on several criteria, including licensing, accuracy/quality of the content, interactivity, ease of adaptability, and cultural relevance and sensitivity. Professor ID-013 from the online seminar also mentioned that teachers might lack the technical skills to develop their OER. Therefore, to properly create and publish OER, teachers can refer to several national and international authoring tools, such as 101 ppt software and ALECSO Hub, the Connexions repository authoring tool, or Open Author (all these authoring tools are accessible anywhere), where learning resources can be created via a few clicks and no specific technical skills are needed.
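The selection criteria just listed (licensing, accuracy/quality, interactivity, adaptability, and cultural relevance) could be combined into a simple weighted rubric when comparing candidate resources. The sketch below is purely illustrative: the weights, the 0-5 rating scale, and the `score_oer` helper are assumptions for this example, not part of any OER standard.

```python
# Purely illustrative rubric: the weights are assumptions, not an OER standard.
RUBRIC_WEIGHTS = {
    "licensing": 0.25,
    "content_accuracy": 0.30,
    "interactivity": 0.15,
    "adaptability": 0.15,
    "cultural_relevance": 0.15,
}

def score_oer(ratings):
    """Weighted mean of per-criterion ratings, each on a 0-5 scale."""
    missing = set(RUBRIC_WEIGHTS) - set(ratings)
    if missing:
        raise ValueError(f"unrated criteria: {sorted(missing)}")
    return sum(w * ratings[c] for c, w in RUBRIC_WEIGHTS.items())

candidate = {
    "licensing": 5,           # e.g. CC BY, remixing allowed
    "content_accuracy": 4,
    "interactivity": 3,
    "adaptability": 4,
    "cultural_relevance": 5,
}
print(round(score_oer(candidate), 2))
```

A teacher could rate each shortlisted OER on the five criteria and keep the highest-scoring resources; the weights should of course be adapted to the local context.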
During the teaching process, teachers should apply open teaching to engage learners and encourage them to participate in the co-creation of knowledge (Nascimbeni and Burgos 2016). For instance, teachers can ask students to update a given blog related to a specific learning topic using the Baidu encyclopaedia (Wiki pages, for international readers). Additionally, teachers can apply the connectivist learning approach (Goldie 2016) by asking students to write reports as OER on a given topic as well as create new exercises for a specific chapter in an open textbook based on several references and resources. This can help learners gain digital literacy skills (searching, assessing, and identifying online resources) which are fundamental for twenty-first-century literacy. Particularly, teachers can ask students to work on public Tencent documents (Google Docs, for international readers), where they can see one another's work and progress. This can emphasise peer assessment and reflective practices.
To facilitate OEP adoption, teachers should select user-friendly learning tools and technologies that learners are already familiar with. They should also avoid overloading learners by asking them to use too many tools, which results in inconvenient learning practices. Additionally, teachers can refer to open software because, by nature, it can be modified and adapted to different needs, fulfilling more accessibility requirements than proprietary software (Zhang et al. 2020). For instance, the open source learning management system Moodle was adapted to cover new functionalities, such as detecting at-risk students (Denden et al. 2019).
Learning is facilitated not only by teachers but also by peers (Hegarty 2015). Therefore, to make the teaching process more interactive, teachers can build open learning communities where the students can openly exchange ideas, create discussions, and collaborate on different tasks. To ensure interactive and open learning communities, teachers should use social networks during the learning process, such as Wechat, QQ, and Sina Weibo (Facebook or Twitter, for international readers). By using these social networks, teachers can share questions related to specific course materials, and students can discuss them to determine specific answers. Consequently, students learn by exchanging ideas and opinions. Furthermore, the jigsaw classroom pedagogy (invented and named in 1971 by Elliot Aronson) can be applied online by dividing the assignment into several tasks and making each team work on a specific task. The teams will use social networks to work together, communicate with one another, and deliver their assignments. This will foster both individual accountability and the achievement of team goals.
Additionally, open learning within social networks can be gamified using emojis to make the learning process more engaging and interactive. For instance, Saif et al. (2019) used the number of "likes" a learner's answer received on Facebook as that learner's score for the answer. During the learning process using OER and OEP, teachers should act as facilitators. For instance, teachers can help their students with their reports by suggesting useful references that they should read. Also, teachers should have an active role in building a trustworthy learning environment by continuously encouraging their students to share their opinions and answers. Hegarty (2015) mentioned that building trust and self-confidence is an important factor in open learning environments so as to achieve excellent learning outcomes. Wiley (2013)
Guidelines for learners
To ensure an active and engaging learning experience using OER and OEP for better learning outcomes, learners should refer to the following guidelines: Just like teachers, learners should pay attention to the attributed open licence of each OER to ensure its legal use in their context, as some combinations of licences, for instance, do not allow OER remixing. Thousands of OER are published online by authors whose reliability is unknown. Therefore, learners should carefully search for, select, and summarise information while preparing their content (e.g. assignments, presentations, videos, reports) to ensure high-quality OER.
Learners should remember to attribute open licences to their prepared open learning materials so they can be reused by others as OER.
To develop their independence and capacity to self-regulate within open learning experiences, learners must develop such skills as behavioural self-regulation and emotional self-regulation. For instance, learners should maintain a positive attitude when facing learning challenges and consider these challenges as new learning opportunities. Learners should be collaborative and active in building an open learning community by encouraging their peers and participating in discussions.
Furthermore, professors from the online seminar mentioned that the technical reliability of the internet in some Chinese areas and the complete absence of internet architecture in other areas are major challenges in the midst of this COVID-19 outbreak. Specifically, according to the China Internet Development Report of 2019, the number of Chinese rural internet users reached 225 million, accounting for 26.3% of the total number of internet users in China. Additionally, several professors mentioned that open courses related to prevention against the COVID-19 should be created to increase people's safety in China and worldwide.
Urgent applied initiatives to support open and distance education
To further cover the challenges mentioned by professors and education specialists during the two online seminars presented above, several initiatives were created by the MoE as well as Chinese universities and companies to support open and distance education, as follows: Several open courses and thematic teaching resources focusing on the epidemic have been produced, including patriotic education, epidemic prevention knowledge, psychological knowledge, and other resources of different disciplines. For example, two schools in Wuhan, the epicentre of the COVID-19 outbreak, have created a variety of educational resources around the epidemic. Additionally, the SLIBNU, in collaboration with the Arab League Educational, Cultural, and Scientific Organization (ALECSO) and the Universidad Internacional de la Rioja (UNIR), has created a series of open resources about COVID-19 protection (see http://sli.bnu.edu.cn/en/Courses/Webinars/Coronavirus_Prevention) in eleven languages: Chinese, English, Arabic, Spanish, Persian, Korean, German, French, Japanese, Urdu, and Bengali. The Department of National Textbooks under the MoE has released open versions of teaching books and textbooks for the spring semester of 2020 as well as relevant teaching resources provided by 67 textbook publishers across China, which can be downloaded and used for free by teachers and students all over the country. Additionally, under the unified arrangement of the MoE, the digital teaching resources of the People's Education Press and its affiliate, Renjiao Digital Publishing Co. Ltd., will be open on the Renjiao Diandu app to primary and secondary school students nationwide. The state has compiled textbooks of three subjects and digital textbooks of the People's Education Press as well as thousands of video and audio micro-courses synchronised with the textbooks, all contained in the application.
The number of users increased by more than 2.3 million, and the number of page views reached 250 million after the app became free for 72 h. According to the statistics of the People's Education Press, 30 million downloads of elementary- and middle-school textbooks have been observed every day. Similarly, several universities have released massive open online courses (MOOCs) for learners. To increase internet reliability, several Chinese companies, including China Mobile, China Unicom, and China Telecom, as well as Alibaba, Baidu, and Huawei, focused on enhancing the provided connectivity services and increasing the internet bandwidth to ensure that 50 million learners can access the cloud learning platform simultaneously and acquire new information without any interruptions. To ensure accessible learning experiences, four channels of China Education Television started the open broadcasting of primary- and middle-school classes across the nation, covering 75 lessons on air to provide learning experiences for those in remote areas without internet or cable TV.
Discussion, recommendations, and conclusions
The Chinese government considered education one of its priorities during the COVID-19 outbreak, both as a sector that should not experience any discontinuity because of the emergency and as a way to fight the virus itself. Therefore, several initiatives were applied to provide everyone with flexible open and online education. However, despite these strategies and the provided guidelines for the better use of OER and OEP, this critical situation (the COVID-19 outbreak) has shown that several areas related to OER and OEP should be further considered and improved in the future to facilitate OER and OEP application in general, and particularly in crises, resulting in better learning experiences and outcomes. As emphasised by Mrs. Stefania Giannini, UNESCO's Assistant Director-General for Education: "We need to come together not only to address the immediate educational consequences of this unprecedented crisis, but to build up the longer-term resilience of education systems". These areas are discussed below, and several recommendations for future consideration are presented based on the challenges and problems reported by both teachers and learners. Specifically, these recommendations are structured under the five main OER objectives proposed by UNESCO in its recent OER recommendation (2019).
i. Build the capacity of stakeholders to work with OER
Both learners and teachers lack the needed skills to create and publish OER. Therefore, several training sessions should be organised to help them work with OER and to deal with the problem of low OER quality (e.g. how to select relevant OER). These sessions should also cover the basic technical skills needed to produce OER, such as video editing or sound mixing, as well as open licences. They should be organised as blended learning, where participants are first introduced to theoretical ideas and concepts, followed by hands-on workshops in which the participants (teachers and learners) are practically involved in learning these skills (e.g. teachers working on specific software to edit a video). Furthermore, these training sessions should cover standardised meta-data tagging of a given OER to facilitate its indexing by search engines, hence increasing its visibility to learners.
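As a concrete illustration of such meta-data tagging, the snippet below builds a minimal machine-readable record for an OER. The property names loosely follow the schema.org / LRMI vocabulary, but the concrete resource, its values, and the `missing_fields` check are invented for this sketch and do not represent a prescribed repository standard.

```python
import json

# Minimal sketch of standardised OER metadata, loosely following
# schema.org / LRMI property names; the concrete values are invented.
oer_metadata = {
    "@context": "https://schema.org",
    "@type": "LearningResource",
    "name": "Introduction to Open Licensing",
    "inLanguage": "zh",
    "license": "https://creativecommons.org/licenses/by/4.0/",
    "learningResourceType": "lecture video",
    "keywords": ["OER", "open licensing", "copyright"],
}

def missing_fields(record, required=("name", "license", "inLanguage")):
    """Return required tags a repository might insist on before indexing."""
    return [f for f in required if not record.get(f)]

assert missing_fields(oer_metadata) == []
print(json.dumps(oer_metadata, ensure_ascii=False, indent=2))
```

Records like this can be serialised alongside the resource so that search engines and repository harvesters can index it consistently.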
ii. Develop supportive open education institutional policies
Several Chinese universities do not have internal policies on how to deal with OER and OEP, even though research shows that institutional policies are one of the key aspects that encourage teachers to create and openly publish their resources as OER (Atenas et al. 2019). This can be achieved, for instance, by providing financial incentives for those who contribute to enriching the OER repository of the university. Also, publishing learning materials as OER can be considered by universities as one of the criteria for academic promotion. In addition, decision makers within universities and schools often do not know the exact definition of OER and how these can be applied to enhance learning outcomes. Therefore, national seminars in several provinces should be organised to raise awareness about OER and OEP among decision makers and managers of educational institutions. This can result in a rapid increase of knowledge sharing and OER/OEP adoption. Furthermore, while most schools and universities are equipped with reliable infrastructure, several others, especially in remote areas, are not. This reduces their chances of providing open and online education to their learners. In these cases, governmental policies should be initiated to provide these schools and universities with the infrastructure needed for better teaching and learning experiences.
iii. Encourage inclusivity and equity through open education
While the use of distance education is common in China to ensure access to learning for those in rural and remote areas, less attention is normally paid to learners with disabilities. For instance, no OER repository is accessible to learners with disabilities, and OER are rarely published specifically for them. In this context, Zhang et al. (2020) stated in a recent literature review on OER and disability that researchers and educators should pay more attention to developing OER for learners with disabilities, for example by considering accessibility guidelines such as the Web Content Accessibility Guidelines (WCAG 2.0). Furthermore, low-cost technologies that facilitate offline access to OER should be developed, allowing OER accessibility even in regions with low or unstable connectivity.
iv. Nurture the creation of sustainability models for open education
The majority of OER projects were funded by the government (e.g. CQC and NCIRSP, as discussed above) only for specific periods (3 to 5 years) without defined long-term sustainability strategies. Therefore, sustainable OER models and strategies should be developed to maintain OER projects and support lifelong learning. Downes (2007) proposed eight possible typologies of OER sustainability strategies, such as sponsorship, where the cost of open content creation and dissemination is covered by sponsors in return for advertising space and promotion. Mengual-Andrés and Payà Rico (2018), on the other hand, stated that sustainability strategies should be carefully studied before their implementation as they depend on several criteria, including cultural and financial situations. To allow each institution to build its own OER and OEP sustainability strategy, the successful experiences of sustainable open education initiatives from around the globe should be broadly shared and discussed among educational leaders and managers.
v. Facilitate international cooperation
Bruhn (2017) mentioned that international cooperation within universities is no longer limited to student mobility and international agreements. Rather, it now involves deeper strategies to enhance the learners' experience by providing, for instance, international curricula (Caniglia et al., 2018; Duart, 2011). In this context, Nascimbeni et al. (2020) found that OER can meaningfully support international cooperation on open courses, MOOC development, and virtual mobility practices. In China, while the SLIBNU initiated an international initiative, namely the Belt and Road (B&R) International Community for Open Educational Resources, to facilitate OER exchange and collaboration in the Belt and Road countries, more specific OER-centred international collaboration should be put into practice, for instance, the development of international open curricula among universities as well as the translation of high-quality OER from other languages into Chinese (and vice versa). This can reduce the cost and time of developing learning materials, especially in emergencies such as the COVID-19 case.
To conclude, because of the unexpected COVID-19 outbreak, the Chinese educational system is being reformed to maintain the learning process without having to be physically present in classes. This study first presents the education challenges during this emergency and then discusses the application of OER and OEP to overcome these challenges. Particularly, this study presents a generic OEP framework built on existing open practice definitions as well as guidelines for both teachers and learners related to the implementation of OEP for an engaging and active learning and teaching experience and for better learning outcomes. These guidelines are identified based on the challenges highlighted by several experts during two national seminars, organised to discuss online and open education during this COVID-19 outbreak, as well as on several challenges highlighted in the literature by international experts. Finally, this paper discusses urgent strategies that could be applied to support open and online education reform and puts forward some recommendations to foster the adoption of OER/OEP. Future research will focus on presenting a practical experience of using OEP for teaching during the COVID-19 outbreak as well as its impact on learning outcomes.
"year": 2020,
"sha1": "8134d5da8c72c1bec9c76ab66497b55f4ba5bc6d",
"oa_license": "CCBY",
"oa_url": "https://slejournal.springeropen.com/track/pdf/10.1186/s40561-020-00125-8",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8134d5da8c72c1bec9c76ab66497b55f4ba5bc6d",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Political Science",
"Computer Science"
]
} |
A Review of Key Likert Scale Development Advances: 1995–2019
Developing self-report Likert scales is an essential part of modern psychology. However, it is hard for psychologists to remain apprised of best practices as methodological developments accumulate. To address this, this current paper offers a selective review of advances in Likert scale development that have occurred over the past 25 years. We reviewed six major measurement journals (e.g., Psychological Methods, Educational, and Psychological Measurement) between the years 1995–2019 and identified key advances, ultimately including 40 papers and offering written summaries of each. We supplemented this review with an in-depth discussion of five particular advances: (1) conceptions of construct validity, (2) creating better construct definitions, (3) readability tests for generating items, (4) alternative measures of precision [e.g., coefficient omega and item response theory (IRT) information], and (5) ant colony optimization (ACO) for creating short forms. The Supplementary Material provides further technical details on these advances and offers guidance on software implementation. This paper is intended to be a resource for psychological researchers to be informed about more recent psychometric progress in Likert scale creation.
INTRODUCTION
Psychological data are diverse and range from observations of behavior to face-to-face interviews. However, in modern times, one of the most common measurement methods is the self-report Likert scale (Baumeister et al., 2007;Clark and Watson, 2019). Likert scales provide a convenient way to measure unobservable constructs, and published tutorials detailing the process of their development have been highly influential, such as Clark and Watson (1995) and Hinkin (1998) (being cited over 6,500 and 3,000 times, respectively, according to Google scholar).
Notably, however, it has been roughly 25 years since these seminal papers were published, and specific best-practices have changed or evolved since then. Recently, Clark and Watson (2019) gave an update to their 1995 article, integrating some newer topics into a general tutorial of Likert scale creation. However, scale creation, from defining the construct to testing nomological relationships, is such an extensive process that it is challenging for any paper to give full coverage to each of its stages. The authors were quick to note this themselves several times, e.g., "[w]e have space only to raise briefly some key issues" and "unfortunately we do not have the space to do justice to these developments here" (p. 5). Therefore, a contribution to psychology would be a paper that provides a review of advances in Likert scale development since classic tutorials were published. This paper would not be a general tutorial in scale development like Clark and Watson (1995, 2019), Hinkin (1998), or others. Instead, it would focus on more recent advances and serve as a complement to these broader tutorials.
The present paper seeks to serve as such a resource by reviewing developments in Likert scale creation from the past 25 years. However, given that scale development is such an extensive topic, the limitations of this review should be made very explicit. The first limitations are with regard to scope. This is not a review of psychometrics, which would be impossibly broad, or of advances in self-report in general, which would also be unwieldy (e.g., including measurement techniques like implicit measures and forced choice scales). This is a review of the initial development and validation of self-report Likert scales. Therefore, we also excluded measurement topics related to the use of self-report scales, like identifying and controlling for response biases. Although this scope obviously omits many important aspects of measurement, it was necessary to do the review.
Importantly, like Clark and Watson (1995, 2019) and Hinkin (1998), this paper was written at the level of the general psychologist, not the methodologist, in order to benefit the field of psychology most broadly. This also meant that our scope was to find articles that were broad enough to apply to most cases of Likert scale development. As a result, we omitted articles, for example, that only discussed measuring certain types of constructs [e.g., Haynes and Lench's (2003) paper on the incremental validation of new clinical measures].
The second major limitation concerns its objectivity. Performing any review of what is "significant" requires, at a point, making subjective judgment calls. The majority of the papers we reviewed were fairly easy to decide on. For example, we included Simms et al. (2019) because they tackled a major Likert scale issue: the ideal number of response options (as well as the comparative performance of visual analog scales). By contrast, we excluded Permut et al. (2019) because their advance was about monitoring the attention of subjects taking surveys online, not about scale development per se. However, other papers were more difficult to decide on. Our method of handling this ambiguity is described below, but we do not claim that subjectivity played no part in the review process.
Additionally, (a) we did not survey every single journal where advances may have been published and (b) articles published after 2019 were not included. Despite all these limitations, this review was still worth performing. Self-report Likert scales are an incredibly dominant source of data in psychology and the social sciences in general. The divide between methodological and substantive literatures, and between methodologists and substantive researchers (Sharpe, 2013), can increase over time, but it can also be reduced by good communication and dissemination (Sharpe, 2013). The current review is our attempt to bridge, in part, that gap.
To conduct this review, we examined every issue of six major journals related to psychological measurement from January 1995 to December 2019 (inclusive), screening out articles by either title and/or abstract. The full text of any potentially relevant article was reviewed by either the first or second author, and any borderline cases were discussed until a consensus was reached. A PRISMA flowchart of the process is shown in Figure 1. The journals we surveyed were: Applied Psychological Measurement, Psychological Assessment, Educational and Psychological Measurement, Psychological Methods, Advances in Methods and Practices in Psychological Science, and Organizational Research Methods. For inclusion, our criteria were that the advance had to be: (a) related to the creation of self-report Likert scales (seven excluded), (b) broad and significant enough for a general psychological audience (23 excluded), and (c) not superseded or encapsulated by newer developments (11 excluded). The advances we included are shown in Table 1, along with a short descriptive summary of each. Scale developers should not feel compelled to use all of these techniques, just those that contribute to better measurement in their context. More specific contexts (e.g., measuring socially sensitive constructs) can utilize additional resources.
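The three inclusion criteria amount to a sequential filter over candidate articles. The sketch below is a hypothetical way of keeping such a screening log; the data structure and field names are invented, and the two example verdicts echo the Simms et al. and Permut et al. decisions discussed earlier in this paper.

```python
# Hypothetical screening pass applying the three inclusion criteria (a)-(c);
# each candidate record flags whether it satisfies each criterion.
INCLUSION_CRITERIA = [
    ("likert_creation", "(a) not about self-report Likert scale creation"),
    ("general_audience", "(b) too narrow for a general psychological audience"),
    ("not_superseded", "(c) superseded by newer developments"),
]

def screen(articles):
    """Split candidate articles into included titles and an exclusion log."""
    included, excluded = [], []
    for art in articles:
        reason = next((msg for key, msg in INCLUSION_CRITERIA if not art[key]), None)
        if reason is None:
            included.append(art["title"])
        else:
            excluded.append((art["title"], reason))
    return included, excluded

# Two invented records echoing examples discussed in the text.
candidates = [
    {"title": "Simms et al. (2019)", "likert_creation": True,
     "general_audience": True, "not_superseded": True},
    {"title": "Permut et al. (2019)", "likert_creation": False,
     "general_audience": True, "not_superseded": True},
]
included, excluded = screen(candidates)
```

Keeping an explicit exclusion log of this kind also makes it straightforward to report the per-criterion exclusion counts in a PRISMA flowchart.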
To supplement this literature review, the remainder of the paper provides a more in-depth discussion of five of these advances that span a range of topics. These were chosen due to their importance, uniqueness, or ease-of-use, and lack of general coverage in classic scale creation papers. These are: (1) conceptualizations of construct validity, (2) approaches for creating more precise construct definitions, (3) readability tests for generating items, (4) alternative measures of precision (e.g., coefficient omega), and (5) ant colony optimization (ACO) for creating short forms. These developments are presented in roughly the order of what stage they occur in the process of scale creation, a schematic diagram of which is shown in Figure 2.
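As a taste of one of these advances, readability tests can flag candidate items that are too linguistically demanding for the target population. The sketch below computes the classic Flesch Reading Ease index; the vowel-group syllable counter is only a rough approximation, and real applications would use a dedicated readability library.

```python
import re

def count_syllables(word):
    """Rough syllable estimate: count runs of vowels (including 'y')."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    """Flesch Reading Ease: higher scores indicate easier-to-read text."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    n = len(words)
    return 206.835 - 1.015 * (n / sentences) - 84.6 * (syllables / n)

item = "I feel confident when I speak in front of a group."
print(round(flesch_reading_ease(item), 1))
```

An item scoring very low on such an index might be split or reworded before pretesting, so that item difficulty reflects the construct rather than reading ability.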
CONCEPTUALIZING CONSTRUCT VALIDITY
Two Views of Validity
Psychologists recognize validity as the fundamental concept of psychometrics and one of the most critical aspects of psychological science (Hood, 2009; Cizek, 2012). However, what is "validity"? Despite the widespread agreement about its importance, there is disagreement about how validity should be defined (Newton and Shaw, 2013). In particular, there are two divergent perspectives on the definition. The first major perspective defines validity not as a property of tests but as a property of the interpretations of test scores (Messick, 1989; Kane, 1992). This view can therefore be called the interpretation camp (Hood, 2009) or validity as construct validity (Cronbach and Meehl, 1955), which is the perspective endorsed by Clark and Watson (1995, 2019) and the standards set forth by governing agencies for the North American educational and psychological measurement supracommunity (Newton and Shaw, 2013). Construct validity is based on a synthesis and analysis of the evidence that supports a certain interpretation of test scores, so validity is a property of interpretive inferences about test scores (Messick, 1989, p. 13), especially interpreting score meaning (Messick, 1989, p. 17). Because the context of measurement affects test scores (Messick, 1989, pp. 14-15), the results of any validation effort are conditional upon the context in which, and the group characteristics of the sample with which, the studies were done, as are claims of validity drawn from these empirical results (Newton, 2012; Newton and Shaw, 2013).
The other major perspective (Borsboom et al., 2004) revivifies one of the oldest and most intuitive definitions of validity: ". . .whether or not a test measures what it purports to measure" (Kelley, 1927, p. 14). In other words, on this view, validity is a property of tests rather than interpretations. Validity is simply whether or not the statement "test X measures attribute Y" is true. To be true, it requires (a) that Y exists and (b) that variations in Y cause variations in X (Borsboom et al., 2004). This definition can be called the test validity view and finds ample precedent in psychometric texts (Hood, 2009). Ultimately, this disagreement does not show any signs of resolving, and interested readers can consult papers that have attempted to integrate or adjudicate between the two views (Lissitz and Samuelson, 2007; Hood, 2009; Cizek, 2012).
There Aren't "Types" of Validity; Validity Is "One" Even though there are stark differences between these two definitions of validity, one thing they do agree on is that there are not different "types" of validity (Newton and Shaw, 2013). Language like "content validity" and "criterion-related validity" is misleading because it implies that their typical analytic procedures produce empirical evidence that does not bear on the central inference of interpreting the score's meaning (i.e., construct validity; Messick, 1989, pp. 13-14, 17, 19-21). Rather, there is only (construct) validity, and different validation procedures and types of evidence all contribute to making inferences about score meaning (Messick, 1980;Binning and Barrett, 1989;Borsboom et al., 2004).
Despite the agreement that validity is a unitary concept, psychologists seem to disagree in practice; as of 2013, there were 122 distinct subtypes of validity (Newton and Shaw, 2013), many of them coined after the fourth edition of the Standards stated that validity-type language was inappropriate (American Educational Research Association et al., 1985). A consequence of speaking this way is that it perpetuates the view (a) that there are independent "types" of validity (b) that entail different analytic procedures to (c) produce corresponding types of evidence that (d) themselves correspond to different categories of inference (Messick, 1989). This is why even speaking of content, construct, and criterion-related "analyses" (e.g., Lawshe, 1985; Landy, 1986; Binning and Barrett, 1989) can be problematic, since this misleads researchers into thinking that these produce distinct kinds of empirical evidence that have a direct, one-to-one correspondence to the three broad categories of inferences with which they are typically associated (Messick, 1989).
However, an analytic procedure traditionally associated with a certain "type" of validity can be used to produce empirical evidence for another "type" of validity not typically associated with it. For instance, showing that the focal construct is empirically discriminable from similar constructs would constitute strong evidence for the inference of discriminability (Messick, 1989). However, the researcher could use analyses typically associated with "criterion and incremental validity" (Sechrest, 1963) to investigate discriminability as well (e.g., Credé et al., 2017). Thus, the key takeaway is to think not of "discriminant validity" or distinct "types" of validity, but to use a wide variety of research designs and statistical analyses to potentially provide evidence that may or may not support a given inference under investigation (e.g., discriminability). This demonstrates that thinking about validity "types" can be unnecessarily restrictive because it misleads researchers into thinking about validity as a fragmented concept (Newton and Shaw, 2013), leading to negative downstream consequences in validation practice.
Ensuring Concept Clarity
Defining the construct one is interested in measuring is a foundational part of scale development; failing to do so properly undermines every scientific activity that follows (E. L. Thorndike, 1904; Kelley, 1927; Mackenzie, 2003; Podsakoff et al., 2016). However, there are lingering issues with conceptual clarity in the social sciences. Locke (2012) noted that "As someone who has been reviewing journal articles for more than 30 years, I estimate that about 90% of the submissions I get suffer from problems of conceptual clarity" (p. 146), and Podsakoff et al. (2016) stated that "it is ... obvious that the problem of inadequate conceptual definitions remains an issue for scholars in the organizational, behavioral, and social sciences" (p. 160). To support this effort, we surveyed key papers on construct clarity and integrated their recommendations into Table 2, adding our own comments where appropriate. We cluster this advice into three "aspects" of formulating a construct definition, each of which contains several specific strategies.
Specifying the Latent Continuum
In addition to clearly articulating the concept, there are other parts to defining a psychological construct for empirical measurement. Another recent development demonstrates the importance of incorporating the latent continuum in measurement (Tay and Jebb, 2018). Briefly, many psychological concepts like emotion and self-esteem are conceived as having degrees of magnitude (e.g., "low," "moderate," and "high"), and these degrees can be represented by a construct continuum. The continuum was originally a primary focus in early psychological measurement, but the advent of convenient Likert(-type) scaling (Likert, 1932) pushed it into the background.
However, defining the characteristics of this continuum is needed for proper measurement. For instance, what do the poles (i.e., endpoints) of the construct represent? Is the lower pole its absence, or is it the presence of an opposing construct (i.e., a unipolar or bipolar continuum)? And, what do the different continuum degrees actually represent? If the construct is a positive emotion, do they represent the intensity of experience or the frequency of experience? Quite often, scale developers do not define these aspects but leave them implicit. Tay and Jebb (2018) discuss different problems that can arise from this.
In addition to defining the continuum, there is also the practical issue of fully operationalizing the continuum (Tay and Jebb, 2018). This involves ensuring that the whole continuum is well-represented when creating items. It also means being mindful when including reverse-worded items in their scales. These items may measure an opposite construct, which is desirable if the construct is bipolar (e.g., positive emotions as including happy and sad), but contaminates measurement if the construct is unipolar (e.g., positive emotions as only including feeling happy). Finally, developers should choose a response format that aligns with whether the continuum has been specified as unipolar or bipolar. For example, the numerical rating of 0-4 typically implies a unipolar scale to the respondent, whereas a −3-to-3 response scale implies a bipolar scale. Verbal labels like "Not at all" to "Extremely" imply unipolarity, whereas formats like "Strongly disagree" to "Strongly agree" imply bipolarity. Tay and Jebb (2018) also discuss operationalizing the continuum with regard to two other issues, assessing dimensionality of the scale and assuming the correct response process.
READABILITY TESTS FOR ITEMS
The current psychometric practice is to keep item statements short and simple with language that is familiar to the target respondents (Hinkin, 1998). Instructions like these alleviate readability problems because psychologists are usually good at identifying and revising difficult items. However, professional psychologists also have a much higher degree of education compared to the rest of the population. In the United States, less than 2% of adults have doctorates, and a majority do not have a degree past high school (U.S. Census Bureau, 2014). The average United States adult has an estimated 8th-grade reading level, with 20% of adults falling below a 5th-grade level (Doak et al., 1998). Researchers can probably catch and remove scale items that are extremely verbose (e.g., "I am garrulous"), but items that might not be easily understood by target respondents may
Modern readability measures (Peter et al., 2018): Two newer readability tools can supplement traditional tests for scale items. First, Coh-Metrix computes a syntactic simplicity score based on multiple variables (e.g., clauses within sentences, conditionals, negations). Second, the Question Understanding Aid (QUAID) was designed specifically to examine the readability of survey instruments and can identify potential issues like vague wording, jargon, and working memory overload. Both are freely available at websites listed in the paper.
Respondent comprehension (Hardy and Ford, 2014): Good survey data require that respondents interpret the survey items as the scale developer intended. However, the authors describe how both (a) specific words and (b) the sentences in items can contribute to respondent miscomprehension. The authors provide evidence for this in popular scales and then discuss remedies, such as reducing words and phrases with multiple or vague meanings and collecting qualitative data from respondents about their interpretations of items.
Number of response options and labels (Weng, 2004; Simms et al., 2019): Examining the Big Five Inventory, Simms et al. (2019) found that more Likert response options resulted in higher internal consistency and test-retest reliability (but not convergent validity). These benefits stopped after six response options, and 0-1,000 visual analog scales did not show benefits, either. Including (or removing) a middle point (e.g., "neither agree nor disagree") did not show any psychometric effects. Weng (2004) also found higher internal consistency and test-retest reliability when all response options had labels compared to when only the endpoints of the scale had labels.
Item format (Zhang and Savalei, 2016): The authors further research on the expanded scale format as a way to gain the benefit of including reverse-worded items (i.e., controlling for acquiescence bias) in a scale without the common downside (i.e., introducing method variance into scores, leading to method factor emergence). Each Likert-type item has its response options turned into a set of statements; respondents select one statement from each set.
Item stability (Knowles and Condon, 2000): The stability of item properties should not be assumed when an item is placed in different testing contexts. There are available methods from classical test theory, factor analysis, and item response theory to examine the stability of items when applied to new conditions or test revisions.
Presentation of items in blocks (Weijters et al., 2014): When putting a survey together, there are many ways to present the scale items. For instance, items from different scales can all be randomized and presented in the same block, or each scale can be given its own block. The authors showed the effects of splitting a unidimensional scale into two blocks with other scales administered in between. Scale items in different blocks had lower intercorrelations, and two factors emerged that corresponded to the two blocks. The authors recommend that assessments of discriminant validity be mindful of scale presentation and that how scales are presented in surveys be consistently reported.
Content validation
Guidelines for reporting (Colquitt et al., 2019): Two common methods for content validation are reviewed and compared: Anderson and Gerbing (1991) and Hinkin and Tracey (1999). Both approaches ask subjects to rate how well each proposed item matches the construct definition, as well as the definitions of similar constructs. The authors also offer several new statistics for indexing content validity, provide standards for conducting content validation (e.g., participant instructions, scale anchors), and norms for evaluating these statistics.
Guidelines for assessment (Haynes et al., 1995): Provides an overview of content validation and its issues (e.g., how content validity can change over time if the construct changes). The authors also provide guidelines for assessing content validity, such as using multiple judges of scales, examining the proportionality of item content in scales, and using subsequent psychometric analyses to indicate the degree of evidence for content coverage.
Frontiers in Psychology | www.frontiersin.org
Vogt et al. (2004): Communicating with the target population is valuable in content validation but is rarely done. One method to do this is to use focus groups, moderator-facilitated discussions that generate qualitative data. This technique can (a) identify the important areas of a construct's domain, (b) identify appropriate wordings for items, and (c) corroborate or revise conceptualization of the target construct.
Analyzing rating/matching data as item similarity data (Li and Sireci, 2013): The authors argue that, compared to traditional content validation rating/matching data, item similarity ratings (a) are less affected by social desirability and expectancy biases because no content categories are offered and (b) can provide more information about how items group together in multidimensional space. However, having subject matter experts engage in pairwise item similarity comparisons is labor-intensive. The authors offer an innovative method of dummy coding traditional content validation rating/matching data to essentially derive item similarity data, which is conducive to multidimensional scaling.
Conducting pilot studies
Sample size considerations (Johanson and Brooks, 2010): Provides a cost-benefit analysis of increasing sample size relative to decreasing confidence intervals in correlation, proportion, and internal consistency estimates (i.e., coefficient alpha). Found that most reductions in confidence intervals occurred at sample sizes between 24 and 36.
Measurement precision
Limits of reliability coefficients (Cronbach and Shavelson, 2004): Although coefficient alpha is the most widely used index of measurement precision, the authors argue that any single coefficient is a crude marker that lacks the nuance necessary to support interpretations in current assessment practice. Instead, they detail a reliability analysis approach whereby observed score variance is decomposed into population (or true score), item, and residual variance, the latter two of which comprise error variance. The authors argue that the standard error of measurement should be reported along with all variance components rather than a coefficient. Given that testing applications often use cut scores, the standard error of measurement offers an intuitive understanding to all stakeholders regarding the precision of each score when making decisions based on absolute rather than comparative standing.
Omega/alternatives to alpha: See section 4, "Alternative Estimates of Measurement Precision"; key paper: McNeish (2018).
Zhang and Yuan (2016): Both coefficient alpha and omega are often estimated using a sample covariance matrix, and traditional estimation methods are likely biased by outliers and missing observations in the data. The authors offer a software package in the R statistical computing language that allows for estimates of both alpha and omega that are robust against outliers and missing data.
Confidence intervals (Kelley and Pornprasertmanit, 2016): Because psychologists are interested in the reliability of the population, not just the sample, estimates should be accompanied by confidence intervals. The authors review the many methods for computing these confidence intervals and run simulations comparing their efficacies. Ultimately, they recommend using hierarchical omega as a reliability estimator and bootstrapped confidence intervals, all of which can be computed in R using the ci.reliability() function of the MBESS package (Kelley, 2016).
IRT information: See section 4, "Alternative Estimates of Measurement Precision"; key paper: Reise et al. (2005).
Controlling for transient error (Green, 2003; Schmidt et al., 2003): Whereas random response error comes from factors that vary moment-to-moment (e.g., variations in attention), transient errors come from factors that differ only across testing occasions (e.g., mood). Because coefficient alpha is computed from a single time point, it cannot correct for transient error and may overestimate reliability. Both articles provide an alternative reliability statistic that controls for transient error: test-retest alpha (Green, 2003) and the coefficient of equivalence and stability (Schmidt et al., 2003).
Test-retest reliability (DeSimone, 2015): Test-retest correlations between scale scores are limited for assessing temporal stability. The author introduces several new statistical approaches: (a) computing test-retest correlations among individual scale items, (b) comparing the stability of interitem correlations (SRMR_TC) and component loadings (CL_TC), and (c) assessing the scale instability that is due to respondents (D2_pct) rather than the scale itself.
Barchard (2012): Test-retest correlations do not capture absolute agreement between scores and can mislead about consistency. The author discusses several statistics for test-retest reliability based on absolute agreement: the root mean square difference [RMSD(A,1)] and the concordance correlation coefficient [CCC(A,1)]. These measures are used in other scientific fields (e.g., biology, genetics) but not in psychology, and a supplemental Excel sheet for calculation is provided.
Item-level reliability (Zijlmans et al., 2018): Reliability is typically calculated for entire scales but can also be computed for individual items. This can help identify unreliable items for removal. The authors investigate four methods for calculating item-level reliability and find that the correction-for-attenuation and Molenaar-Sijtsma methods performed best, estimating item reliability with very little bias and a reasonable amount of variability.
(2019): The authors provide a timely review of the issues and "pitfalls" in current factor analysis practices in psychology. Guidance is provided for (a) selecting proper indicators (e.g., analyzing item distributions, parceling), (b) estimation (e.g., alternatives to maximum likelihood), and (c) model evaluation and comparison. The authors conclude with a discussion of two alternatives to traditional factor analysis: exploratory structural equation modeling and bifactor modeling.
Exploratory factor analysis (Henson and Roberts, 2006): The authors briefly review four main decisions to be made when conducting exploratory factor analysis. They then offer seven best-practice recommendations for reporting how an exploratory factor analysis was conducted, after reviewing reporting deficiencies found in four journals.
Exploratory factor analysis for scale revision (Reise et al., 2000): The authors provide guidance on EFA procedures when revising a scale. Specifically, they offer guidance on (a) introducing new items, (b) sample selection, (c) factor extraction, (d) factor rotation, and (e) evaluating the revised scale. However, researchers first need to articulate why the revision is needed and pinpoint where the construct resides in the conceptual hierarchy.
Cluster analysis for dimensionality (Cooksey and Soutar, 2006): The authors revive Revelle's (1978) ICLUST clustering technique as a way to explore the dimensional structure of scale items. The end product is a tree-like graphic that represents the relations among the scale items. The authors claim this method is useful compared to alternatives (e.g., tables of factor loadings).
Unidimensionality (Raykov and Pohl, 2013): Some measures may not demonstrate unidimensionality when assessed by fitting a one-factor model to the data due to method or substantive specific factors. This article aims to offer a way to estimate how much of the observed variance in the overall instrument is predominantly explained by a common factor and can thus be treated as essentially homogeneous. Mplus and R code are provided to create point and interval estimates for variance explained by both common and specific factors and to calculate the difference of these proportions.
Ferrando and Lorenzo-Seva (2019): Measures are often intended to be unidimensional, but obtained data are found to be better described by multiple correlated factors (or vice versa). Standard goodness-of-fit assessments (a) are arguably insufficient to adjudicate on which solution is most accurate and (b) only use internal (i.e., item score) information. The authors propose the idea of using external variables (e.g., criteria) to provide evidence for unidimensionality. A procedure is described to derive (a) primary factor score estimates and then (b) a second-order factor score estimate, and finally (c) criteria are regressed on them. Lack of differential or incremental prediction of criteria by primary factor score estimates beyond second-order factor score estimates would suggest evidence for unidimensionality.
Ferrando and Lorenzo-Seva (2018): The authors introduce a program to allow determination of construct replicability, degree of factor indeterminacy, reliability of factor score estimates, and explained common variance as an index of unidimensionality. In turn, this has implications for deriving individual scores (i.e., factor score estimates) using exploratory rather than confirmatory factor analysis, the latter of which they argue has the unrealistic assumption of simple structure.
(2005): Including both positively- and negatively-worded items in scales is often done but can produce artifactual factors in dimensionality assessments. The authors show that items with more extreme wording (e.g., "I'm always optimistic about the future" vs. "I'm usually optimistic about the future") can result in greater multidimensionality for the same target construct. The authors recommend that scale developers exercise awareness of these issues and provide recommendations.
Incremental validation (Smith et al., 2003): The authors discuss five principles of incremental validation pertinent to scale construction: "(a) careful, precise articulation of each element or facet within the content domain; (b) reliable measurement of each facet through use of multiple, alternate-form items; (c) examination of incremental validity at the facet level rather than the broad construct level; (d) use of items that represent single facets rather than combinations of facets; and (e) empirical examination of whether there is a broad construct or a combination of separate constructs" (p. 467).
Hunsley and Meyer (2003): The authors review theoretical, design, and statistical issues when conducting incremental validation. Of key importance is the choice of criterion. The criterion should be reliable, and researchers should also be wary of the variety of methodological artifacts that can influence incremental validation results (e.g., criterion contamination, "source overlap").
slip through the item creation process. Social science samples frequently consist of university students (Henrich et al., 2010), but this subpopulation has a higher reading level than the general population (Baer et al., 2006), and issues that would manifest for other respondents might not be evident when using such samples. In addition to asking respondents directly (see Parrigon et al., 2017 for an example), another way to assess readability is to use readability tests, which have already been used by scale developers in psychology (e.g., Lubin et al., 1990; Ravens-Sieberer et al., 2014). Readability tests are formulas that score the readability of some piece of writing, often as a function of the number of words per sentence and the number of syllables per word. These tests take only seconds to implement and can serve as an additional way to check item language beyond the intuitions of scale developers. When these tests are used, scale items should be analyzed individually, as testing the readability of the whole scale together can hide one or more difficult items. If an item receives a low readability score, the developer can revise the item.
There are many different readability tests available, such as the Flesch Reading Ease test, the Flesch-Kincaid Grade Level test, the Gunning fog index, the SMOG index, the Automated Readability Index, and the Coleman-Liau Index. These operate in much the same way, outputting an estimated grade level based on sentence and word length.
We reviewed their formulas and reviews on the topic (e.g., Benjamin, 2012). At the outset, we state that no statistic is unequivocally superior to all the others, and it is possible to implement several tests and compare the results. However, we recommend the Flesch-Kincaid Grade Level test because it (a) is among the most commonly used, (b) is expressed in grade-school levels, and (c) is easily implemented in Microsoft Word. The score indicates the United States grade level for which the text's readability is suited. Given average reading grade levels in the United States, researchers can aim for a readability score of 8.0 or below for their items. There are several examples of scale developers using this reading test. Lubin et al. (1990) found that 80% of the Depression Adjective Check Lists was at an eighth-grade reading level. Ravens-Sieberer et al. (2014) used the test to check whether a measure of subjective well-being was suitable for children. As our own exercise, we took three recent instances of scale development in the Journal of Applied Psychology and ran readability tests on their items. This analysis is presented in the Supplementary Material.
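To make the formula concrete, here is a minimal Python sketch of the Flesch-Kincaid Grade Level computation (0.39 × words-per-sentence + 11.8 × syllables-per-word − 15.59). The syllable counter is a crude vowel-group heuristic of our own, not part of the official formula, so scores may differ slightly from tools like Microsoft Word.

```python
import re

def count_syllables(word):
    # Heuristic: count groups of consecutive vowels, dropping a silent final "e".
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def fk_grade(text):
    # Flesch-Kincaid Grade Level: 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words)
            - 15.59)
```

Applied to single items, a verbose item such as "I am garrulous." scores at a higher estimated grade level than a plain-language equivalent such as "I talk a lot."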
Alpha and Omega
A major focus of scale development is demonstrating the scale's reliability, defined formally as the proportion of true score variance to total score variance (Lord and Novick, 1968). The most common estimator of reliability in psychology is coefficient alpha (Cronbach, 1951). However, alpha is sometimes a less-than-ideal measure because it assumes that all scale items have the same true score variance (Novick and Lewis, 1967; Sijtsma, 2009; Dunn et al., 2014; McNeish, 2018). Put in terms of latent variable modeling, this means that alpha estimates true reliability only if the factor loadings across items are the same (Graham, 2006), something that is "rare for psychological scales" (Dunn et al., 2014, p. 409). Violating this assumption makes alpha underestimate true reliability. Often, this underestimation may be small, but it will increase for scales with fewer items and with greater differences in population factor loadings (Raykov, 1997; Graham, 2006).
A proposed solution is to relax this assumption and adopt the less stringent congeneric model of measurement. The most prominent estimator in this group is coefficient omega (McDonald, 1999), which uses a factor model to obtain reliability estimates. Importantly, omega performs at least as well as alpha if alpha's assumptions hold (Zinbarg et al., 2005). However, one caveat is that the estimator requires a good-fitting factor model for estimation. Omega and its confidence interval can be computed with the psych package in R (for unidimensional scales, the "omega.tot" statistic from the "omega" function; Revelle, 2008). McNeish (2018) provides a software tutorial in R and Excel [see also Dunn et al. (2014) and Revelle and Condon (2019)].
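To make the contrast concrete, here is a small sketch (not the psych or MBESS implementations) that computes coefficient alpha from raw item scores and omega total from the parameters of a fitted one-factor model; in practice the loadings and uniquenesses would come from factor analysis software.

```python
import numpy as np

def cronbach_alpha(X):
    # X: respondents x items matrix of item scores.
    # alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)
    X = np.asarray(X, dtype=float)
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1)
    total_var = X.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def mcdonald_omega(loadings, uniquenesses):
    # Omega total for a unidimensional factor model:
    # (sum of loadings)^2 / ((sum of loadings)^2 + sum of uniquenesses)
    lam2 = np.sum(loadings) ** 2
    return lam2 / (lam2 + np.sum(uniquenesses))
```

When all items load equally on the factor (the assumption underlying alpha), the two estimates coincide; as loadings diverge, alpha drifts below omega.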
Reliability vs. IRT Information
Alpha, omega, and other reliability estimators stem from the classical test theory paradigm of measurement, where the focus is on the overall reliability of the psychological scale. The other measurement paradigm, item response theory (IRT), focuses on the "reliability" of the scale at a given level of the latent trait or at the level of the item (DeMars, 2010). In IRT, this is operationalized as information_IRT (Mellenbergh, 1996). Although they are analogous concepts, information_IRT and reliability are different. There are two uses of the word "information" in this section: the formal IRT statistic and the general, everyday sense of the word ("We don't have enough information."). For the technical term, we will use information_IRT; the latter we will leave simply as "information." Whereas traditional reliability is assessed only at the scale level, information_IRT can be assessed at three levels: the response category, the item, and the test. Information_IRT is a full mathematical function that shows how precision changes across latent trait levels. These features translate into several advantages for the scale developer.
First, items can be evaluated for how much precision they have. Items that are not informative can be eliminated in favor of items that are (for a tutorial, see Edelen and Reeve, 2007). Second, the test information function shows how precisely the full scale measures each region of the latent trait. If a certain region is deficient, items can be added to better capture that region (or removed, if the region has been measured enough). Finally, suppose the scale developer is only interested in measuring a certain region of the latent trait range, such as middle performers or high and low performers. In that case, information_IRT can help them do so. Further details are provided in the Supplementary Material.

Aspect: Consider the construct
Strategies:
1. Think about the essence of the construct. Clear scientific definitions stem from a clear personal understanding of what the concept is. Social and psychological constructs are notoriously difficult to define (e.g., "justice," "terrorism," and "pornography"). Therefore, researchers must think carefully about answering, "What is this phenomenon? What is its essence, its inherent nature?" It is these questions that a definition answers.
2. Bring the construct back to reality. A useful question for increasing clarity is, "Where does this construct concretely manifest?" Psychological constructs are abstract, but they typically manifest in some concrete way. These "concretes" are often (a) behaviors, (b) feelings, or (c) cognitions. Analyzing these concretes sheds light on the essence of the construct. For example, the psychological construct "spousal support" is abstract. However, some of its concretes would be listening to one's partner or taking care of a household errand, unasked. Analyzing these (and others) can shed valuable insight into the construct's meaning.
3. Think about what the construct is not. A definition states what something is, and this can be clarified by better understanding what it is not. Psychologists can, therefore, identify opposing constructs to clarify the meaning of the target construct. For example, exploring what a "lack of spousal support" means (e.g., dismissing the feelings of the partner, failing to help in tasks) can accurately reveal the essence of support.
4. Compare the construct to similar constructs. To figure out what makes a construct unique, it is also helpful to look at similar constructs. It is easy to state how a construct is different from very different ones (e.g., spousal support from life satisfaction). Doing the same with a similar construct is more difficult but also more fruitful. Identifying this point of difference will illuminate the subtleties specific to the target. For instance, how is spousal support differentiated from support by a friend? Answering this question is important and helps create theoretical precision in one's definition.

Aspect: Create a formal definition
Strategies:
1. Use simple language. Published definitions will be aimed at a scientific audience. However, more complexity and jargon are not necessarily better and can actually be counterproductive to communicating the construct. A useful exercise is to try to create a definition that is as simple as linguistically possible. Much about the target construct can be learned by reducing the language to its simplest form.
2. Define any necessary subconcepts. Relying on other concepts for one's definition is often unavoidable. However, it is important to be clear about what the subconcepts mean. For example, a hypothetical definition of "ambition" could be "the proactive drive to enhance the self." However, what is a "proactive drive"? And what does it mean to "enhance the self"? This definition demonstrates that having subconcepts can lead to a lack of clarity when they are not well-defined. Therefore, any subconcepts in a definition must themselves be well-understood, or else a lack of clarity is perpetuated.
3. Consider the definition's genus and differentia. Definitions have two parts. The first part specifies the concept as a member of a larger class. This is the "genus" and serves to ground the construct in prior knowledge. The second part is called the "differentia" and specifies what about the concept is new and distinguished from other members of its class. For example, "spousal support" could be defined as "the aid and emotional care provided to one's spouse." In this case, the genus is "aid and emotional care," because this is general behavior, and the differentia is "provided to one's spouse," which sets it apart from other forms of support (e.g., friend or co-worker support). Identifying the genus and differentia in one's working construct definition is a useful way to dissect one's construct definition.
4. Keep them short. Preferably, construct definitions should seek to state only the construct's essential nature and be relatively short. Scholars should be mindful of the distinction between (a) its essential nature and (b) its secondary properties. A definition is focused on the former.

Aspect: Consult alternative opinions on the definition
Strategies:
1. Consult dictionaries. Dictionaries provide lay, rather than scientific, definitions. It can be beneficial to consult these because they will use more straightforward language.
2. Review scientific literatures. Often, the same (or a similar) construct may be found in multiple literatures. For example, the self-esteem construct can be found in education and psychology. These definitions likely overlap. Where they do overlap can indicate what the construct has as an essential component, and where they do not can point to what a particular definition may be missing.
3. Consult subject-matter experts, key informants, and/or practitioners. People familiar or well-studied with the construct can provide key insight into its nature and allow refinement of one's working definition. This insight can be gained by a variety of methods, such as interviews, gathering retrospective case studies, focus groups, and other qualitative methods. Because many psychological constructs are colloquial concepts (e.g., "spousal support," "ambition," "justice"), in many cases the average layperson can be a key informant. However, this may not be true for more specialized constructs (e.g., clinical constructs).
4. Enlist feedback from academic peers. Perspectives from colleagues who do not study that construct can be highly useful because they may see alternatives to the standard thinking about the construct.

The advice in this table was taken from Mackenzie (2003), Locke (2012), and Podsakoff et al. (2016).
MAXIMIZING VALIDITY IN SHORT FORMS USING ANT COLONY OPTIMIZATION
Increasingly, psychologists wish to use short scales in their work (Leite et al., 2008), as they reduce respondent time, fatigue, and required financial compensation. (One important distinction is between short scales and short forms. Short forms are a type of short scale, but, of course, not all short scales were taken from a larger measure. In this section, we are concerned only with the process of developing a short form from an original scale.) To date, the most common approaches aim to maintain reliability (Leite et al., 2008; Kruyen et al., 2013) and include retaining items with the highest factor loadings and item-total correlations. However, these strategies can incidentally impair measurement (Janssen et al., 2015; Olaru et al., 2015; Schroeders et al., 2016), as items with higher intercorrelations will usually have more similar content, resulting in narrower scale content (i.e., the attenuation paradox; Loevinger, 1954).

A more recent method for constructing short forms is a computational algorithm called ant colony optimization (ACO; Dorigo, 1992; Dorigo and Stützle, 2004). Instead of just maximizing reliability, this method can incorporate any number of evaluative criteria, such as associations with other variables, factor model fit, and others. When reducing a Big 5 personality scale, Olaru et al. (2015) found that, for a mixture of criteria (e.g., CFA fit indices, latent correlations), ACO either equaled or surpassed the alternative methods for creating short forms, such as maximizing factor loadings, minimizing modification indices, a genetic algorithm, and the PURIFY algorithm (see also Schroeders et al., 2016). Since ACO was introduced to psychology, it has been used in the creation of real psychological scales for proactive personality and supervisor support (Janssen et al., 2015), psychological situational characteristics (Parrigon et al., 2017), and others (Olderbak et al., 2015).
The logic of ACO comes from how ants resolve the problem of determining the shortest path to their hive when they find food (Deneubourg et al., 1983). The ants solve it by (a) randomly sampling different paths toward the food and (b) laying down chemical pheromones that attract other ants. The paths that provide quicker solutions acquire pheromones more rapidly, attracting more ants, and thus more pheromone. Ultimately, a positive feedback loop is created until the ants converge on the best path (the solution).
The ACO algorithm works similarly. When creating a short form of N items, ACO first randomly samples N items from the full scale (the N "paths"). Next, the performance of that short form is evaluated by one or more statistical measures, such as the association with another variable, reliability, and/or factor model fit. Based on these measures, if the sampled items performed well, their probability weight is increased (the amount of "pheromone"). Over repeated iterations, the items that led to good performance will become increasingly weighted for selection, creating a positive feedback loop that eventually converges to a final solution. Thus, ACO, like the ants, does not search and test all possible solutions. Instead, it uses some criterion for evaluating the items and then uses this to update the probability of selecting those items.
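The sampling-and-reinforcement loop described above can be illustrated with a deliberately minimal sketch (this is not the algorithm of any cited package; the simulated data and parameter choices are invented). It selects a 5-item short form that maximizes the correlation of the sum score with an external criterion:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated data: 20 items, of which the first 8 carry a common factor
# that also drives an external criterion (all values are illustrative).
n, k = 500, 20
factor = rng.normal(size=n)
items = rng.normal(size=(n, k))
items[:, :8] += factor[:, None]
criterion = factor + rng.normal(scale=0.5, size=n)

def aco_short_form(items, criterion, n_items=5, n_ants=30, n_iter=50,
                   evaporation=0.95, rng=rng):
    """Pick n_items maximizing |r| between the sum score and the criterion."""
    k = items.shape[1]
    pheromone = np.ones(k)            # per-item selection weights
    best_r, best_set = -np.inf, None
    for _ in range(n_iter):
        pheromone *= evaporation      # old trails fade over iterations
        for _ in range(n_ants):
            p = pheromone / pheromone.sum()
            chosen = rng.choice(k, size=n_items, replace=False, p=p)
            score = items[:, chosen].sum(axis=1)
            r = abs(np.corrcoef(score, criterion)[0, 1])
            pheromone[chosen] += r    # reinforce in proportion to fit
            if r > best_r:
                best_r, best_set = r, sorted(chosen)
    return best_set, best_r

subset, r = aco_short_form(items, criterion)
print(subset, round(r, 3))
```

Because only the items that appear in well-performing subsets accumulate weight, the selection probabilities converge toward the criterion-relevant items, mirroring the pheromone feedback loop of the ants.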
ACO is an automated procedure, but this does not mean that researchers should accept its results automatically. Foremost, ACO does not guarantee that the shortened scale has satisfactory content (Kruyen et al., 2013). Therefore, the items that comprise the final scale should always be examined to see if their content is sufficient.
We also strongly recommend that authors using ACO be explicit about the specifications of the algorithm. Authors should always report (a) what criteria they are using to evaluate short form performance and (b) how these are mathematically translated into pheromone weights. Authors should also report all the other relevant details of conducting the algorithm (e.g., the software package, the number of total iterations). In the Supplementary Material, we provide further details and a full R software walkthrough. For more information, the reader can consult additional resources (Marcoulides and Drezner, 2003;Leite et al., 2008;Janssen et al., 2015;Olaru et al., 2015;Schroeders et al., 2016).
DISCUSSION
Measurement in psychology comes in many forms, and for many constructs, one of the best methods is the psychological Likert scale. A recent review suggests that, in the span of just a few years, dozens of scales have been added to the psychological science literature (Colquitt et al., 2019). Thus, psychologists must have a clear understanding of the proper theory and procedures for scale creation. The present article aims to increase this clarity by offering a selective review of Likert scale development advances over the past 25 years. Classic papers delineating the process of Likert scale development have proven immensely useful to the field (Clark and Watson, 1995, 2019; Hinkin, 1998), but it is difficult to do justice to this whole topic in a single paper, especially as methodological developments accumulate.
Though this paper reviewed past work, we end with some notes about the future. As methods progress, they become more sophisticated, but sophistication should not be mistaken for accuracy. This applies even to some of the techniques discussed here, such as ACO, which has crucial limitations (e.g., it depends on what predicted external variable is chosen and requires a subjective examination of sufficient content).
Second, we are concerned with the problem of construct proliferation, as are other social scientists (e.g., Shaffer et al., 2016;Colquitt et al., 2019). Solutions to this problem include paying close attention to the constructs that have already been established in the literature, as well as engaging in a critical and honest reflection on whether one's target construct is meaningfully different. In cases of scale development, the developer should provide sufficient arguments for these two criteria: the construct's (a) importance and (b) distinctiveness. Although scholars are quite adept at theoretically distinguishing a "new" construct from a prior one (Harter and Schmidt, 2008), empirical methods should only be enlisted after this has been established.
Finally, as psychological theory progresses, it tends to become more complex. One issue with this increasing complexity is the danger of creating incoherent constructs. Borsboom (2005, p. 33) provides an example of a scale with three items: (1) "I would like to be a military leader," (2) ".10 sqrt(.05 + .05) = . . .," and (3) "I am over six feet tall." Although no common construct exists among these items, the scale can certainly be scored and will probably even be reliable, as the random error variance will be low (Borsboom, 2005). Therefore, measures of such incoherent constructs can display good psychometric properties, and psychologists cannot merely rely on empirical evidence to justify them. Thus, the challenges of scale development, present and future, are equally empirical and theoretical.
AUTHOR CONTRIBUTIONS
LT conceived the idea for the manuscript and provided feedback and editing. AJ conducted most of the literature review and wrote much of the manuscript. VN assisted with the literature review and contributed writing. All authors contributed to the article and approved the submitted version.
"year": 2021,
"sha1": "0060185e0197e1fbbcdda00d5bb3cbbb6916d874",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fpsyg.2021.637547/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0060185e0197e1fbbcdda00d5bb3cbbb6916d874",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
HIV infection and host genetic mutation among injecting drug-users of northeastern states of India.
A community-based cross-sectional study was conducted among injecting drug-users (IDUs) of the northeastern states of India to understand the host genetic factors that confer resistance to HIV infection. The study aimed at assessing the existence and magnitude of genetic mutations of chemokine receptors, such as CCR2-64I, CCR5 D-32, and SDF-1-3'A, that are known to confer resistance to HIV infection and progression of disease in some set-ups. In total, 711 IDUs from Manipur, Mizoram, Nagaland, and Meghalaya were sampled for the study. The selected participants were interviewed to study their sociodemography, risk behaviours, and risk perceptions after obtaining their verbal informed consent. The interview was followed by collection of about 5 mL of blood by an unlinked anonymous method for studying genetic mutation and HIV infection. All the blood samples were transported to and processed at the clinical medicine laboratory of the National Institute of Cholera & Enteric Diseases, Kolkata, India. The genetic mutations were detected by polymerase chain reaction (PCR) and restriction fragment length polymorphism (RFLP) assay techniques. The study revealed that 328 (46.1%) IDUs were aged 20-29 years, 305 (42.9%) were aged 30-39 years, and only two (0.3%) were aged above 49 years. The rate of HIV seropositivity varied widely among the IDUs living in the different northeastern states, ranging from 4.5% to 61%. There was not a single IDU with a homozygous CCR5 mutation. Mutated genes of CCR2-64I and SDF-1-3'A were detected at frequencies of 49% and 23%, respectively. The rate of HIV seropositivity in IDUs with the CCR2 mutant gene was 27% (n=94) and without the mutation was 27% (n=98). Similarly, HIV seropositivity in IDUs with and without the SDF1 mutation was 28% (n=46) and 27% (n=146), respectively. Neither difference was statistically significant.
A homozygous CCR5 mutation is known to be the most prominent marker that confers resistance against HIV infection. The absence of the CCR5 mutant gene in this population suggests that they do not have any additional protection against HIV infection. Analysis also revealed that, although mutation of CCR2 and SDF1 was present in this population, it did not confer any additional resistance against HIV. This indicates that the IDUs of northeastern India are not additionally protected against HIV infection through genetic mutation and are, therefore, vulnerable to acquiring HIV infection due to high-risk behaviour and other related factors.
INTRODUCTION
HIV continues to be an important public-health problem in India since its first detection in 1986. The national-level prevalence of HIV among adults in India is around 0.3%, which gives rise to an estimated 2-3.6 million HIV-infected people in the country (1). The genetic differences between individuals appear to be an essential factor towards protection against HIV infection despite indulgence in high-risk behaviour (2). The risk factors associated with the development of clinical disease and the life-span of HIV infection in individuals are less understood.
Individuals with variants of the genes encoding the chemokine receptors-CCR2 and CCR5-and the ligand SDF1 have been shown to be resistant to HIV-1 infection and progression of disease (3)(4)(5)(6). Of the three genetic markers, presence of homozygous CCR5 D-32 allele appears to be the most important factor that confers resistance against HIV-1 infection, and heterozygous mutation prevents the progression of disease (7). The natural resistance to HIV-1 infection has been described by two mechanisms: one is termed 'exposed-uninfected' and the other one 'long-term non-progressors'. In the former type, individuals are exposed repeatedly over a long period without any manifestations of HIV infection. Such individuals include commercial sex workers, people having unprotected sex with infected partners, infants of HIV-positive mothers, IDUs, haemophiliacs, etc. The latter types of individuals carry HIV virus which either does not progress or progresses at a very slow rate. Homosexuals, IDUs, infants, or children usually fall in this group as described by Marmor et al. (8). Such facts have provided several options for research and development of more and more safe and effective drugs and vaccines to combat HIV infection at a very early stage of its pathogenesis.
Of all the northeastern states of India, Manipur in particular is contributing to the rapid spread of HIV-seropositive cases among IDUs (9). Although sharing of injecting equipment and paraphernalia is the main mode of transmission among IDUs here, sexual transmission is also currently increasing there (10). Transmission of HIV in local IDUs has always been viewed as an interaction between behavioural or cultural practices and presence of circulating HIV in the community. Influence of host factors, particularly genetic susceptibility, as mentioned above, has never been thought of and investigated in the said population. Hence, a study was conducted among IDUs of northeastern India to understand the existence and magnitude of genetic variants encoding CCR5, CCR2, and SDF1 and their relationships with the prevalence of HIV infection among them.
MATERIALS AND METHODS
This community-based cross-sectional study was conducted among 711 IDUs from four northeastern states: Manipur, Nagaland, Mizoram, and Meghalaya. Initially, the purpose of the study was explained to all the participants who were invited to participate voluntarily after obtaining their verbal consent. Ethical clearance was obtained from the ethical committee of the National Institute of Cholera & Enteric Diseases (NICED) before initiation of the study. Two experienced social workers interviewed the study participants using a fieldtested questionnaire to study their sociodemography, risk behaviour, and risk perceptions about HIV infection. The interview was followed by collection of about 5 mL blood sample by an unlinked anonymous method for testing HIV and genetic mutation. All the samples were then transported to and processed at the clinical medicine laboratory of the NICED, a national AIDS reference laboratory. The genetic mutations were detected by polymerase chain reaction (PCR) and the restriction fragment length polymorphism (RFLP) assay techniques as described below.
Primers for the amplification of CCR5 (FP: 5' TTA AAA GCC AGG ACG GTC AC 3' and RP: 5' TGT AGG GAG CCC AGA AGA GA 3'), CCR2 (FP: 5' TTG TGG GCA ACA TGA TGG 3' and RP: 5' CTG TGA ATA ATT TGC AGA TTG C 3'), and SDF1 (FP: 5' AAG GCT TCT CTC TGT GGG ATG 3' and RP: 5' GAC AGT CGT GGA CAC ACA TGA T 3') were custom-synthesized from Integrated DNA Technologies (IDT), Inc., USA. DNA was extracted from peripheral blood samples following the standard procedure. Peripheral blood mononuclear cells (PBMCs) were isolated from 100 to 200 µL of blood by pelleting down the cells at 1,000 rpm for five minutes. Red blood cells (RBCs) were lysed by re-suspending the cell pellet in 500 µL of ACS-buffered saline, followed by incubation at room temperature for 3-4 minutes. The cell suspension was centrifuged at 6,000 rpm for five minutes after the addition of 500 µL of RPMI medium. The pellet containing PBMCs was washed with 500 µL of PBS and lysed by the addition of 500 µL of DNAZOL (Invitrogen, USA). 0.25 mL of 100% ethanol was then added to the lysate, and genomic DNA was spooled out with a pipette-tip and transferred to a fresh tube. DNA was washed twice with 0.8-1.0 mL of 75% ethanol and dissolved in 100 µL of 8 mM NaOH. PCR amplification of the genes under study was done in a 50-µL reaction containing 5 µL of 10xPCR buffer, 1 µL of dNTPs (2.5 mM stock), 0.4 µL of each primer (10 μM stock), ng of genomic DNA, and 2.5 U of Taq DNA polymerase. For the amplification of CCR5, the reaction mixture was subjected to denaturation at 94 °C for five minutes, followed by 35 cycles of denaturation at 94 °C for 45 seconds, annealing at 62 °C for 30 seconds, and extension at 72 °C for one minute. For CCR2, initial denaturation was followed by 35 cycles of denaturation at 94 °C for one minute, annealing at 56 °C for 45 seconds, and extension at 72 °C for one minute.
SDF-1 was also amplified for 35 cycles (denaturation at 94 °C for one minute, annealing at 50 °C for one minute, and extension at 72 °C for 1.30 minutes). A final primer extension at 72 °C for five minutes was allowed in each case before the PCR was stopped. To analyze RFLP for SDF1-3'A, PCR products were digested with MspI restriction enzyme at 37 °C for 16 hours. RFLP for CCR2-64I was analyzed by digesting the PCR product with BsaB1 for four hours at 65 °C. The digested products were resolved in 3% agarose gel. All the data, including laboratory test results, were edited, entered, and analyzed using the Epi Info software (version 6.04).
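For reference, the cycling conditions above can be collected into a small data structure and used to estimate thermocycler runtimes. This is only a bookkeeping sketch: the 5-minute initial denaturation for the CCR2 and SDF1 programs is an assumption carried over from the CCR5 program, and ramp times between steps are ignored.

```python
# Cycling conditions as given in the text (step times in seconds).
# The 300 s initial denaturation for CCR2 and SDF1 is an assumption.
programs = {
    "CCR5": {"init": 300, "cycle": (45, 30, 60), "final": 300},
    "CCR2": {"init": 300, "cycle": (60, 45, 60), "final": 300},
    "SDF1": {"init": 300, "cycle": (60, 60, 90), "final": 300},
}

def runtime_minutes(prog, n_cycles=35):
    """Rough thermocycler runtime, ignoring ramp times between steps."""
    return (prog["init"] + n_cycles * sum(prog["cycle"]) + prog["final"]) / 60

for name, prog in programs.items():
    print(f"{name}: ~{runtime_minutes(prog):.0f} min")
```

Such a tabulation makes it easy to verify that each program's denaturation/annealing/extension steps match the protocol as written.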
RESULTS

A sizable number of the IDUs shared their injecting equipment and paraphernalia. Transmission of HIV infection is known to be associated with such unsafe injection-sharing. Table 1 shows that 53.3% (n=379) of the participants shared their injecting equipment either always or frequently, and the rate of HIV seropositivity in them was 46.7% (n=177). On the other hand, 46.6% (n=332) shared either occasionally or never, with an HIV-seropositivity rate of 13.2% (n=44). This difference was significant, as indicated by an odds ratio (OR) of 5.7 with a 95% confidence interval (CI) of 3.8-8.5.
In total, 237 (33.3%) of the IDUs shared drugs from common ampoules, with an HIV-seropositive rate of 63.3% (n=150), and 66.7% (n=474) did not have a similar history of sharing; the HIV-seropositive rate in the latter was 15% (n=71). This difference was significant, as indicated by an OR of 9.7 (95% CI 6.6-14.3). 62.3% (n=443) of the participants shared water for cleaning their syringes before injecting, and the rate of HIV seropositivity was 42.7% (n=189) in them. On the contrary, 37.7% (n=268) did not share, and their rate of HIV seropositivity was 11.9% (n=32). This difference was significant, as indicated by an OR of 5.4 (95% CI 3.5-8.4). 21.4% (n=152) shared cotton to stop bleeding following injection, with an HIV-seropositivity rate of 40.8% (n=62); 78.6% (n=559) did not share, and the rate of HIV seropositivity was 28.4% (n=159) in them. The difference was significant, with an OR of 1.7 (95% CI 1.1-2.5) (Table 1).
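The odds ratios quoted above can be reproduced from the 2x2 counts given in the text. The sketch below uses the standard Woolf (log-odds) formula for the confidence interval; it is not code from the study (which used Epi Info). It recovers the OR of 5.7 for equipment sharing, with CI bounds close to, though not exactly matching, the reported 3.8-8.5, presumably due to rounding or a slightly different interval method:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and Woolf 95% CI for a 2x2 table [[a, b], [c, d]]."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Sharing of injecting equipment (Table 1): 177/379 sharers and
# 44/332 non-sharers were HIV-seropositive.
or_, lo, hi = odds_ratio_ci(177, 379 - 177, 44, 332 - 44)
print(f"OR = {or_:.1f} (95% CI {lo:.1f}-{hi:.1f})")
```

The same function applied to the common-ampoule, cleaning-water, and cotton counts gives the other reported ratios.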
There was not a single IDU with the CCR5 mutant gene. The distribution of wild-type, heterozygous, and homozygous mutations in the CCR2 and SDF1 genes in the study population is shown in Fig. 3. Table 2 shows the seroprevalence of HIV among IDUs with and without the CCR2 and SDF1 mutant genes. However, the difference was not significant (p>0.05).
DISCUSSION
In total, 711 IDUs from four different northeastern states participated in the study. Forty-six percent (n=328) of the participants were aged 20-29 years. The youngest participants (4%; n=29) were aged 19 years or less. HIV seropositivity was increasingly higher with increasing age of the IDUs (Fig. 2). This could be explained by the fact that the older IDUs have a longer duration of injecting, which could expose them to repeated unsafe injection practices.
The unsafe injection practices are associated with transmission of blood-borne infection, including HIV, as evidenced by a number of studies (11,12).
The results of the study showed that the female IDUs (n=43) had a higher rate of HIV seropositivity compared to the male IDUs (n=668) as consistent with the findings of a study among IDUs in China (13). In the study community, most (90%) female IDUs were sex workers. This fact was also consistent with the findings of several other studies among female IDUs where the seroprevalence of HIV was more among female IDUs compared to males (14)(15)(16). This could probably be explained by the dual route of entry of HIV in female IDUs, who were sex workers too. Unsafe sexual activity was a common risk behaviour encountered with them as observed in other studies (17,18).
Spasmoproxyvon was the most commonly consumed injectable drug in this part of the country, except in parts of Manipur and Nagaland, which are traversed by the national highway that acts as the heroin-trafficking route (19,20). The national highway, which is also considered the heroin-trafficking route, runs from the India-Burma border to Manipur and passes on to Nagaland after cutting across the capital city of Manipur, Imphal (21). Consumption of drugs is associated with cultural acceptability, availability, and affordability (11). A study in Bangladesh observed that IDUs usually start their addiction with cannabis as the drug of choice and end up with heroin (22). However, no such diversity in drug-use was observed in the present study.
Unsafe injection-practices are frequently observed in most IDU communities. In this study, HIV seropositivity was observed in 47% (n=177) and 13% (n=44) of the IDUs with always or frequent sharing and occasional or no sharing, respectively (Table 1). This difference was significant, as indicated by an OR of 5.7. This implies that sharing is associated with higher transmission of HIV. Similarly, sharing of drugs from common ampoules, cleaning water, cotton, etc. was associated with higher transmission of HIV among sharers compared to non-sharers. A similar observation was also made in a study of IDUs in a province in China (23).
Several factors that play an important role in the acquisition of HIV infection include viral load at entry-point (set point), the population density, risk behaviour and cultures of a society, immigration and mixing of population, rate of unemployment and poverty, availability, distribution, and consumption of drugs, and government policies (24). Moreover, about 90% of infected people are not aware that they are carrying the infection, and even if they did, anti-retroviral treatment is not an affordable option for them (18,25). Apart from these factors, host genetics plays an important role in viral entry giving rise to an epidemic. Results of a study showed that people who are homozygous to CCR5 mutation were protected from HIV-1 infection, and on the other hand, heterozygous state renders partial protection against the infection (26).
The HIV epidemic is spreading rapidly in Europe and North America from Asia and Africa due to mutation of HIV subtypes, intermixing of communities, intravenous drug-use, and commercial sex workers (27,28). Thus, genetic mutations need to be considered for the analysis of HIV-1 epidemic in a particular region, apart from risk factors and risk behaviours. The mutated gene of CCR5 chemokine co-receptor is a prominent genetic marker of HIV-1 resistance (7). In this study, none of the IDUs had CCR5-mutated gene which is consistent with the finding of another study in diverse populations of Andhra Pradesh, South India (27) and also with the findings of studies in other parts of the world, such as Africa, America, Oceania, South-East Asia, and China (29). As in most parts of the world, the IDUs of northeastern India are not protected by any genetic mutation against HIV infection.
The presence of CCR2 mutation in this study was much higher compared to other studies conducted on diverse populations of Andhra Pradesh, India (29) and in HIV patients of Kuwait (30). On matching unsafe injection-practices, the rate of HIV seropositivity was observed to be almost equal in IDUs with and without CCR2 and SDF1 mutant genes. Since mutation in CCR2 and SDF1 genes slows the progression of HIV, IDUs with absence of these mutated genes might have died earlier than those with mutations. Thus, the former may have been underrepresented in this study, leading to a bias in the results. Also, the study includes IDUs who are currently involved in injection-practices. The history of HIV seropositivity of the past IDUs should also be considered to get a true picture of the effect of host genetic mutation on HIV seroconversion and progression. So, the IDUs of northeastern states of India appear to have no additional protection against HIV-1 infection in absence of CCR5 mutation. Regarding the CCR2 and SDF1 genes, further cohort studies are required to understand whether the existing mutations in these genes confer any additional protection against progression of HIV-1.
The present study has explored the seroprevalence of HIV, injecting practices, and risk behaviours of current IDUs in four northeastern states of India where injection-practices are widespread due to the easy availability of drugs. It also explores the existence and magnitude of genetic mutations of chemokine receptors in the IDUs of the northeastern states to determine their genetic protection to HIV infection and progression. It has documented that unsafe injection-practices are widely prevalent in this part of India and shows significant contribution to the spread of HIV infection. The study also revealed that the IDU population of this region is not additionally protected against HIV infection and its progression through genetic mutation of chemokine receptors, such as CCR2-64I, CCR-5 D-32, and SDF-1-3'A. Unlike some parts of the world where such genetic mutation confers certain amount of protection to HIV infection despite highrisk behaviour, the IDUs in the northeastern states of India need to control unsafe injection and sexual practices to decrease the high prevalence of HIV in these states. Also, further in-depth cohort study needs to be conducted to understand the effects of CCR2 and SDF1 mutation on the progression of HIV infection. Such studies can help us further understand the complex relationship among HIV infection, progression of disease, and host genetic diversity.
"year": 2010,
"sha1": "8e9f03f033317b2a7dc57b778f6469d6315d931a",
"oa_license": "CCBY",
"oa_url": "https://www.banglajol.info/index.php/JHPN/article/download/4882/3894",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "8e9f03f033317b2a7dc57b778f6469d6315d931a",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
Doppler Tomograms from Hydrodynamical Models of Dwarf Nova Disks
We present three-dimensional models of accretion disks in U Gem-like systems and calculate their Doppler tomograms. The tomograms are based on two different assumptions concerning the origin of line emission from the disk. The assumption of lines originating due to irradiation of the surface layer of the disk by the central source leads to a better agreement with observations. We argue that fully three-dimensional modelling is necessary to properly interpret the observed tomograms.
Introduction
With their typical dimensions of less than 10^-4 arcseconds, Dwarf Nova (DN) disks are far too small to be resolved directly. However, an indirect observational insight into their structure became possible already in the mid-eighties, when the powerful technique of Doppler tomography was introduced (for a recent review see Marsh 2001). Nowadays tomographic observations are standard, but the interpretation of Doppler tomograms in terms of theoretical models of DN disks is often problematic. This is particularly well visible in the case of spiral waves, which have been suggested as one of the agents responsible for the angular momentum transfer through the disk (for a recent review see Boffin 2001). An excellent discussion of the subject can be found in a recent paper by Smak (2001), and there is no need to repeat it here.
The main problem associated with the interpretation of Doppler tomograms is related to the nature of the data derived from the theory. Model calculations yield spiral patterns in the distribution of disk surface density, while patterns in observational tomograms are related to the distribution of the emissivity in specific spectral lines. While comparing these two sets of data one usually makes an implicit assumption that emissivity is proportional to surface density, which certainly is an oversimplification.
A more sophisticated approach was presented by Steeghs and Stehle (1999), who based their Doppler tomograms on emission line profiles calculated from 2D disk models. For the origin of the lines they adopted a purely thermal model with a local Planckian source function. Such a model was criticized by Smak (2001), who pointed out that it requires a factor of 100 overabundance of helium in order to reproduce the observed intensities, while the relative brightness of the features it produces in the tomograms is incompatible with observations. Smak himself proposed to base the theoretical tomograms on the distribution of velocity divergence. He argued that at a given radial distance from the disk center the compression regions with ∇·v < 0 would be distinguished by a higher-than-average disk thickness (because their density and temperature would be higher). As a result, the surface layer of the disk would be better exposed to the irradiating flux from the white dwarf and the boundary layer, and a local enhancement in line emission would be observed. Based on the three-body model of gas flow in a close binary he identified regions of maximum compression in the orbital plane and showed that they well reproduced the shape, location, and relative intensities of the arch-like structures observed in Doppler tomograms of DN disks.
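Smak's compression criterion can be illustrated numerically: given a velocity field sampled on a grid, the regions with ∇·v < 0 can be flagged directly. The sketch below uses an invented toy field (circular rotation plus a weak two-armed spiral perturbation), not output from the models in this paper:

```python
import numpy as np

# Toy Cartesian velocity field in the orbital plane: Keplerian-like
# rotation plus a weak two-armed spiral perturbation (purely illustrative).
n = 200
x = np.linspace(-0.5, 0.5, n)
X, Y = np.meshgrid(x, x, indexing="ij")
r = np.hypot(X, Y) + 1e-6
phi = np.arctan2(Y, X)
v_kep = r ** -0.5
perturb = 1.0 + 0.05 * np.cos(2 * phi - 10.0 * np.log(r))
vx = -v_kep * np.sin(phi) * perturb
vy = v_kep * np.cos(phi) * perturb

# div v = dvx/dx + dvy/dy; compression regions have div v < 0
dx = x[1] - x[0]
div_v = np.gradient(vx, dx, axis=0) + np.gradient(vy, dx, axis=1)
compressed = div_v < 0
print("fraction of plane with div v < 0:", round(float(compressed.mean()), 2))
```

For a perturbed rotation field like this one, the compressed regions trace the spiral arms, which is the pattern Smak associates with enhanced line emission.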
Motivated by his paper, we obtained two-and three-dimensional hydrodynamical models of DN disks and calculated their tomograms. The details of the modelling procedure are given in Section 2. The results are presented in Section 3 and discussed in Section 4.
Numerical methods, input physics and initial conditions
All models presented here were obtained with the help of the ZEUS-3D code (Clarke & Norman 1994; Clarke 1996). The original code was modified to include conservative angular momentum transport (Kley 1998). Spherical coordinates (r, θ, φ) centered on the primary and corotating with the system were used. The grid extended from 0 to 2π in φ, from 0 to 0.2π in θ, and from r_in = 0.1a to r_out = 0.5a in r, with a standing for the orbital separation. Grid spacing was uniform in (θ, φ) and logarithmic in r, resulting in zones of identical shape. After a few experiments we decided to limit the resolution to 100 × 20 × 64 zones in the r, θ, φ directions, respectively (test runs, with resolution increased by a factor of 2 in r, θ, φ consecutively, did not introduce any significant changes into the models).
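The claim that logarithmic radial spacing combined with uniform angular spacing yields zones of identical shape follows because dr/r is then the same in every zone. A short check, using the grid parameters quoted above (a = 1):

```python
import numpy as np

# Radial grid: 100 zones from r_in = 0.1a to r_out = 0.5a (a = 1),
# logarithmically spaced so that dr/r is the same in every zone.
r_in, r_out, n_r = 0.1, 0.5, 100
edges = np.geomspace(r_in, r_out, n_r + 1)
dr_over_r = np.diff(edges) / edges[:-1]

# Constant dr/r means a zone's radial extent scales with r; combined
# with uniform spacing in theta and phi, all cells are self-similar.
print(dr_over_r.min(), dr_over_r.max())
```

Since r dθ and r sinθ dφ also scale with r, the aspect ratio of every cell is identical across the grid.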
A standard periodic boundary condition was imposed at φ = 2π, and symmetry with respect to the orbital plane was assumed, implying a reflecting boundary condition at θ = 0. A free outflow was allowed for at r_in and r_out. In the four innermost radial zones the radial velocity was reduced by 5% at every time-step, preventing the reflection of waves from the inner boundary of the grid. To check whether this damping procedure influenced the structure of the disk or the shape of the spiral pattern, we calculated model B2 with r_in moved to 0.045a (see Table 1). It was found that shifting the inner grid boundary toward the white dwarf did not introduce any significant changes in the model.
The simulations did not include explicit viscosity (only the von Neumann & Richtmyer and scalar linear artificial viscosities originally implemented in ZEUS were used, with coefficients C1 = 0.5 and C2 = 2.0, where C1 is responsible for the magnitude of the artificial viscous pressure and C2 is a shock-spreading parameter; see the definitions in Stone and Norman (1992)). The energy equation was not solved; a polytropic equation of state (p = κρ^γ) with γ = 5/3 was employed instead. In a system of units in which the gravitational constant, orbital separation, and primary's mass are all equal to 1, the value of κ was set to 6500. To mimic U Gem-like systems, all models had the same mass ratio, q = 0.5. The stream flowing from the secondary through the L1 point was not included.
Every simulation consisted of three phases (relaxation, switch-on, and proper). The mass ratio was set to 0 in the relaxation phase and to 0.5 in the proper phase, while in the switch-on phase it was linearly increasing in time. At the beginning of each simulation (t = 0) the grid was initialized with an exponential density distribution given by equations (1)-(3) (not reproduced here). In our system of units the value of the midplane density, ρ0, was set to 10^-8, corresponding to ∼ 2·10^-8 g cm^-3 in U Gem (with U Gem parameters taken from Groot 2001). The azimuthal velocity of the disk was given a purely Keplerian pattern, and the remaining two velocity components were set to 0. Since the content of the grid was not in hydrostatic equilibrium, we allowed it to relax for ∼ 3.9 orbital periods (P_orb). Throughout the relaxation phase the model was strictly axisymmetric, so that it was possible to speed the computations up by reducing the number of angular grid points to 2.
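The quoted conversion of ρ0 = 10^-8 in code units to ∼ 2·10^-8 g cm^-3 can be checked once physical parameters for U Gem are supplied. The white-dwarf mass and orbital period used below are illustrative assumptions, not numbers taken from the paper; in a unit system with G = a = M1 = 1, the density unit is M1/a^3:

```python
import math

G = 6.674e-8                 # cgs gravitational constant
M_sun = 1.989e33             # solar mass in g

# Assumed U Gem parameters (illustrative values, not from the paper):
M1 = 1.2 * M_sun             # white-dwarf mass
M_tot = M1 * (1 + 0.5)       # q = 0.5 as in the models
P_orb = 4.25 * 3600.0        # orbital period in seconds

# Kepler's third law gives the orbital separation a
a = (G * M_tot * P_orb**2 / (4 * math.pi**2)) ** (1.0 / 3.0)

# In code units (G = a = M1 = 1) the density unit is M1 / a^3
rho_unit = M1 / a**3
rho_phys = 1e-8 * rho_unit
print(f"a = {a:.2e} cm, rho_0 = 1e-8 -> {rho_phys:.1e} g/cm^3")
```

With these assumed parameters the result comes out of order 2·10^-8 g cm^-3, consistent with the value quoted in the text.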
Because of the extremely steep vertical density gradients at the surface of the disk it was necessary to introduce a density limit. At every time step the grid was scanned for cells with ρ < ρ_min, and whenever such a cell was found ρ was reset to ρ_min. We wanted ρ_min to be small enough to minimize side effects caused by the newly added matter falling onto the disk and, simultaneously, large enough to avoid excessive computational slowdowns due to the formation of strong shocks in the rarefied medium above the disk. After a few experiments ρ_min was set to 10^-13 for all models. By the end of the relaxation phase the midplane density of the disk had increased, reaching up to 4·10^-8 (corresponding to ∼ 8·10^-8 g cm^-3 in U Gem).
At the beginning of the switch-on phase the relaxed model was mapped onto the standard grid, and the secondary's gravity was "switched on". The final value of q was reached at t = 5.8 P_orb. At the end of the switch-on phase the total mass contained in the grid, scaled to U Gem, was M_disk ≃ 10^24 g. The proper phase with q = 0.5 lasted for another ∼ 3.9 P_orb, so that the simulation ended at t ≃ 9.7 P_orb. By that time the shape and location of the disk edge had stabilized, and a stationary spiral pattern had developed in the disk.
Results
The tidal forces affect the relaxed disk in two ways. First, some material is stripped from its outer edge and driven out of the grid through the outer grid boundary. Second, angular momentum is removed from the remaining material and transferred into the orbital momentum of the binary, causing the disk to shrink. In the three-body approximation, the radius of the disk cannot be larger than the radius of the largest non-intersecting orbit, r_nis^max. U Gem-like systems with q = 0.5 have r_nis^max ≃ 0.3, and in fact at the end of the simulation the disk barely extends beyond r ≃ 0.3. The rest of the gas originally located at 0.3 < r < 0.5 now resides in a ring-like density enhancement between r ≃ 0.15 and r ≃ 0.3. The ring is markedly elliptical but, when averaged over the azimuthal angle, it shows a well-defined density maximum at r ≃ 0.19, i.e. slightly beyond the circularization radius (r_circ = 0.16 for q = 0.5). The maximum density is a factor of ∼ 2.5 higher than the midplane density of the inner disk (r < 0.15). The ratio h/r, where h is the half-thickness of the disk, varies from ∼ 0.1 in the inner disk to ∼ 0.2 in the ring. The overall structure of the final model is reminiscent of the one expected at the early phase of an outburst, when the outer radius of the disk just begins to increase. However, because of the polytropic equation of state we employ, the interior of the disk is unrealistically hot (for a disk composed of pure hydrogen, T grows from ∼ 10^4 K at the surface of the disk to ∼ 6.7·10^5 K at the density maximum). We find that both 2-D and 3-D calculations produce disks of nearly the same shape and extent (Fig. 1). Below we argue, however, that fully three-dimensional models are needed to properly interpret the Doppler tomograms.
Both 2-D and 3-D models develop the spiral shocks shown in Figs. 1 and 2. At first glance, the location and inclination of the shocks do not significantly depend on the number of dimensions of the model. Significant differences become visible when compression regions (∇·v < 0) are compared: while model A has a clear two-armed pattern, three arms are present in models B1 and B2. We speculate that the third arm may originate from tidal forcing of the disk matter in the direction perpendicular to the orbital plane (an effect entirely absent in 2D). Thus, to the long dispute on whether spiral shocks can exist in three dimensions, or whether their existence is limited to the two-dimensional world, we add a vote in favor of the first possibility. In 3D the shocks are definitely there, but their pattern differs from that in 2D. Obviously, the validity of this conclusion is limited to hot polytropic disks with 0.1 ≤ h/r ≤ 0.2.
Following the approach of Smak (2001), we obtained Doppler tomograms of the compressional power due to tidal forces, pdV. The distribution of brightness at (v_x, v_y) was calculated by integrating the local compressional power over all volume elements dV, weighted by B(v_x − u_x) B(v_y − u_y), where (u_x, u_y) are the velocity components of the volume element dV, and the boxcar function B(x) equals 1 for |x| ≤ δ/2 and 0 otherwise. The resolution parameter δ, related to the resolution of the images (400 × 400 pixels), was set to 6 v_orb/400. In the three-body approximation of Smak (2001) the inner disk is essentially Keplerian, and its Doppler tomogram cannot show any nonaxisymmetric structures in the high-velocity range. However, in a more realistic polytropic disk the spiral shocks excited at its outer edge propagate deeply into the high-velocity regions. As a result, our tomograms (Fig. 3) show extended spirals instead of the two crescent-shaped maxima reported by Smak (2001). Such spirals are not observed in real disks (Groot 2001). To account for the limited resolution of the observational data we blurred the tomograms to such a degree that the width of the brightest segments of the spiral became comparable to the width of the observed features (∼ 400–500 km s^-1; see Groot 2001). However, the spiral pattern extending up to velocities of ∼ 1000 km s^-1 was still clearly visible. Moreover, both the location and the relative intensity of the brightest segments of the spiral did not agree with those observed in U Gem.
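With a boxcar of width δ, the integral over volume elements reduces to simple 2-D binning of each element's compressional power at its velocity (u_x, u_y). A minimal sketch, with synthetic stand-ins for the quantities that would come from the hydro model:

```python
import numpy as np

# delta = full velocity range / number of pixels, as in the text; the inputs
# (ux, uy, power) are synthetic here, not taken from the simulations.
def doppler_tomogram(ux, uy, power, v_max, n_pix=400):
    edges = np.linspace(-v_max, v_max, n_pix + 1)
    img, _, _ = np.histogram2d(ux, uy, bins=[edges, edges], weights=power)
    return img

rng = np.random.default_rng(0)
ux = rng.uniform(-1000.0, 1000.0, 5000)   # km/s
uy = rng.uniform(-1000.0, 1000.0, 5000)
power = np.ones(5000)                     # compressional power per element
img = doppler_tomogram(ux, uy, power, v_max=1500.0)
# binning conserves the total power: img.sum() equals power.sum()
```
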
Should this disagreement be regarded as an argument against the presence of spiral waves in CV disks? Certainly not. As we already indicated in the Introduction, the observed tomograms refer to the distribution of the emissivity in specific spectral lines rather than to the distribution of physical parameters directly obtainable from hydrodynamical simulations. The presently available hydrocodes are not sophisticated enough to predict the detailed spectrum of the disk, and the correspondence between those two sets of data is by no means clear. However, the models can yield data much more closely related to line emissivity than simple physical parameters or their combinations. To obtain such data, we assume the line emission to originate mainly from the irradiation of the surface layer of the disk by the central white dwarf and/or boundary layer (cf. Robinson et al. 1993, Smak 1991). Since our models are not detailed enough to resolve the surface layer, we assume that the lower boundary of the layer coincides with a constant-density surface S_l at which ρ ≡ ρ_l = 10^-8. The typical distance between S_l and the midplane of the disk, h_l, was such that h_l/r = 0.08 in the inner disk and h_l/r = 0.2 in the outer ring. Further, we assume that the line flux from each element of the layer is proportional to the mass contained within that element, ρdV, multiplied by the irradiating flux. For simplicity, we also assume that all irradiating photons are emitted from a point source located at the centre of the white dwarf, so that the irradiating flux is given by L/r², where r is the radial coordinate of the volume element dV, and L = L_wd + L_bl is the combined luminosity of the white dwarf and the boundary layer. The distribution of brightness on the (v_x, v_y) plane is then given by the integral in Eq. (6).

Figure 4: Tomograms obtained from models B1 and B2 within the irradiation approach (blurred in the same way as described in Fig. 3).
The function S(r, θ, φ) describes the shadow cast by S_l; it is expressed in terms of the Heaviside step function. The third velocity component, v_z, was neglected in Eq. (6) because nearly everywhere in the disk its value was smaller than ∼ 5% of the local azimuthal velocity, v_φ. Formally, the integral in Eq. (6) subtends the whole space above S_l. In practice, due to the steeply falling density, only the volume elements closest to S_l contribute to it significantly. The layer above S_l has a mass of ≃ 0.15 M_disk, but only about 25% of its volume is directly illuminated. The location and relative intensity of the brightest areas on the tomograms resulting from the irradiation approach (Fig. 4) agree rather well with those observed by Groot (2001) at an advanced outburst phase of U Gem (his Fig. 2, Episode 2). The major discrepancy is the bright ring visible in our tomograms at (v_x² + v_y²)^{1/2} ≃ 1000 km s^-1. In both models the ring is too bright compared to the observational data, but relative to the maximum intensity obtained in the model it is weaker in B2, where the disk extends down to r = 0.045. We conclude that in B1 the ring is enhanced by a spurious contribution from the inner edge of the disk at r = 0.1. The ring in B2 would be still weaker if absorption of the irradiating flux by the gas in the surface layer were taken into account in Eq. (6). Unfortunately, the present models are too crude for such an operation to be reliable. It is clear, however, that the effect of absorption should be particularly strong for r ≲ 0.1, where h/r is nearly constant (see Fig. 5) and the irradiating quanta propagate nearly parallel to S_l. A further reduction in ring intensity could probably be achieved if the inner boundary of the grid were moved even closer to the white dwarf, as the shadow cast by S_l at r < 0.045 might partly screen the region at r ∼ 0.1 where the ring originates.
On the other hand, in some phases of the activity cycle the intensity of the brightest areas in our irradiation tomograms is underestimated relative to the ring. This is because substantial pdV work is done by tidal forces directly on the gas in the surface layer of the outer disk, where the low-velocity emission originates.
In fact, the heating rate per unit mass, pdV/ρ, reaches a clear maximum just below the surface of the outer disk (see Fig. 5). For the case of U Gem the height of this maximum is ≃ 10^11 erg g^-1 s^-1. With a typical outburst accretion rate of Ṁ = 3·10^18 g s^-1 and a white dwarf radius R_wd = 4·10^8 cm we get L_bl ≃ L = (1/2) G M_1 Ṁ/R_wd ≃ 6·10^35 erg s^-1. The illuminated mass in the region of maximum pdV/ρ between r ∼ 0.15a and r ∼ 0.25a approaches ≃ 0.025 M_disk, and it is distributed within a solid angle of ∼ 0.3π. Assuming that the whole incident flux is absorbed there, we obtain a radiative energy input of ∼ 2·10^12 erg g^-1 s^-1, and we see that during the outburst the contribution to the line flux from pdV heating is rather small. However, pdV heating dominates just before the outburst, when the accretion rate is a factor of ∼ 100 lower.
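The quoted luminosity can be checked with a one-line estimate in cgs units. The primary mass M_1 is not stated in this passage, so a U Gem-like white dwarf of ∼ 1.2 M_⊙ is assumed here, which reproduces the quoted number:

```python
# Back-of-envelope check of L = (1/2) G M1 Mdot / R_wd in cgs units.
# M1 is an assumption (not given in this passage); the other inputs follow
# the text.
G = 6.674e-8            # cm^3 g^-1 s^-2
M1 = 1.2 * 1.989e33     # g, assumed white-dwarf mass
Mdot = 3e18             # g s^-1, typical outburst accretion rate
R_wd = 4e8              # cm
L = 0.5 * G * M1 * Mdot / R_wd
# L comes out close to the ~6e35 erg/s quoted above
```
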
The subsurface maximum of pdV/ρ in Fig. 5 originates mainly from tidal forcing in the direction perpendicular to the orbit. It also contains a contribution from the ambient gas falling onto the disk; however, the "rainfall" heating is much less efficient than the tidal one. We checked this by re-running simulation B1 with ρ_min reduced to 3·10^-14: at t ≃ 8.0 P_orb virtually no changes were seen in the distribution of pdV/ρ.
Discussion
As discussed in Section 3, we find that spiral waves efficiently propagate from the excitation regions at the outer edge of the disk toward the white dwarf, reaching in to at least ∼ 0.05a. This conclusion concerns both two-dimensional and three-dimensional models; it is, however, limited to the hot polytropic disks presented in this paper. The main spiral features seen in the density distribution (two-dimensional case) and in the surface density distribution (three-dimensional case) are hard to distinguish. On the other hand, clear differences are visible in the distributions of the tidal heating rate, pdV: in 3D the two main spiral arms are less tightly wound than in 2D, and a weaker third arm is excited. We suggest that the third arm may originate from tidal forcing in the direction perpendicular to the orbital plane. The effects of this forcing seem to be responsible for the clear maximum of the tidal heating rate per unit mass, pdV/ρ, which is located away from the midplane in the subsurface layers of the outer disk.
Doppler tomograms of tidal heating rate derived from both 2D and 3D models correlate rather poorly with observed tomograms of U Gem (Groot 2001). A better agreement (but still not entirely satisfactory) is obtained for tomograms of the irradiation flux from the white dwarf through the surface layer of the disk. The brightest areas of such tomograms coincide with arches observed in U Gem at an advanced stage of the outburst. The irradiation tomograms can be derived from 3D models only, which indicates that fully three-dimensional modelling is needed for a reliable interpretation of the observational data on DN disks.
According to our results the arches originate in the outer part of the disk, fairly high above the midplane (h > 0.1r). For this to happen, the outer disk would have to be substantially bulged. The bulging phenomenon may be explained within the following (not entirely new) scenario: prior to the outburst the gas transferred from the secondary mainly collects in a ring at the circularization radius, and only partly accretes through the disk onto the white dwarf. The ring expands as the gas flows in, but it remains cool until the heating from tidal forcing in the orbital plane grows so strong that it cannot be balanced by radiative cooling. The ring then begins to expand even more rapidly, and within it the gas located away from the midplane begins to receive additional internal energy from tidal forcing in the direction perpendicular to the orbital plane. Eventually, strong spiral shocks develop, and a dynamical instability of the kind described by Różyczka & Spruit (1993) sets in.
Obviously, such a scenario cannot be the whole story, as it is not linked to the thermal instability believed to be at least partly responsible for the eruptive phenomena in DN and other classes of cataclysmic variables. Nevertheless, it seems to indicate a promising direction for further research.
"year": 2003,
"sha1": "8cd62bc593a3fa48031c9491e618f8a7c22fe12f",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "8cd62bc593a3fa48031c9491e618f8a7c22fe12f",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Axion-like Dark Matter from the Type-II Seesaw Mechanism
Although axion-like particles (ALPs) are popular dark matter candidates, their mass generation mechanisms as well as their cosmic thermal evolution are still unclear. In this letter, we propose a new mass generation mechanism for the ALP during the electroweak phase transition in the presence of the type-II seesaw mechanism. As the ALP acquires its mass only at the electroweak scale, there is a cutoff on the ALP oscillation temperature irrespective of the specific ALP mass, which is a distinctive feature of this scenario. The ALP couples to the active neutrinos, leaving the matter effect on neutrino oscillations in a dense ALP environment as a smoking gun. As a by-product, the recent $W$-boson mass anomaly observed by the CDF collaboration is also accommodated by the TeV-scale type-II seesaw. Three kinds of new physics phenomena are thus explained at one stroke.
Introduction.-Various cosmological observations have confirmed the existence of cold dark matter (DM), which accounts for about 26.8% [1] of the cosmic energy budget. However, the particle nature of DM still eludes us. The axion [2-5] is one of the most popular DM candidates, motivated by addressing the strong CP problem, with its mass induced by the QCD instanton and its relic abundance arising from the misalignment mechanism [6-11], which drives the coherent oscillation of the axion field around the minimum of the effective potential. Couplings of the axion to the standard model (SM) particles are model-dependent, and there are three general types of QCD axion models, PQWW [2,3], KSVZ [12,13], and DFSZ [14,15], of which the PQWW axion is excluded by the beam-dump experiments [16-18], while the other axion models can be detected via their couplings to photons or SM fermions.
To relax the property constraints on QCD axions, more general classes of axion-like particle (ALP) DM models [19-32] have been proposed, with masses ranging from 10^-22 eV to O(1) GeV [9,31], where the lower bound comes from fuzzy DM [33] and the upper bound from the LHC limits. The mass generation mechanism as well as the relic abundance of axion-like DM remain blurred and indistinct, since attention is usually paid to the detection signal of the ALP in various experiments via its coupling to the photon [34-45], (a/f_a) F F̃, where a is the ALP field and f_a is the ALP decay constant. It should be mentioned that the mass generation mechanism of the ALP is highly correlated with its interactions with the SM particles. So one cannot simply ignore these facts and directly apply the strategy of searching for the QCD axion to detect the ALP. This issue has attracted attention recently, and several novel approaches have been proposed to address the relic abundance of light scalar DM, such as the thermal misalignment mechanism [46,47], which supposes a feeble coupling between the DM and thermal fermions. These attempts provide novel insights into the origin of the ALP in the early Universe.
In this letter, we propose a new mechanism for generating the ALP mass during the electroweak phase transition with the help of a Higgs triplet ∆ with Y = 1, which is the seesaw particle of the type-II seesaw mechanism [48-53]. Active neutrinos acquire Majorana masses as ∆ develops a tiny but non-zero vacuum expectation value (VEV). We explicitly show that an ALP, the Goldstone boson arising from the spontaneous breaking of the global U(1)_L symmetry, can acquire a tiny mass through the quartic coupling with the Higgs triplet and the SM Higgs doublet Φ whenever the global lepton number is explicitly broken by the term µΦ^T iτ_2 ∆^† Φ + h.c. In this scenario, the symmetries break sequentially: U(1)_L breaks first at a high energy scale, yielding a massless ALP that behaves as dark energy; then the electroweak symmetry is spontaneously broken, leading to the mass generation of the ALP, which begins to oscillate once its mass becomes comparable with the Hubble parameter. We derive the relic density of the ALP by investigating its thermal evolution and solving its equation of motion (EOM) analytically. To further investigate its signal, we explicitly derive the interactions between the ALP and SM particles, which arise from the mixing of the ALP with the other scalars, including the CP-even states. We argue that neutrino oscillations in certain specific environments may provide a smoking gun. As a by-product, we show that the recent W-boson mass anomaly observed by the CDF collaboration [54-64] can be addressed in the same model without conflicting with the LHC constraints.
Framework.-We assume a complex scalar singlet S that carries two units of lepton number charge; the U(1)_L symmetry is spontaneously broken at high temperature when S acquires a VEV. In addition, the type-II seesaw mechanism is invoked as the origin of neutrino mass, and S couples to the Higgs triplet ∆ and the SM Higgs doublet Φ via a quartic interaction with a real coupling. The most general scalar potential contains V(Φ, ∆), the most general potential of the type-II seesaw mechanism given in the Supplemental Material, together with the terms involving S. The quartic couplings λ_{7,8} are relevant for the thermal mass of S. It is obvious that S may acquire a non-zero VEV in the early Universe for suitably small quartic couplings, consistent with experimental observations [65-68], leaving the CP-odd component of S as the ALP. The ALP is massless at early times, until the temperature drops to the electroweak scale, at which point both Φ and ∆ acquire non-zero VEVs. The ALP then acquires a tiny mass, doubly suppressed by the VEV of the Higgs triplet and by the tiny lepton-number-violating parameter µ, which should be naturally small according to 't Hooft's naturalness principle [69].
To derive the ALP mass analytically, Φ, ∆, and S can be parametrized in terms of their VEVs and fluctuations, with ∆^0 = (v_∆ + δ + iη)/√2 the neutral component of the Higgs triplet, and v_φ, v_∆, and v_s the VEVs of Φ, ∆, and S, respectively. After electroweak symmetry breaking (EWSB), the remaining physical scalars are as follows: two charged scalar pairs H^±± and H^±, two CP-odd scalars A and a, and three CP-even scalars h, H, and s, whose masses may be obtained by unitary transformations of their squared mass matrices. The detailed diagonalization of all the scalar mass matrices is given in the Supplemental Material. The ALP mass in the CP-odd sector then follows from this diagonalization; in the limit v_s ≫ v_φ ≫ v_∆ it is doubly suppressed by the parameters v_∆ and µ of the type-II seesaw mechanism.
ALP DM.-As discussed above, the ALP acquires a tiny but non-zero mass via the type-II seesaw mechanism during the electroweak phase transition, at the critical temperature T_C ≃ 160 GeV [70]. Neglecting radiative corrections, the temperature-dependent ALP mass m_a(T) is obtained with f_a = v_s and the temperature-dependent VEVs v_φ(T) and v_∆(T) of the SM Higgs and the Higgs triplet, respectively. The EOM of the homogeneous ALP field a (a ≡ θ f_a) in the FRW Universe can be written as [6-8] θ̈ + 3H(T) θ̇ + m_a²(T) θ = 0, where the dot denotes a derivative with respect to time, and H(T) ≡ Ṙ/R is the Hubble parameter in terms of the scale factor R. In the radiation-dominated epoch, H(T) = 1/(2t) = 1.66 √g_*(T) T²/m_pl, where g_* is the effective number of degrees of freedom and m_pl = 1.221 × 10^19 GeV is the Planck mass. The initial conditions are taken as θ²(t_i) = ⟨θ²_a,i⟩ and θ̇(t_i) = 0, where the angle brackets denote an average of the initial misalignment angle over [−π, π) [10]. The value of ⟨θ²_a,i⟩ depends on whether the U(1)_L breaking occurs before the end of inflation or after it [10,30].
In general, the ALP becomes dynamical and starts to oscillate when m_a(T_osc) = 3H(T_osc) [9-11], where T_osc is the oscillation temperature. Before the EWSB the ALP is massless, and the angle θ remains constant at its initial value, θ(t) = θ(t_i). Therefore there is an upper bound on the oscillation temperature, T_osc^max ≡ T_C, which leads to the existence of a critical mass m_aC ≡ 3H(T_C). The oscillation temperature then splits into two cases (Eq. 7): T_osc = T_* for m_a < m_aC and T_osc = T_C for m_a ≥ m_aC, where T_* is derived from the condition m_a = 3H(T_*). Eq. (7) implies that the traditional oscillation condition applies only to the case m_a < m_aC. For m_a ≥ m_aC the oscillation temperature always equals the critical temperature T_C, as shown in Fig. 1. Note that we use the quantity 3H instead of the Hubble parameter H to better show the critical point given by Eq. (7). We now investigate the evolution of the ALP, which is frozen at its initial value by the Hubble friction at early times (3H > m_a) and behaves as dark energy. As the temperature T of the Universe drops to T_osc given by Eq. (7), the ALP starts to oscillate with a damped amplitude, and its energy density scales as R^-3, similar to ordinary matter [9,10], until the angle θ oscillates around the minimum of the ALP potential at late times. The evolution of θ can be described by the analytical solution of the EOM in the radiation-dominated Universe when H > H_E ∼ 10^-28 eV [9,71], where H_E is the Hubble rate at matter-radiation equality in ΛCDM. The exact analytical expression is given in Sec. B of the Supplemental Material. Alternatively, we can numerically solve Eq. (5) with the given initial values. Here we consider the post-inflationary scenario and take the initial value θ(t_i) = π/√3 [10,30]. The analytical and numerical results are shown in Fig. 2 for two benchmark ALP masses. We find that the numerical results for the evolution are consistent with the analytical ones.
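The numerical solution mentioned above can be sketched in a few lines. A minimal version in units of 1/m_a (so m_a = 1 below), with the post-inflationary initial angle π/√3, showing the early-time freeze-out and the damped oscillation afterwards:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Integrate theta'' + 3H theta' + m_a^2 theta = 0 with H = 1/(2t)
# (radiation domination). Dimensionless sketch: time in units of 1/m_a.
m_a = 1.0
theta_i = np.pi / np.sqrt(3)   # post-inflationary average initial angle

def eom(t, y):
    theta, dtheta = y
    H = 1.0 / (2.0 * t)
    return [dtheta, -3.0 * H * dtheta - m_a**2 * theta]

sol = solve_ivp(eom, (1e-3, 100.0), [theta_i, 0.0],
                rtol=1e-8, atol=1e-10, dense_output=True)

early = sol.sol(2e-3)[0]   # frozen at theta_i while 3H >> m_a
late = sol.sol(100.0)[0]   # oscillating, amplitude damped as ~ t^(-3/4)
```
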
The energy density of the ALP is ρ_a ≃ m_a² f_a² ⟨θ²⟩/2. Since the ratio of the ALP number density to the entropy density is conserved, the ALP energy density at present can be written as ρ_a(T_0) = ρ_a(T_osc) [g_{*s}(T_0) T_0³]/[g_{*s}(T_osc) T_osc³], where T_0 is the CMB temperature at present and s = 2π² g_{*s} T³/45 is the entropy density, with g_{*s} the relativistic degrees of freedom of the entropy. The ALP mass is almost temperature-independent, which implies m_a(T_osc) = m_a(T_0) = m_a, so the ALP energy density at present follows as in Eq. (8). The relic density of the ALP at present is defined as Ω_a h² = (ρ_a(T_0)/ρ_c,0) h² [9,10], where ρ_c,0 ≡ 3 m_pl² H_0²/(8π) is the critical energy density, T_0 = 2.4×10^-4 eV, and g_{*s}(T_0) = 3.94 [72]. Combining these parameters with Eq. (8), the relic density of the ALP can be estimated in closed form. Since the initial misalignment angle ⟨θ²_a,i⟩ is fixed, the relic density is almost entirely determined by the decay constant f_a and the mass m_a. In Fig. 3 we show the relic density Ω_a h² as a function of m_a for four benchmark values of f_a ∼ O(10^10 − 10^13) GeV. The vertical black dotted line represents the critical mass m_aC, on the two sides of which the ALP density evolves differently. We find that there exists an allowed parameter space that can reproduce the observed DM relic abundance, Ω_a h² ≃ 0.12 [1,72].
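This estimate can be reproduced numerically. A sketch following the entropy-conservation logic above, in natural units (GeV), with assumed electroweak-era g_* values:

```python
import numpy as np

# Order-of-magnitude relic density along the lines of Eq. (8): the energy
# density at the onset of oscillation, rho = m_a^2 f_a^2 <theta_i^2>/2, is
# redshifted by entropy conservation to today. g_star at the EW scale and
# rho_c/h^2 are assumed standard values, not taken from the paper.
m_pl = 1.221e19          # GeV
T_C = 160.0              # GeV, EWSB critical temperature
g_star = 106.75          # relativistic dof around the EW scale (assumed)
T0 = 2.4e-13             # GeV, CMB temperature today
gs0 = 3.94
rho_c_over_h2 = 8.1e-47  # GeV^4, critical density / h^2 (assumed standard)

def omega_h2(m_a, f_a, theta2_i=np.pi**2 / 3):
    # oscillation temperature: m_a = 3H(T*), capped at T_C as in Eq. (7)
    T_star = np.sqrt(m_a * m_pl / (3 * 1.66 * np.sqrt(g_star)))
    T_osc = min(T_star, T_C)
    rho_osc = 0.5 * m_a**2 * f_a**2 * theta2_i
    rho_0 = rho_osc * (gs0 * T0**3) / (g_star * T_osc**3)
    return rho_0 / rho_c_over_h2

# e.g. m_a = 1e-4 eV (1e-13 GeV), f_a = 1e12 GeV gives Omega h^2 of a few 0.01
```
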
ALP interactions.-Now we investigate the interactions of the ALP with ordinary matter, including the Higgs boson and the active neutrinos. The ALP couples to the SM Higgs and to the active neutrinos in the forms λ_haa h a a and ν̄^C_L i a λ_aνν ν_L + h.c., with couplings determined by the orthogonal matrices U_ij and V_ij (i, j = 1, 2, 3) that diagonalize the scalar mass matrices given in the Supplemental Material, and by the neutrino mass matrix m_ν in the flavor basis. The complete interactions of the ALP are listed in Table IV of the Supplemental Material.

FIG. 4. The transition probability P(ν_e → ν_µ) as a function of the neutrino energy E. The red and blue lines stand for neutrino oscillations in vacuum and in dense axion stars, respectively. We take m_a = 10^-5 eV, f_a = 10^12 GeV, and v_∆ = 1 MeV for the axion, and M_a^dense = 13.6 M_⊙ and R_a^dense = 45.2 km for the dense axion star [74]. The three-flavor oscillation parameters are taken from Ref. [75].

Given that the SM Higgs can decay into two ALPs (h → aa), the constraint on the invisible Higgs decay from the LHC sets an upper bound on the coupling, λ_haa < 1.536 GeV [73]. We have checked that the coupling predicted by this model always satisfies this constraint.
The interaction of the ALP with active neutrinos can induce a matter effect in neutrino oscillations. Since the ALP is a classical field, the effective potential contributes an effective mass to the active neutrinos and can be diagonalized by the same unitary transformation as in vacuum. In this case the three-flavor neutrino oscillation amplitude can be written as in Eq. (11), where U_αi is a matrix element of the PMNS matrix [76,77], α, β = {e, µ, τ}, i = {1, 2, 3}, and m_i is the mass of the i-th neutrino mass eigenstate. Notice that Eq. (11) is the same as the formula for neutrino oscillation in vacuum, up to the factor in the bracket. We find that it is difficult to probe this matter effect in vacuum for a fixed v_∆, because of the low DM energy density ρ_a and the very small suppression factor V_23. The matter effect induced by this ALP-neutrino interaction becomes important only if the active neutrinos propagate through a dense celestial body, such as an axion star [74,78]. As an illustration, we show in Fig. 4 the neutrino oscillation probability P(ν_e → ν_µ) as a function of the neutrino energy in an axion star, setting ρ_a^dense = 6.97×10^19 g m^-3 [74], which corresponds to an axion star of mass M_a^dense = 13.6 M_⊙ and radius R_a^dense = 45.2 km. In Fig. 4, the matter effect induced by a dense axion star makes the neutrino oscillation spectrum differ from that in vacuum.

FIG. 5. The improved m_W as a function of m_L. Here we set µ = 10^-5 GeV, v_∆ = 1 MeV, v_s = 10^13 GeV, and m_s = 1000 GeV. Three typical values of η are selected for comparison. The dashed and solid lines correspond to the cases η < 0 and η > 0, respectively. The dashed green line and the gray region represent the SM prediction [80] and the recent 2σ bound set by CDF [54], respectively.
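For orientation, the vacuum limit of the three-flavor amplitude (Eq. 11 with the matter factor switched off) can be evaluated directly. The PMNS angles and mass splittings below are representative normal-ordering values, not the fit of Ref. [75]:

```python
import numpy as np

# Standard three-flavor vacuum oscillation probability,
# P(a->b) = |sum_i conj(U[a,i]) U[b,i] exp(-i m_i^2 L / 2E)|^2.
# Parameter values are representative, for illustration only.
th12, th23, th13, dcp = 0.59, 0.84, 0.15, 0.0
s12, c12 = np.sin(th12), np.cos(th12)
s23, c23 = np.sin(th23), np.cos(th23)
s13, c13 = np.sin(th13), np.cos(th13)
e = np.exp(1j * dcp)
U = np.array([
    [c12 * c13, s12 * c13, s13 * np.conj(e)],
    [-s12 * c23 - c12 * s23 * s13 * e, c12 * c23 - s12 * s23 * s13 * e, s23 * c13],
    [s12 * s23 - c12 * c23 * s13 * e, -c12 * s23 - s12 * c23 * s13 * e, c23 * c13],
])
dm2 = np.array([0.0, 7.4e-5, 2.5e-3])   # eV^2, relative to m_1^2

def prob(alpha, beta, L_km, E_GeV):
    # 1.267 converts eV^2 * km / GeV into the half-phase, as usual
    phase = np.exp(-2j * 1.267 * dm2 * L_km / E_GeV)
    amp = np.sum(np.conj(U[alpha]) * U[beta] * phase)
    return abs(amp) ** 2

# unitarity check: probabilities out of nu_e sum to 1
p = [prob(0, b, 1000.0, 1.0) for b in range(3)]
```
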
W mass anomaly.-Now we calculate the deviation of the W-boson mass from the SM prediction at one-loop level within this model. In general, the W-boson mass m_W can be parameterized as [79,80] m_W² = (m_Z²/2) [1 + √(1 − 4πα_em/(√2 G_F m_Z² (1 − ∆r)))], where G_F is the Fermi constant, α_em is the fine-structure constant, and ∆r = ∆α_em − (c_W²/s_W²) ∆ρ_loop + ∆r_rem. The explicit expression of ∆r is given in the Supplemental Material.
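Numerically, the standard on-shell relation m_W²(1 − m_W²/m_Z²) = πα_em/(√2 G_F (1 − ∆r)) behind this kind of parameterization can be solved for m_W. A minimal sketch treating ∆r as an input (in the model it receives the triplet and singlet loop contributions discussed below):

```python
import numpy as np

# Solve the on-shell relation for m_W given Delta_r. Input values are the
# usual electroweak constants; Delta_r is supplied by the loop calculation.
alpha_em = 1.0 / 137.036   # fine-structure constant
G_F = 1.16638e-5           # GeV^-2
m_Z = 91.1876              # GeV

def m_W(delta_r):
    A = 4 * np.pi * alpha_em / (np.sqrt(2) * G_F * m_Z**2 * (1 - delta_r))
    return m_Z * np.sqrt(0.5 * (1 + np.sqrt(1 - A)))

# with an SM-like value delta_r ~ 0.038 this lands near 80.3 GeV;
# a larger delta_r lowers m_W, a smaller one raises it
```
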
Both the Higgs triplet [81-92] and the scalar singlet [68,93] may contribute to ∆r and thus to the W-boson mass. Given that v_s is much larger than v_φ and v_∆, it is reasonable to expect the mixing angles α_2 and α_3 in the CP-even sector to be approximately zero (α_2, α_3 ≃ 0), as the scalar singlet is nearly decoupled from the other scalar fields. The remaining angle α_1 is fixed by the potential parameters; here we take the coupling λ_4 = 0 for simplicity [82,94]. The parameters M_∆² and λ_5, correlated with the splitting of the triplet mass spectrum, are obtained in the limits v_∆ ≪ v_φ and M_∆² ≃ m_A². We denote the lightest of the masses m_H++, m_H+, and m_A as m_L, and show in Fig. 5 the improved m_W as a function of m_L for various values of the mass-splitting parameter η. Three typical values, |η| = (100)² GeV², (150)² GeV², and (200)² GeV², are selected for comparison. The dashed and solid lines correspond to the cases η < 0 and η > 0, respectively. Notice that m_W asymptotically approaches the SM prediction as m_L increases, due to the decoupling of the Higgs triplet. However, when m_L is below 2000 GeV, m_W increases in different ways with decreasing m_L for different η, and some curves reach the range of the CDF measurement [54]. In this case, the CDF anomaly can be explained by taking |η| ∈ [(150)², (200)²] GeV² for m_L ≲ 500 GeV.
Summary.-In this letter we have proposed a new mass generation mechanism for the ALP based on the type-II seesaw mechanism that gives rise to the active-neutrino Majorana masses. The typical oscillation temperature of the ALP shows a cutoff at the critical temperature of the EWSB, which is a distinctive signature of this kind of ALP. Although the ALP does not couple to the diphoton, it might be detected in future neutrino oscillation experiments via the matter effect induced by the ALP-neutrino interactions. Finally, we have shown that the W-mass anomaly observed by the CDF collaboration can be explained by the TeV-scale type-II seesaw. All these observations make three different kinds of new physics phenomena tightly connected with each other in a single model.
Supplemental Material
Wei Chao, Mingjie Jin, Hai-Jun Li, and Ying-Quan Peng

The Supplemental Material is organized as follows. We first present the model in detail, then give the analytical solution of the EOM for the ALP, calculate the branching fraction of the decay process h → aa, and discuss the calculation of the W-boson mass anomaly. Finally, the gauge-scalar interactions, the ALP interactions, and the gauge-boson self-energies are listed.
The Singlet-Triplet Model
The relevant Lagrangian is given by where the covariant derivatives are defined as The general form of the scalar potential V (S, Φ, ∆) is given by and L Yukawa is the Yukawa interaction of left-handed lepton doublets [48][49][50][51][52][53], where y αβ denotes the 3 × 3 complex symmetric matrix, and α L = (ν α L , e α L ) T is the left-handed lepton doublet with α = {e, µ, τ }.From Eq. (S18) we know that ∆ carries a lepton number charge of −2.The scalar fields Φ, ∆, and S can be parametrized as where GeV) 2 .The v φ , v ∆ , and v s are the VEVs of the Higgs doublet, the Higgs triplet, and the scalar singlet, respectively.After the Higgs triplet acquires a VEV v ∆ , Eq. (S18) gives rise to the mass matrix of active neutrinos, At the tree level, the W boson and the Z boson obtain masses through Higgs mechanism, The electroweak ρ parameter can slightly deviate from 1, i.e. [95], Actually, the experimental measurement of the ρ parameter gives ρ exp = 1.0002 ± 0.0009 [73], which implies that v ∆ 7 GeV [83] according to Eq. (S22).The physical scalar sectors are obtained by rotating the weak eigenstates of the scalar fields with the following orthogonal transformations where the expressions of the orthogonal matrices R(β), V(β 1 , β 2 , β 3 ), and U(α 1 , α 2 , α 3 ) can be found in Ref. 
[96]. The mixing angles β and β i , as well as the masses of the charged and CP-odd physical states, follow from the scalar potential. Given that the value of v s is much larger than those of v φ and v ∆ , it is reasonable to expect that the mixing angles α 2 and α 3 in the CP-even sector are approximately zero (α 2 , α 3 ≈ 0), because the scalar singlet is nearly decoupled from the other scalar fields. The remaining angle α 1 can then be written down explicitly. Here we take λ 4 = 0 [82,94]; λ 5 is correlated with the splitting of the triplet mass spectrum. In this case (α 2 , α 3 ≈ 0), the mass eigenvalues of the CP-even physical states can be obtained. In the analytical solution of the ALP equation of motion, J n and Y n are the Bessel functions of order n. By taking t i → 0 [47] and t i = t C (the critical time when the ALP starts to oscillate at T C ), the analytical evolution is shown in Fig. 2 as the red curves, for comparison with the numerical simulations (blue curves).
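For orientation, the standard tree-level type-II seesaw expressions for the quantities discussed above can be written as follows. These are textbook relations, not reconstructed from this paper's (elided) equations, and normalization conventions for v_∆ differ between references, so factors of √2 may vary:

```latex
% Standard tree-level type-II seesaw relations (reference sketch only;
% v_\Delta normalization conventions differ between papers):
\begin{align}
  (m_\nu)_{\alpha\beta} &\propto y_{\alpha\beta}\, v_\Delta, \\
  m_W^2 &= \frac{g^2}{4}\left(v_\phi^2 + 2 v_\Delta^2\right), \qquad
  m_Z^2 = \frac{g^2 + g'^2}{4}\left(v_\phi^2 + 4 v_\Delta^2\right), \\
  \rho &= \frac{m_W^2}{m_Z^2 \cos^2\theta_w}
        = \frac{v_\phi^2 + 2 v_\Delta^2}{v_\phi^2 + 4 v_\Delta^2}
        \simeq 1 - \frac{2 v_\Delta^2}{v_\phi^2}.
\end{align}
```

The last relation makes explicit why ρ exp ≈ 1 constrains the triplet VEV v ∆ to be small compared with v φ .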
The decay of SM Higgs into ALPs
The SM Higgs can decay into ALPs, where the relevant coupling is listed in Table IV; the decay width is estimated accordingly, and we neglect the v ∆ and v φ terms. The total width of the SM Higgs is Γ h = 3.2 MeV [73]. As discussed in Fig. 5, we take µ = 10 −5 GeV, v ∆ = 1 MeV, v s = 10 13 GeV, and m s = 1000 GeV. The branching fraction of the decay h → aa as a function of m L is then shown in Fig. S1. Note that the branching fraction is of order O(10 −78 ), which implies that this decay process is almost impossible to detect with current experiments.
One-loop radiative corrections to W -boson mass
In this section, we calculate the W-boson mass at the one-loop level. First of all, the expression for m W is given in Refs. [79,80], where ∆r is defined with [81-83] the explicit expressions for the gauge boson self-energies Π W W , Π γγ , and Π Zγ listed in Sec. . The δ VB term denotes the contribution from the vertex and box radiative corrections, which are calculated in Refs. [97,98]. Other input experimental values related to the electroweak parameters are taken from Refs. [54,73]. To better show the correlation between m W and the CP-even Higgs mixing angles α i = {α 1 , α 2 , α 3 }, we show scatter plots of m W versus sin α i in Fig. S2, with the mixing angles α i drawn randomly in the range (−0.4, 0.4). For the other numerical inputs, we set m L = 300 GeV, η = −200 2 GeV 2 , µ = 10 −5 GeV, v ∆ = 1 MeV, v s = 10 13 GeV, and m s = 1000 GeV. The parameter space of m W allowed by the CDF measurement [54] is displayed as the arched areas in both panels. However, sin α 2 and sin α 3 show significantly different distributions. In the left panel, the allowed values of sin α 2 have a relatively uniform distribution in the range (−0.4, 0.4), while the right panel shows that the allowed points for sin α 3 are almost all smaller than 0.1, which implies that sin α 2 is largely irrelevant to the corrected m W .
The gauge-scalar interactions
In order to express the weak eigenstates on the right side of Eq. (S23) in terms of the mass eigenstates on the left side, it is useful to obtain the transposed forms of the orthogonal matrices, which are listed in Eq. (S40). In the following, we list the vertices and coefficients for the corresponding interactions in Tables I, II, and III. For simplicity, we use the abbreviations s β (sin β), c β (cos β), ŝW (sin θ w ), and ĉW (cos θ w ) for the mixing angle β and the Weinberg angle θ w , respectively. In addition, the matrix elements of the matrices U (α 1 , α 2 , α 3 ) and V (β 1 , β 2 , β 3 ) are denoted as U ij and V ij (i, j = 1, 2, 3), respectively.
TABLE II. The Higgs-Higgs-gauge type interactions and the corresponding coefficients.
The ALP-scalar/neutrino interactions
We list the vertices and the corresponding coefficients for ALP-Higgs and ALP-neutrino interactions in Table IV.
The gauge boson self-energies
The analytic expressions for the gauge boson self-energies are listed as follows. Here we use the Passarino-Veltman functions defined in Ref. [99]. First, the fermion-loop (F) contributions to all the two-point correlation functions are
FIG. 1. The evolution of the energy scales for the ALP mass m a (blue line) and the Hubble parameter (red line) as a function of time. Three cases of m a are shown for comparison. The green intersections represent the temperatures at which the oscillation begins. The vertical dashed line represents the critical temperature (T C ).
FIG. 2. The analytical (solid red) and numerical (dashed blue) evolution of θ as a function of T for two benchmark ALP masses m a < m aC (m a = 2 × 10 −8 eV) and m a > m aC (m a = 2 × 10 −4 eV). The vertical dashed lines correspond to the oscillation temperatures.
FIG. 3. The relic density Ω a h 2 as a function of m a for various f a . The vertical dotted line represents the critical mass (m aC ). The initial misalignment angle is taken as θ(t i ) = π/ √ 3. The gray region is excluded by the overabundance of DM.
FIG. 5. The m W as a function of the lightest triplet-like Higgs mass m L . Here we set µ = 10 −5 GeV, v ∆ = 1 MeV, v s = 10 13 GeV, and m s = 1000 GeV. Three typical values of η are selected for comparison. The dashed and solid lines correspond to the cases of η < 0 and η > 0, respectively. The dashed green line and the gray region represent the SM prediction [80] and the recent 2σ bound set by CDF [54], respectively.
FIG. S1. The branching fraction of h → aa as a function of m L .
TABLE I. The Higgs-gauge-gauge type interactions and the corresponding coefficients.
TABLE III. The four-point interactions and the corresponding coefficients.
TABLE IV. The ALP-Higgs/neutrino interactions and the corresponding coefficients.
"year": 2022,
"sha1": "b0552312f7573d99498c5337945a90a39c8d5195",
"oa_license": "CCBY",
"oa_url": "http://link.aps.org/pdf/10.1103/PhysRevD.109.115027",
"oa_status": "HYBRID",
"pdf_src": "ArXiv",
"pdf_hash": "b0552312f7573d99498c5337945a90a39c8d5195",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Monte Carlo Algorithm for Simulating Reversible Aggregation of Multisite Particles
We present an efficient and exact Monte Carlo algorithm to simulate reversible aggregation of particles with dedicated binding sites. The method introduces a novel data structure, the dynamic bond tree, to record clusters and sequences of bond formations. The algorithm achieves a constant time cost for processing cluster association and a cost between $\mathcal{O}(\log M)$ and $\mathcal{O}(M)$ for processing bond dissociation in clusters with $M$ bonds. The algorithm is statistically exact and reproduces results obtained by the standard method. We applied the method to simulate a trivalent-ligand, bivalent-receptor clustering system and obtained an average scaling of $\mathcal{O}(M^{0.45})$ for processing bond dissociation in acyclic aggregation, compared to a linear scaling with the cluster size in standard methods. The algorithm also demands substantially less memory than the conventional method.
I. INTRODUCTION
Reversible aggregation or self-assembly of particles with multiple interactive sites is of fundamental importance to diverse processes in physical and living systems including aggregation of colloidal particles [1] and proteins [2], synthesis of supramolecules in polymer science [3], and self-assembly of patchy particles such as nanoparticles [4,5] and synthetic biomolecules [6] in material sciences [7]. Reversible aggregation was traditionally studied using the generalized Smoluchowski equation [8,9] that requires one to develop kernel functions for cluster aggregation and fragmentation to obtain the kinetics of the cluster size distribution. Proper kernel functions can be analytically characterized often under restrictive assumptions of particle interactions. For acyclic aggregation of multisite particles that forms loopless clusters, Wertheim's thermodynamic perturbation theory [10] and Flory-Stockmayer theory [11] can predict equilibrium properties for simple systems. To study more general systems, Monte Carlo simulations are indispensable to provide new insights into the kinetics and equilibrium properties of the aggregation.
Reversible aggregation involves two principal types of reaction processes, bond formation and breaking. The balance of these two competing processes allows an aggregation system to reach an equilibrium after a transient phase. In the standard site-based simulation algorithm, clusters are stored as graphs representing the connectivity between particles (see Fig. 1 left panel for an example of multivalent ligand-receptor interaction system). To resolve information such as composition and topology of clusters, graph traversals by depth-first (or breadth-first) search are routinely applied, which are often computationally costly.
To simulate irreversible aggregation, which ignores bond breaking, a highly efficient algorithm, the classic weighted union-find with path compression [12], can identify the cluster membership of binding sites and amalgamate two clusters in near-constant time, as proved by Tarjan [13] and demonstrated in computing site or bond percolation models [14]. The algorithm employs a tree-based data structure to index the cluster membership of individual sites. However, this strategy cannot be readily adopted to simulate reversible aggregation, because bond dissociation requires time-consuming reorganization of the tree-based data structures used in the algorithm.
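For reference, the weighted union-find with path compression mentioned above can be sketched as follows. This is a standard textbook implementation of the algorithm of Ref. [12], not the DBT method of this paper:

```python
class UnionFind:
    """Weighted union-find with path compression (standard sketch).
    find() and union() run in near-constant amortized time."""

    def __init__(self, n):
        self.parent = list(range(n))  # each site starts as its own root
        self.size = [1] * n           # cluster sizes, valid at roots

    def find(self, x):
        root = x
        while self.parent[root] != root:
            root = self.parent[root]
        # Path compression: point every node on the path at the root.
        while self.parent[x] != root:
            self.parent[x], x = root, self.parent[x]
        return root

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return                    # already in the same cluster
        # Weighted union: attach the smaller tree under the larger.
        if self.size[rx] < self.size[ry]:
            rx, ry = ry, rx
        self.parent[ry] = rx
        self.size[rx] += self.size[ry]

uf = UnionFind(5)
uf.union(0, 1)
uf.union(1, 2)
assert uf.find(0) == uf.find(2)   # sites 0 and 2 share a cluster
assert uf.find(0) != uf.find(3)   # site 3 is still free
```

Splitting a cluster would require undoing these parent pointers, which is precisely the operation this structure cannot support efficiently.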
To simulate reversible aggregation, the standard site-based algorithm labels each individual site to identify its cluster membership. Bond formation and dissociation require site relabelings whenever a reactive event is sampled. In an event of bond formation between two sites, one first determines whether both sites belong to the same cluster by comparing their labels; the sites belong to two separate clusters if the labels differ. In the latter case, one needs to relabel all sites in one cluster with the label of the other, which is done by a graph traversal of the cluster to be relabeled. Because cluster sizes are known by simple bookkeeping, one can always relabel sites in the smaller cluster with the label assigned to the larger one to minimize the cost. This heuristic usually improves performance substantially over relabeling an arbitrary subcluster. Unfortunately, weighted relabeling is infeasible for processing dissociation of a cluster into two smaller ones, because the sizes of the two resulting subclusters are not known a priori (bookkeeping such information is nontrivial and expensive). Therefore, upon identification of a bond to break, one systematically relabels, by graph traversal, an arbitrary subcluster to which the bond connects. For a cyclic cluster with loops, a graph traversal also identifies whether the dissociating bond resides in a loop, in order to decide whether site relabeling is needed. The average time complexity of a cluster traversal is O(N + M), scaled by the size of the cluster, measured here by the number of particles N and the number of bonds M in the cluster. Clearly, the standard algorithm becomes computationally intensive, in particular for simulating high density systems that contain giant clusters recorded by large connectivity graphs.
Here, we present an efficient kinetic Monte Carlo algorithm that amalgamates two clusters in O(N C ) time, where N C is the number of clusters, and splits a cluster in time between O(log M ) and O(M ). Unlike site-based methods, the main idea behind our algorithm is the observation that explicit cluster graphs are usually not required in a simulation. Instead of using connectivity graphs, we use a more efficient data structure, namely the dynamic bond tree (DBT), to track bonds and clusters without recording and updating the actual connections between particle sites. As a considerable advantage, the algorithm replaces expensive traversals of connectivity graphs with much more efficient updates of DBTs. The algorithm is numerically exact in generating observable quantities such as the cluster size distribution, the average cluster size, and the number of clusters. The algorithm is directly applicable to simulating aggregation that allows the formation of both acyclic and cyclic clusters. If the topologies of clusters are of interest, connections among sites can be recorded in parallel during a simulation; alternatively, ensembles of cluster topologies can be mapped out stochastically from the corresponding DBTs by postprocessing.
The paper is organized as follows. In Section II, we elaborate the details about the data structure in our algorithm. Using acyclic aggregation as an example, we explain how to compute the two basic events of bond formation and dissociation using the data structure. The complete algorithm is summarized by the end of the section and the adaptation of the algorithm to simulating cyclic aggregation is explained. In Section III, we evaluate the performance of our algorithm by applying it to simulate a multivalent ligand-receptor binding model and compare it to the site-based graph traversal algorithm.
II. THE ALGORITHM
Data structure for storing clusters -Dynamic bond tree. To simplify explaining the algorithm without loss of generality, we consider a system that contains a homogeneous population of particles, each of which is decorated with one or more symmetric surface patches (binding sites). We assume that a single binding site can only sustain at most one bond. As demonstrated below in Section III (Application), the algorithm can be readily extended to a system with a heterogeneous population of particles with non-identical sites that can bind to complementary sites on other particles. The basic data structure in our algorithm is the dynamic bond tree (DBT) used to store every cluster of particles. Fig. 1 illustrates structures of DBTs for both acyclic and cyclic clusters in an example system of multivalent ligand receptor binding, which will be later used as an application to demonstrate the DBT-based simulation algorithm.
Each multiparticle cluster is identified by the root node of the corresponding DBT. A leaf node in a DBT represents a single particle in the cluster, whereas a non-leaf node including the root node records a site-site bond. Each non-leaf node has either one or two child nodes, depending on whether the bond is formed between intracluster sites or intercluster sites, respectively. A node with two children (e.g., non-leaf nodes in DBTs in Fig. 1 except for node 6) indicates that the bond was formed by an association between a pair of sites that reside on two previously separate clusters represented by the two child nodes, whereas a node with a single child (e.g., node 6 in Fig. 1 middle panel) indicates that the bond was formed by an association between a pair of sites that reside on a same cluster represented by the only child node. By these conventions, a cluster is cyclic if and only if the corresponding DBT contains at least one non-leaf node that has a single child. Otherwise, a cluster is acyclic.
Unlike the standard cluster connectivity graph, a DBT does not require keeping track of individual binding sites. Below, we use acyclic aggregation as an example to describe how to process bond formation and breaking using DBT structures; later we explain that processing cyclic aggregation requires only a slight adaptation. Bond formation (for acyclic clusters). To process a bond formation, two clusters (one or both of which could be free particles) are first sampled according to their joint probability of contributing binding sites. The probability for a cluster c to contribute a binding site can be related to its number of free sites s c through a function g(s c ), whose value is assigned to each cluster as a weight (see Fig. 1 for example weights assigned to DBT nodes). In the simplest form, g(s c ) can be taken to be proportional to s c , but we note that in general the function g(s c ) may assume different forms in different systems or models. For example, consider a cluster of spherical volume with s c free binding sites. Due to the effect of steric hindrance, one may assume that only free sites near the cluster surface can form a bond with a site near the surface of another cluster. In this model, assuming free sites are homogeneously distributed within the cluster volume and on its surface, one can show that g(s c ) ∼ s c 2/3 is a good approximation. After the two binding clusters are determined, a new node z is created as the root node of the DBT that will store the resulting cluster. The root nodes, x and y, of the DBTs of the two binding clusters become the two child nodes (representing two subclusters) of the root node z. A weight of value g(s z ) is then assigned to z. The number of free sites in the newly formed cluster is s z = s x + s y − 2, where the adjustment by −2 is due to the consumption of two sites to form bond z, one from each subcluster.
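As an illustrative sketch (in Python, with hypothetical names, and assuming the simplest weight function g(s c ) = s c ), a DBT node and the constant-time bond-formation step described above can be written as:

```python
def g(s):
    """Assumed weight function: the simplest choice g(s_c) = s_c."""
    return s

class Node:
    """DBT node: a leaf (no children) is a particle; a non-leaf node
    is a bond. free_sites stores s_c for the subcluster rooted here."""

    def __init__(self, free_sites, children=()):
        self.free_sites = free_sites
        self.weight = g(free_sites)   # cached g(s_c)
        self.children = list(children)
        self.parent = None
        for c in self.children:
            c.parent = self

def form_bond(x, y):
    """Intercluster bond between the clusters rooted at x and y.
    O(1): no graph traversal and no site relabeling."""
    return Node(x.free_sites + y.free_sites - 2, (x, y))

# Two trivalent particles associate into a dimer:
a, b = Node(3), Node(3)
z = form_bond(a, b)
assert z.free_sites == 4 and z.children == [a, b]
```

Note that the new root only stores the aggregate free-site count s z = s x + s y − 2; which particular sites formed the bond is never recorded.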
Obviously, the process of constructing a DBT manifests the hierarchical nature of cluster aggregation, in which a bond node at a lower level of the DBT was formed earlier than one at an upper level. In fact, this hierarchical structure of DBTs underlies the performance gain of our algorithm. Unlike the standard method, this procedure of merging two clusters requires neither cluster membership checking of trial binding sites nor systematic site relabeling, and thus has merely a constant time complexity. We note that locating two clusters to bind demands searching over the entire array of clusters. Therefore, the overall complexity of bond association scales linearly with the number of clusters (i.e., O(N C ), where N C is the total number of clusters). However, we will show in the example TLBR system below that this cost is in most cases modest, if not negligible. In particular, when a system reaches the highly aggregated regime, processing bond formation has a near-constant time cost because the number of clusters N C remains small and grows very slowly with the number of particles [15].
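The O(N C ) search for a binding cluster amounts to a weighted linear scan over the cluster list. A minimal sketch (hypothetical names; the weight is assumed to be g(s c ) = s c ):

```python
import random

class Cluster:
    """Minimal stand-in for a DBT root that caches its weight g(s_c)."""

    def __init__(self, free_sites):
        self.free_sites = free_sites
        self.weight = free_sites      # assumed g(s_c) = s_c

def pick_cluster(clusters, rng=random.random):
    """Sample one cluster with probability proportional to its weight
    by a linear scan -- the O(N_C) search over the cluster list."""
    total = sum(c.weight for c in clusters)
    u, acc = rng() * total, 0.0
    for c in clusters:
        acc += c.weight
        if u < acc:
            return c
    return clusters[-1]               # guard against round-off
```

The total weight can also be bookkept incrementally across events, so only the scan itself costs O(N C ) per association.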
Bond dissociation (for acyclic clusters). To process a bond dissociation, one first samples a bond according to its probability to dissociate. The selected bond corresponds to a non-leaf node x in a DBT identified by its root node z. In an acyclic cluster, removal of node x ultimately splits the DBT into two smaller DBTs (note that the cluster could also dissociate into one free particle and a cluster, or into two free particles). If x happens to be the root node z, the procedure is trivial.
The two child nodes of z, l and r, simply become the root nodes of the two separate DBTs. Otherwise, the final two smaller DBTs are determined by a series of probabilistic decisions, which constitutes a key ingredient of our algorithm. The bond dissociation results in the removal of node x and separates the subcluster into two parts represented by nodes l and r. Note that the subcluster represented by node x contributed a site to form the bond at its parent node, p, with the other child node of p. Therefore, we need to decide at this step which subcluster, l or r, provided the site to form bond p and thus will connect to p as a child node. This is done probabilistically. We may assume that the probability of choosing either l or r is proportional to the weight function of the number of free sites contained in the subcluster before bond p was actually formed. For instance, the number of free sites in subcluster l is s l − 1 (in subcluster r, it is s r − 1), where the adjustment −1 accounts for the consumption of one site to form bond x. The probability of choosing l to connect to p can then be calculated as g(s l − 1)/(g(s l − 1) + g(s r − 1)). Without loss of generality, we assume that node l is selected and subcluster r dissociates from cluster p. We then update the number of free sites in p as s p ← s p − (s r − 1) and recalculate the weight g(s p ). Applying the same operation, we further decide which of node r and the updated p connects to the parent node of p, and so on. This procedure iterates up to the root node z and in the end obtains two separate DBTs. Figure 1 illustrates how a bond dissociation splits a DBT into two smaller ones in an example ligand-receptor binding system, for both acyclic and cyclic clusters. The total number of iterations required to split a DBT equals the depth of the DBT from the dissociating node x to the root node.
Each iteration requires generating a random deviate drawn uniformly from the interval (0, 1) and updating the weight of the parent node one level up. This bond-breaking procedure is highly efficient, with a cost between O(log M), when a DBT is well balanced, and O(M), when a DBT forms a linear cascade due to sequential attachment of single particles. Both scenarios are rare and unlikely to persist, because stochastic bond association and dissociation prevent the formation of perpetual linear DBTs or completely balanced DBTs.
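The probabilistic walk to the root can be sketched as follows. This is an illustrative Python sketch with hypothetical names: the free-site bookkeeping follows the update rule s p ← s p − (s r − 1) stated above, and we again assume g(s c ) = s c .

```python
import random

def g(s):
    """Assumed weight function g(s_c) = s_c (clipped at zero)."""
    return max(s, 0)

class Node:
    def __init__(self, free_sites, children=()):
        self.free_sites = free_sites
        self.children = list(children)
        self.parent = None
        for c in self.children:
            c.parent = self

def leaves(n):
    """All particle (leaf) nodes under n."""
    return [n] if not n.children else [x for c in n.children for x in leaves(c)]

def break_bond(x):
    """Split an acyclic DBT at bond node x by the probabilistic walk to
    the root described in the text; returns the two new root nodes."""
    a, b = x.children                  # the two subclusters joined by x
    a.parent = b.parent = None
    p, slot = x.parent, x              # slot: the child position to refill
    if p is None:                      # x was the root: trivial split
        return a, b
    while p is not None:
        # Which subcluster contributed the site that formed bond p?
        wa, wb = g(a.free_sites - 1), g(b.free_sites - 1)
        if wa + wb == 0:
            wa = wb = 1                # degenerate case: choose uniformly
        if random.random() >= wa / (wa + wb):
            a, b = b, a                # b wins: swap attached/detached roles
        # a stays attached under p; b is (for now) the detached piece.
        p.children = [a if c is slot else c for c in p.children]
        a.parent = p
        p.free_sites -= b.free_sites - 1   # s_p <- s_p - (s_b - 1)
        b.parent = None
        # One level up: decide between the detached b and the updated p.
        slot, a, p = p, p, p.parent
    return a, b                        # a contains the old root; b detached
```

Each loop iteration costs O(1), so the total cost is the depth of the walk from x to the root, matching the O(log M) to O(M) range quoted above.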
The algorithm. We now summarize the algorithm for reversible acyclic aggregation of multisite particles. We note that the memory cost of this algorithm is also substantially reduced in comparison to the site-based algorithm that uses connectivity graphs. The site-based algorithm demands memory to track connections between individual particles and sites, which scales with the total number of sites in a system. The current algorithm has a memory cost scaled by the maximum number of bonds that can potentially form in a system, which is usually much less than the total number of sites. As for other essential memory requirements for simulation, both algorithms maintain lists of bonds and clusters.
The current algorithm is described as follows: 1. Initially, sites in every particle are free with no bond formed. The system in the beginning does not have bonds and clusters. Both bond list and cluster list are empty (the cluster list will contain root nodes of DBTs). Initialize the probabilities for bond association and dissociation events.
2. Sample a bond association or a dissociation event based on the probabilities of the two event types at the current step.
3. If the sampled event is a bond association, create a new bond node as a root node for the DBT of the new cluster, connect the root nodes of the two merging DBTs to the new node, remove the two merging clusters from the cluster list and insert the new cluster into the list. Since a DBT is identified by its root node, one can just remove one root node of the two merging clusters from the cluster list and replace the other with the root node of the new cluster. Insert the new bond into the bond list. Go to Step 5.
4. If the sampled event is a bond dissociation, identify an individual bond to break by searching over the bond list. Split the DBT to which the chosen bond belongs into two smaller DBTs. Replace the root node of the splitting cluster in the cluster list with one root node of the two new smaller clusters and insert the other root node into the list. Remove the dissociated bond from the bond list.
5. Update the probabilities for bond association and dissociation events.
6. Repeat Step 2 until the simulation stops.
The above procedure does not explicitly include tracking time evolution in simulation, which can be included as shown in our application below. A simulation of a system starts out with all free particles without bonds, and after a transient dynamics the system will eventually relax to its equilibrium where the rate of bond formation is balanced by the rate of bond breaking. As we will show, this algorithm may provide a substantial speedup for processing bond dissociations in high density clusters.
The procedure for simulating cyclic aggregation, which allows loop formation in clusters, is largely the same as described above, with some modifications to handle cyclic bonds. As mentioned in the previous section, processing a cyclic bond formation is trivial: whenever an intracluster site pair forms a bond, a new node is created with only one subtree, corresponding to the same cluster contributing both binding sites. For a cyclic cluster, breaking a bond may or may not split the corresponding DBT into two smaller ones. If the breaking bond x happens to have only a single child, we connect this child node directly to the parent node of x and update the weights of all subsequent parent nodes up to the root node of the DBT. In this case, because bond x is part of a loop in the cluster, its dissociation does not split the cluster into two. If node x has two child nodes, the procedure is identical to that of splitting an acyclic DBT until the iteration meets an upper-level parent node p that has only one child. In such a case, we need to determine how the two subclusters contributed the pair of sites that formed the bond at node p. There are two possibilities, to be decided probabilistically: (1) one of the two subclusters contributed both sites, in which case this subcluster connects to p as a single child node and the other subcluster remains separate for further processing up the DBT; or (2) each subcluster contributed one site, in which case the two subclusters connect to node p as two child nodes, and no further probabilistic decisions are needed except for updating the weights of all the upper-level parent nodes up to the root node. Figure 1 illustrates how to process cluster aggregation using DBTs for an example system in which trivalent ligands with three binding sites aggregate with bivalent receptors with two binding sites (the TLBR model).
In particular, we note that an equivalence class of DBTs exists for each cluster with a distinct connectivity, and vice versa. It is possible, by probabilistic mapping, to systematically convert a DBT into an ensemble of cluster connectivity graphs with the same numbers of particles and bonds for inspection. For instance, Fig. 1 shows that cluster I may be represented as DBT I or II depending on the sequence of bond formation. The stochasticity in breaking a bond in a cluster can also result in diverse fragmentation scenarios (Fig. 1, right panel).
III. APPLICATION
To demonstrate our algorithm and compare its performance to the conventional site-based algorithm, in this section we specialize to simulating aggregation in the TLBR system. The system is representative of aggregation of a mixture of heterogeneous particles with multiple complementary binding sites and thus serves as a benchmark for comparing the current method with the conventional one that uses graph traversals. Acyclic TLBR aggregation was originally studied analytically by Goldstein and Perelson [16], who used an equilibrium model to obtain the cluster size distribution and showed the existence of a sol-gel phase transition within a certain range of parameter values. Results from a recent Monte Carlo simulation study [17] showed agreement with Goldstein and Perelson's equilibrium theory.
In the TLBR system, a population of extracellular ligands, each of which has three identical binding sites, interact with a population of receptors distributed on the cell surface, each of which has two identical binding sites. A bond can be formed only between a ligand site and a receptor site. The TLBR system distinguishes two kinds of bond formations: (1) Free ligands are first recruited to cell-surface receptors and (2) bound ligands with free binding sites can subsequently crosslink receptors and induce receptor aggregation. A ligand-receptor bond can dissociate spontaneously. We apply the law of mass action to account for the rates of bond association and dissociation. Here, we simply assume that the probability for a cluster to contribute a receptor (ligand) site is proportional to the number of free receptor (ligand) sites in the cluster, i.e., g(s c ) ≡ s c .
The system involves three rate processes parameterized by different rate constants: (1) free ligands precipitating to bind cell-surface receptors with a rate constant k + . The rate can be calculated as: where v l and v r are the valences (numbers of binding sites) of the ligand and the receptor molecule, respectively, F L is the number of free ligands in solution, and N R and N B are the numbers of total receptors and bonds, respectively.
(2) Receptor crosslinking by ligands already bound to receptors, with a rate constant k ++ . The rate is calculated as: where φ is a parameter that characterizes the average probability of an intracluster site pair forming a bond and takes a value within the interval [0, 1]. The aggregation becomes acyclic when φ = 0. The first term accounts for the total product of free ligand sites and receptor sites on the cell surface, and the term (1 − φ)Z accounts for a reduction by the product of intracluster ligand and receptor sites, adjusted by the parameter φ. The quantity Z is the sum of intracluster site combinations over all N C clusters, Z = Σ i l i r i , where l i and r i are the numbers of free ligand and receptor sites in cluster i, respectively. In practice, since each event affects only a small number of clusters, Z can be calculated after each event by iterative updates to avoid summing over the entire array of clusters [15]. (3) Ligand-receptor bond dissociation with a rate constant k off . The rate is proportional to the number of bonds and is calculated as: Evaluation of the rates for the above processes takes a constant cost per event if quantities such as F L and N B are bookkept during simulation. The simulation follows the typical procedure of kinetic Monte Carlo or the stochastic simulation algorithm [18-20]. At the start of a simulation, all ligand and receptor sites are free, with no bond formed. For each iteration, to record the lapse of time, we first determine the waiting time τ for the next event, which is sampled from an exponential distribution with mean waiting time τ = 1/r tot , where r tot = r 1 + r 2 + r 3 . We then select the rate process that fires the next reaction event; the probability of a process being selected is proportional to its rate. Finally, we update the configuration of the system and recalculate the reaction rates.
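The waiting-time and event-selection step described above is the standard Gillespie procedure; a minimal sketch (with a hypothetical function name):

```python
import math
import random

def kmc_step(rates, rng=random.random):
    """One kinetic Monte Carlo (Gillespie) step: draw the exponential
    waiting time tau with mean 1/r_tot, then pick the process that
    fires next with probability proportional to its rate."""
    r_tot = sum(rates)
    tau = -math.log(rng()) / r_tot       # exponential waiting time
    u, acc = rng() * r_tot, 0.0
    for i, r in enumerate(rates):
        acc += r
        if u < acc:
            return tau, i
    return tau, len(rates) - 1           # guard against round-off

tau, i = kmc_step([1.0, 2.0, 3.0])       # rates r_1, r_2, r_3
assert tau > 0 and 0 <= i < 3
```

Here `rates` would hold the three TLBR process rates r 1 , r 2 , and r 3 , recomputed (or incrementally updated) after each event.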
Figure 2 shows that the DBT depth in acyclic aggregation grows very slowly with the cluster size. The growth of the DBT depth can be fit to a monomial function of the number of bonds, M 0.45 . Cyclic aggregation exhibits a steeper growth of the DBT depth with the cluster size, because forming intracluster bonds increases the DBT depth on top of an acyclic cluster with the same number of receptors and ligands. The deepest cyclic DBTs, which correspond to the largest clusters, on average have a depth of only about one tenth (about 4,000 vs. 50,000) of the cluster size measured by the number of particles and bonds in the cluster. For the cyclic aggregation shown in Fig. 2, we allow each free ligand-receptor site pair (both intracluster and intercluster pairs) an equal probability to form a bond. Note that this assumption overestimates the probability of both intracluster and intercluster bond formation, because geometric constraints may prohibit interactions between certain intracluster site pairs [21]. For more realistic models, one can use a different function g(s) to account for the probability of a cluster providing a binding site. Upon each association event, a trial intracluster ligand-receptor site pair is accepted to form a bond with probability φ. Here, by setting φ = 1, we intend to present a worst-case scenario for cyclic aggregation in terms of the average DBT depth, and we expect that the performance of simulating any intermediate cyclic aggregation model (0 < φ < 1) of the TLBR system will lie between this extreme model and that of acyclic aggregation (φ = 0).
Simulation results verified that for both acyclic and cyclic aggregation, our algorithm is statistically identical to the site-based graph traversal method in obtaining the average cluster size and the number of clusters N C under varying dissociation rate constant k off (see Fig. 3(a)). The cluster size is measured by the number of receptors in a cluster (not including free receptors). The average cluster size is given by: where x n is the number of clusters of size n and F R is the number of free receptors. The results show that cyclic aggregation produces fewer clusters and a larger average cluster size than acyclic aggregation within the middle range of k off (between 10 −4 s −1 and 10 −1 s −1 ), and the two types of aggregation converge in both the high density region and the weakly aggregated region. The number of clusters N C reaches a maximum of about 3000 at k off = 0.4 s −1 , which is much lower than the total number of receptors N R , and falls sharply as the average cluster size reaches its maximum.
To simulate acyclic aggregation, rejection sampling can be employed to enforce the loopless condition required for receptor crosslinking. Upon each crosslinking event, the rejection sampling in the site-based algorithm randomly picks a pair of sites. The algorithm rejects the trial sites if the two sites are found in the same cluster; otherwise, the sites are accepted to form a bond. In contrast, the current algorithm using DBTs processes crosslinking by picking two clusters x and y for the ligand and receptor sites, respectively, based on the number of free binding sites in the clusters. The event is rejected if clusters x and y are identical (x = y). However, rejections can slow down a simulation when the system has a high-density cluster that contains a large number of particles. One can measure the extent of rejection sampling using a rejection ratio, θ, defined as the probability that a pair of sampled sites is rejected for binding. For example, the instantaneous rejection ratio can be calculated for the acyclic TLBR system as θ = k_++ Z/r_tot. In practice, we obtain the average rejection ratio in the steady state as the number of rejected events normalized by the total number of events, including null events. As shown in Fig. 3(b), for an algorithm using either site-based connectivity graphs or DBTs, the efficiency of simulation decreases when rejected samples become dominant in the rejection sampling (e.g., θ > 0.9).
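The site-based rejection step can be sketched with a union-find structure standing in for the cluster-connectivity bookkeeping (an assumption on our part; the paper's site-based method uses connectivity graphs, and all names here are illustrative):

```python
import random

class UnionFind:
    """Tracks cluster membership of particles."""
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, i):
        while self.parent[i] != i:
            self.parent[i] = self.parent[self.parent[i]]  # path halving
            i = self.parent[i]
        return i
    def union(self, i, j):
        self.parent[self.find(i)] = self.find(j)

def try_crosslink(uf, free_ligand_sites, free_receptor_sites, rng=random):
    """Pick a random free ligand/receptor site pair; reject it if both
    particles already belong to the same cluster (loopless condition)."""
    lig_particle, _ = rng.choice(free_ligand_sites)
    rec_particle, _ = rng.choice(free_receptor_sites)
    if uf.find(lig_particle) == uf.find(rec_particle):
        return False                    # rejected: intracluster pair
    uf.union(lig_particle, rec_particle)
    return True                         # bond accepted

uf = UnionFind(4)
uf.union(0, 1)  # particles 0 and 1 already form one cluster
assert try_crosslink(uf, [(0, 0)], [(1, 0)]) is False  # same cluster
assert try_crosslink(uf, [(0, 0)], [(2, 0)]) is True   # spans two clusters
```

The rejection ratio discussed in the text is then simply the fraction of `try_crosslink` calls returning `False` over a long run.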
To simulate an acyclic aggregation system with high-density clusters, rejection-free sampling [15] of binding sites is required to overcome the bottleneck caused by the high rejection ratio of the rejection sampling that excludes intracluster site pairs from binding. Fig. 3(b) shows the performance comparison between methods that use graph traversals or DBTs, with or without rejection-free sampling. The combination of DBTs and rejection-free sampling is superior to the other approaches. Except for the method using rejection sampling with graph traversals, a hump (at about k_off = 0.4 s^-1) in each curve of Fig. 3(b) for simulating acyclic aggregation, and in the DBT curve of Fig. 3(c) for simulating cyclic aggregation, reflects a small performance penalty due to sampling over a maximal number of clusters near the phase-transition boundary. For simulations of cyclic aggregation, the performance comparison between the DBT and site-based methods is shown in Fig. 3(c). In this fully cyclic aggregation model (φ = 1), every free ligand-receptor site pair is allowed to crosslink on the cell surface, so no rejection sampling is required. In the high-density region (k_off < 0.01 s^-1), the method using DBTs is four times faster than the site-based algorithm using graph traversals.
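One way to avoid rejections when sampling cluster pairs is to exclude the first cluster from the pool before drawing the second, so that x = y never occurs. This is an illustrative stand-in for the cited rejection-free scheme [15], not the paper's implementation; note that for exact kinetics the event propensity must also be computed from this reduced pool:

```python
import random

def sample_cluster_pair(free_lig, free_rec, rng=random):
    """free_lig / free_rec: dicts mapping cluster id -> number of free
    ligand / receptor sites. Returns (x, y) with x != y, with each cluster
    weighted by its free-site count, and without rejecting any sample."""
    clusters = list(free_lig)
    x = rng.choices(clusters, weights=[free_lig[c] for c in clusters])[0]
    # remove x from the receptor-side pool before drawing y
    candidates = [c for c in free_rec if c != x and free_rec[c] > 0]
    y = rng.choices(candidates, weights=[free_rec[c] for c in candidates])[0]
    return x, y
```

If every receptor site resides in cluster x, `candidates` is empty and no intercluster bond is possible; a full implementation would treat that as a null event rather than raise.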
IV. CONCLUSION
We have presented an efficient kinetic Monte Carlo procedure for simulating reversible aggregation of multisite particles, especially for systems with a large number of particles that nucleate into high-density clusters. The algorithm generates results statistically identical to those of the standard method, with considerably less time and space complexity. To avoid costly traversals of cluster connectivity graphs, the algorithm records clusters and processes bond formation and breaking using dynamic bond trees that track the hierarchy of cluster aggregation. The method provides a fast means to evaluate aggregation of multisite particles and can in general be adapted to simulate a wide class of particle aggregation and self-assembly.
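The core idea of a dynamic bond tree can be sketched as follows: leaves are particles, internal nodes are bonds, and each association merges two cluster trees under a new root in O(1), while dissociation detaches the two subtrees of the broken bond. This is our minimal illustration of the concept, not the paper's implementation (which also splices out the removed node and updates counts along the path to the root):

```python
class Leaf:
    """A particle: a leaf of the dynamic bond tree."""
    def __init__(self, particle):
        self.particle = particle
        self.parent = None

class BondNode:
    """A bond: an internal node joining two cluster subtrees."""
    def __init__(self, left, right):
        self.left, self.right = left, right
        self.parent = None
        left.parent = right.parent = self
        # bonds in this subtree: this bond plus those in both children
        self.n_bonds = 1 + getattr(left, "n_bonds", 0) + getattr(right, "n_bonds", 0)

def associate(root_a, root_b):
    """Form a bond between two clusters: O(1) merge under a new root."""
    return BondNode(root_a, root_b)

def dissociate(node):
    """Break the bond at `node`, detaching its two subtrees. In a full
    implementation the hole is spliced out and n_bonds is decremented
    along the path to the root, costing O(tree height)."""
    left, right = node.left, node.right
    left.parent = right.parent = None
    return left, right

a, b, c = Leaf("a"), Leaf("b"), Leaf("c")
ab = associate(a, b)     # cluster {a, b}: 1 bond
abc = associate(ab, c)   # cluster {a, b, c}: 2 bonds
assert abc.n_bonds == 2
l, r = dissociate(abc)   # splits back into {a, b} and {c}
assert l is ab and r is c
```

The per-node bond counts are what allow a bond to be sampled uniformly for breaking by descending from the root, which is why tree height, rather than cluster size, governs the update cost discussed below.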
In this paper, we demonstrated the scaling of our algorithm with cluster size on the TLBR system, which is prototypical of more general reversible aggregation of multisite particles. We note that the standard graph-traversal algorithm has a complexity of O(M + N) in processing bond breaking, which scales linearly with the number of particles and bonds in an aggregate. In comparison, the worst case of the current method using DBTs has a complexity of O(M), which scales linearly only with the number of bonds in an aggregate and in principle improves over the standard method. This worst case is, however, probabilistically unlikely to persist for any typical system, so that the complexity of our algorithm stays below O(M). For example, O(M) complexity requires that the DBT storing an aggregate have a height equal to the number of bonds, so that updating the DBT takes O(M) time after a bond is randomly sampled to break. For this situation to persist, each acyclic bond association and dissociation must happen between a single particle and an aggregate (or another single particle), maintaining each DBT as a linear chain with no branches, which effectively precludes aggregation between multiparticle clusters. On the other hand, the O(log M) scaling requires well-balanced DBTs (i.e., balanced binary trees), which implies that each bond association must happen between two clusters with identical DBTs and each bond dissociation must produce two identical clusters. This scenario can obviously happen only fortuitously during a kinetic Monte Carlo simulation. The actual performance of the algorithm will depend on the properties of the particular aggregation system and on the specific parameters used in a simulation, including the valence of the particles, the kinetic rate constants, the interaction rules, etc.
V. ACKNOWLEDGEMENT
We thank William Hlavacek and John Pearson for helpful discussions. This work was supported by the National Science Foundation of China through grant 30870477 (J.Y.).

FIG. 3. (a) Average cluster size and the number of clusters, obtained by methods using DBTs or graph traversals for acyclic and cyclic aggregations. The results were obtained by averaging 5000 samples (each sample separated by 100 events) at equilibrium. (b) Performance of four schemes for simulating acyclic aggregation of the TLBR system: rejection or rejection-free sampling, with or without DBTs. Inset: rejection ratio (the ratio of the number of rejected events to the number of all events) in different phase regimes. The mean CPU time per event was obtained by averaging after the system reached equilibrium. (c) Performance comparison between DBT and graph-traversal methods for simulating cyclic aggregation (φ = 1). Parameters are identical to those indicated in Fig. 2.
"year": 2010,
"sha1": "365da5864fde2e99343c9ef18cff1fedc3290e86",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1010.0339",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "365da5864fde2e99343c9ef18cff1fedc3290e86",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine",
"Biology",
"Physics"
]
} |
Assembly of the sarcoplasmic reticulum. Localization by immunofluorescence of sarcoplasmic reticulum proteins in differentiating rat skeletal muscle cell cultures.
Immunofluorescent staining techniques were used to study the distribution of the Ca(2+) + Mg(2+)-dependent ATPase and calsequestrin in primary cultures of differentiating rat skeletal muscle cells, grown for different periods of time under various culture conditions. In mononucleated myoblasts calsequestrin was detected after 45 h in culture whereas the ATPase was not detected until 60 h. After cell fusion began, both proteins could be identified in all multinucleated cells. Myoblasts grown for longer than 60 h in low Ca(2+) medium contained calsequestrin and the ATPase, even though they were unable to fuse. These studies at the cellular level confirm biochemical findings on the biosynthesis of calsequestrin and the ATPase. Immunofluorescent staining of myoblasts showed that calsequestrin first appears in a well-defined region of the cell near one end of the nucleus. At later times, the staining occupied progressively larger regions adjacent to the nucleus and took on a fibrous appearance. This suggests that calsequestrin first accumulates in the Golgi region and then gradually spreads throughout the cell. In contrast, the ATPase appeared to be concentrated in many small patches or foci throughout the cytoplasm and was never confined to one particular region, although some parts of the cell often stained more intensely than others. In multinucleated cells, alternating dark and fluorescent strands parallel to the longitudinal axis of the cells were evident.
Fluorescent staining with these antisera was not observed in fibroblasts which were also present in the cultures.
Sarcoplasmic reticulum is the intracellular membrane system in muscle cells which regulates contraction by modulating the intracellular concentration of calcium ions (3). The relationship between the individual components of this membrane system and their function at the molecular level has been studied extensively (13). Very little is known, however, about the assembly of this membrane system during the maturation of muscle cells. Examination of the microsomal fractions isolated from embryonic and neonatal muscle, for their ability to transport Ca++ in an ATP-dependent reaction, has shown a progressive increase in Ca++ transport activity during development (2,5,10,17).

THE JOURNAL OF CELL BIOLOGY · VOLUME 74, 1977 · pages 287-298
Recently, we have undertaken an investigation of the synthesis of the two major sarcoplasmic reticulum proteins, the Ca++ + Mg++-dependent ATPase (7) and calsequestrin (22). A study of the temporal pattern of biosynthesis of these proteins in differentiating rat skeletal muscle cells in culture showed that the initiation of synthesis of calsequestrin (an extrinsic component) preceded the initiation of synthesis of the ATPase (the major intrinsic protein) by about 20 h. Moreover, calsequestrin synthesis preceded myoblast fusion which began 50-60 h after plating (22). The maximal rate of synthesis of calsequestrin was reached at about 72 h after plating. The rate of synthesis of calsequestrin then decreased while the rate of ATPase synthesis remained high. If cells were transferred to a low Ca++ medium which inhibited fusion, the rate of calsequestrin synthesis decreased whereas the rate of ATPase synthesis increased sharply after a slight delay, even though no fusion occurred.
In order to understand the assembly of the sarcoplasmic reticulum, we have begun a morphological study of the biosynthesis of calsequestrin and the Ca++ + Mg++-ATPase and their incorporation into this membrane system. In this report, we describe the appearance and distribution of calsequestrin and of the ATPase in developing rat skeletal muscle cells as determined by the immunofluorescent staining technique.
The results confirm, at the cellular level, our previous biochemical findings on the temporal sequence of initiation of synthesis of these two sarcoplasmic reticulum proteins. In addition, we have found that some myoblasts, cultured in standard medium, are capable of synthesizing the ATPase before fusion. The results obtained with the fluorescent antibody technique fully support the view that cell fusion is not essential for the initiation of synthesis of either the ATPase or calsequestrin. The localization of these two proteins in myogenic cells during maturation and cell fusion is described and the possible relationship between the observed staining patterns and organized subcellular structures is discussed.
Cell Culture
Isolation of muscle cells from neonatal rats and their growth on standard medium, low Ca ++ medium and normal Ca ++ medium has been described previously (7,22).
Purification of Rat ATPase and Rabbit Calsequestrin
Rat ATPase was prepared by procedures similar to those used for the purification of rabbit ATPase (11), except that the fractionation in ammonium acetate was carried out at pH 8.35. Rabbit calsequestrin was purified as previously described (14).
Preparation of Antibodies
The rabbit anti-rat ATPase serum previously characterized (7) was used. The specificity of this antiserum was demonstrated by Ouchterlony double-diffusion tests against purified rat ATPase or solubilized rat sarcoplasmic reticulum. Only a single precipitin line was obtained in both cases. No precipitin line was observed when normal sera were used. This antiserum did not cross-react with either purified rat calsequestrin or high affinity Ca++-binding protein (7).
ATPase antibodies were isolated from the serum by absorption with insoluble purified ATPase. Since the anti-ATPase serum was prepared against an ATPase preparation which also contained phospholipid and proteolipid (15), the latter substances were removed from the absorbant by acetone extraction (6). The insoluble, acetone-extracted ATPase was then washed twice with phosphate-buffered saline (PBS), which contained 147 mM NaCl, 2.67 mM KCl, 0.49 mM MgCl2, 0.9 mM CaCl2, 8.1 mM Na2HPO4, and 1.47 mM KH2PO4, pH 7.2, extracted twice with 0.1 M glycine, pH 2.8, and washed again with PBS. For absorption, about 50 mg of insoluble ATPase and 4 ml of serum were incubated for 30 min at room temperature, and then the mixture was centrifuged for 10 min at 25,000 g. Ouchterlony double-diffusion tests were used to determine whether the antibody was removed from the serum. The insoluble ATPase-antibody complex was washed twice in PBS, and the specific antibody was recovered from the insoluble complex by two washes at 0°C in 0.1 M glycine, pH 2.8. The glycine extract was adjusted immediately to pH 7.5 with phosphate buffer and concentrated to 1 ml by pressure dialysis. About 3 mg of specific antibody were obtained from 4 ml of serum. This purified antibody gave only a single precipitin line in Ouchterlony tests.
The sheep anti-rabbit calsequestrin serum previously characterized (22) was used. The specificity of this antiserum was demonstrated by Ouchterlony double-diffusion tests against purified rat calsequestrin or solubilized rat sarcoplasmic reticulum. Only a single precipitin line was obtained in both cases. No precipitin line was obtained when normal sera were used. This antiserum did not cross-react with either purified rat ATPase or high affinity Ca++-binding protein (22).
Specific calsequestrin antibodies were prepared from the serum by absorption with an insoluble calsequestrin-albumin complex. To prepare insoluble calsequestrin, 40 mg of bovine serum albumin and 10 mg of calsequestrin were dissolved in 1 ml of 0.2 M sodium acetate, pH 5.0. 0.2 ml of 2.5% glutaraldehyde was added dropwise, and the mixture was stirred for 3 h at room temperature. The material was then diluted with 20 ml of 1 M glycine, 0.1 M sodium phosphate, pH 7.5, and washed twice by centrifugation in glycine-PO4 buffer, twice by centrifugation in PBS, twice by centrifugation in 0.1 M glycine, pH 2.8, and, finally, twice in PBS (1,8). The insoluble calsequestrin-albumin complex was then added to 2 ml of sheep anti-calsequestrin serum. All of the antibody was removed from the serum in two incubations of 30 min at room temperature. The specific antibody-calsequestrin complex which was obtained upon centrifugation was dissociated by glycine buffer, and the specific antibody was recovered by the same procedures used for recovery of the specific ATPase antibodies. This purified antibody gave only a single precipitin line in Ouchterlony tests.
Absorption
Solutions of specific antibodies were absorbed by incubation with the appropriate antigen. Calsequestrin antibody (100 µg) was incubated with 6 µg of rat calsequestrin for 72 h at 4°C in 0.3 ml of PBS. The supernatant solution obtained after centrifugation was used in place of the specific antibody in immunofluorescence tests. Similarly, 25 µg of specific ATPase antibody was incubated for 72 h at 4°C with 5 µg of purified, lipid-free rat ATPase dissolved in 0.5% Triton X-100. The supernatant solution obtained after centrifugation was used in immunofluorescence tests.
Indirect Fluorescent Antibody Staining
Primary rat skeletal muscle cells were grown either in Labtek chambers (Miles Laboratories Inc., Miles Research Products, Elkhart, Ind.) or on glass cover slips placed in 55-mm diameter plastic Petri dishes. Each 100-mm² Labtek chamber was filled with 0.5 ml of medium and 1.2 × 10⁵ cells. Each Petri dish was filled with 3 ml of medium and 3 × 10⁵ cells. Media were changed daily. After various time intervals up to 140 h after plating, the medium was removed and the cells on the glass slide or cover slip were rinsed with PBS, pH 7.2, and fixed for 30 min at room temperature in 1% formaldehyde in PBS, pH 7.2, air dried, and stored in a desiccator at -20°C for up to a week.
The fixed cells were incubated with specific antibody for 30 min at room temperature (anti-ATPase, 15 µg/ml in PBS, pH 7.2; anti-calsequestrin, 100 µg/ml in PBS, pH 7.2) and then rinsed four times with PBS, pH 7.2. Cells previously treated with specific rabbit ATPase antibodies were incubated with the FITC-conjugated immunoglobulin fraction of goat anti-rabbit IgG (3.5 mg/ml) for 30 min at room temperature. Cells previously treated with specific sheep calsequestrin antibodies were incubated with the FITC-conjugated immunoglobulin fraction of rabbit anti-sheep IgG (0.5 mg/ml) for 30 min at room temperature. Finally, the cells were washed four times in PBS, pH 7.2, and mounted in 50% glycerol in PBS. The cells were examined in a Zeiss fluorescence microscope provided with an Epi-fluorescence attachment and interference filters. The photographs were taken on Ilford FP-4 film.
For a specific time point, the labeling of cells with ATPase and calsequestrin antibodies was carried out by dividing a cover slip in half and using one half of the cover slip for treatment with ATPase antibodies and the other half for treatment with calsequestrin antibodies. When cells were grown in Labtek chambers, one chamber at each end of the same slide was used for the labeling of cells with each of the two antibodies.
Analysis of Sugars in Calsequestrin
The sugar content of calsequestrin was determined by the method of Zanetta et al. (21). To confirm the absence of fucose and galactose, a sample, after methanolysis, was passed through a Dowex 50 (H+) column to remove amino acids. Glucosamine was also measured by a colorimetric technique (9) after acid hydrolysis in 4 N HCl at 100°C for 4 h.
RESULTS
To determine the time of appearance and the distribution of the ATPase and calsequestrin in cells at various times during differentiation, primary cultures of rat skeletal muscle were examined by the indirect fluorescein antibody technique. In cultures prepared for microscopy, only mononucleated bipolar myoblasts and flat irregularly shaped mononucleated fibroblasts were present when cells were grown in standard medium up to 70 h. Fusion started after about 70 h, and subsequently the number of multinucleated muscle cells progressively increased. This is in contrast to the growth pattern observed with higher density plating in Petri dishes where fusion began after about 50 h in culture (7,22).
Localization of Calsequestrin
In cultures grown in standard or normal Ca2+ medium, immunofluorescent staining of cells with antibody to calsequestrin was not observed before 45 h (Fig. 1a). Immunofluorescent staining in some myoblasts was first detected at about 45 h, localized in one region of the cytoplasm near the nucleus (Fig. 1b and 1c), which we believe is the Golgi region. The nucleus often appeared indented near the stained region (Fig. 1b). The staining appeared homogeneously distributed within this region, and no structural features could be resolved at this stage. The cytoplasm outside this region was negative. Later, myoblasts were observed in which the fluorescent staining occupied increasingly larger regions adjacent to the nucleus (Figs. 1d, 2a, and 2b) and took on a distinctly fibrous appearance. With time, an increasing number of myoblasts were specifically stained with antibodies to calsequestrin, but all of the above stages could still be recognized. After 65 h in culture, some myoblasts were specifically stained throughout the cytoplasm.
Following fusion, all of the bi- and multinucleated myotubes were specifically labeled with calsequestrin antibody (Figs. 2c and 4a). In the vast majority of the myotubes, fluorescent strands running parallel to the longitudinal axis of the cell could be distinguished (Fig. 4a). Occasionally, bi- or trinucleated cells were encountered in which the staining was absent from the cytoplasm surrounding one nucleus while that surrounding the other nucleus was stained (Fig. 2d).
Localization of ATPase
In cultures grown in standard or normal Ca ++ medium, immunofluorescent staining with the ATPase antibody was not observed before 65 h (Fig. 3 a).
Specific staining was first detected in some mononucleated myoblasts after 65 h, just before fusion began (Fig. 3 b, 3 c, and 3 d). This contrasts with the much earlier appearance of calsequestrin in cells grown in the same cultures and is in agreement with results from earlier biochemical studies (22).
The fluorescent staining in myoblasts with antibodies to the ATPase was present throughout the cytoplasm (Fig. 3b and 3c) and was granular, indicating that the ATPase was concentrated at certain foci rather than homogeneously distributed. Occasionally, however, one region did stain more intensely than the rest of the cytoplasm, but there was no sharp line of demarcation between these two regions (Fig. 3d and 3c), as was seen with antibodies to calsequestrin (Fig. 1b and 1c). At later times, after cell fusion began, immunofluorescent staining with the ATPase antibodies was observed in all multinucleated cells (Figs. 3e, 3f, and 4b).
The staining pattern observed in myotubes with ATPase antibodies (Fig. 4b) was very similar to that observed after staining with the calsequestrin antibody (Fig. 4a), in that positively stained strands running parallel to the longitudinal axis of the cell could be detected. The fluorescent strands observed with anti-ATPase, however, appeared more granular than those detected with anti-calsequestrin. When cells grown in low Ca ++ medium were treated with the ATPase antibody, the staining pattern observed was again indistinguishable from that of the myoblasts grown in standard medium.
Quantitative Studies
To quantitate the changes in the staining patterns observed after staining with calsequestrin antibody, the myogenic cells were classified according to four types of staining: Golgi-associated staining (Fig. 1b and 1c); staining in the Golgi region and in the surrounding region (Figs. 1d, 2a, and 2b); whole cytoplasmic staining; and staining in multinucleated cells (Figs. 2c and 4a). Similarly, after treatment with ATPase antibody, stained cells were classified as either myoblasts (Fig. 3b, 3c, 3d, and 3e) or myotubes (Fig. 3e, 3f, and 4b).
The results obtained are presented in Figs. 5 a, b and c.
Before fusion (Fig. 5a), most of the calsequestrin antibody-stained myoblasts showed Golgi-associated staining. Since the relative number of multinucleated cells also increased steadily, the proportion of stained myoblasts reached a maximum and then decreased.
In cultures grown in low Ca ++ medium, cells with the staining patterns described above appeared in the same sequence. The proportion of cells having these patterns, however, changed with time in a different manner (Fig. 5 b). The relative number of myoblasts showing Golgi-associated staining decreased with time but less rapidly. Whereas in normal Ca ++ the proportion of myoblasts showing additional staining in the region surrounding the Golgi apparatus reached a maximum and then rapidly declined, in low Ca ++ medium this number kept increasing during the period examined. Myogenic cells with whole cytoplasmic staining appeared 20 h later than in normal Ca ++ , and their proportion increased only slightly with time.
Previous results obtained from biochemical studies indicated that the rate of calsequestrin synthesis declined rapidly with time when muscle cells were transferred from standard medium to low Ca++ medium (22). This decrease was also reflected in the results obtained by immunofluorescence. The fraction of stained myogenic cells was lower in low Ca++ medium and, in addition, most of the stained myogenic cells showed Golgi-associated staining or, in addition, staining in the cytoplasm surrounding the Golgi region. Those cells grown in normal Ca++ showed whole cytoplasmic staining in mono- and multinucleated cells. Thus, the amount of calsequestrin per myogenic cell appears to be lower in low Ca++ medium than in normal Ca++ medium.
The results presented in Fig. 5 c show that ATPase-positive myoblasts increased to a maximum and then declined. Meanwhile, the number of myotubes increased steadily. Biochemical studies showed that although the initiation of ATPase synthesis could be delayed by transfer to low Ca ++, the rate of ATPase synthesis still increased to relatively high values (7,22). Our present results show that the appearance of ATPase-positive myoblasts in low Ca ++ medium was delayed but that once initiated, the increase in number of stained myoblasts in low Ca ++ was as rapid as the increase in number of stained myotubes grown in normal Ca ++ (Fig. 5 c).
Control Studies
To test the specificity of the staining pattern observed with the two antibodies, the supernate from the ATPase antibody, absorbed with ATPase, and the supernate from the calsequestrin antibody, absorbed with calsequestrin, were used in immunofluorescent staining tests. In both cases, the specific immunofluorescent staining patterns were almost completely abolished.
Specific immunofluorescent staining with antibody to calsequestrin or antibody to the ATPase was not detected in the spindle region or elsewhere in mitotic cells that were frequently observed in these cultures. Likewise, no staining was observed in the flat, irregularly shaped fibroblasts which could be readily distinguished from the bipolar myoblasts.
Sugar Analysis of Calsequestrin
Although it was evident from earlier studies (12,14,18) that calsequestrin is a glycoprotein, accurate analysis of its sugar content has not yet been reported. The data presented in Table I show that calsequestrin contains only N-acetylglucosamine and mannose in a molar ratio of 1:2:3. This indicates that only a "core" carbohydrate is present (16). Since this core structure is so widely found and since its linkage is so invariant, it is probable that the linkage of sugars to the protein through asparagine (Asn) is identical to that occurring in other glycoproteins and is as follows: (Man)2 → Man β→ GlcNAc β→ GlcNAc → Asn.
DISCUSSION
The use of specific antibodies to the ATPase and calsequestrin in the fluorescein-labeled antibody technique has permitted us to follow the appearance of these proteins and to determine their distribution during the differentiation of muscle cells in culture. Calsequestrin was first detected in mononucleated myoblasts after 45 h in culture while the ATPase was not observed in these cells until 20 h later. These results complement our previous biochemical studies which established that the initiation of calsequestrin synthesis in skeletal muscle cultures precedes cell fusion and initiation of ATPase synthesis. Because initiation of ATPase synthesis and cell fusion occurred almost simultaneously in cells grown in standard medium, it could not be determined by biochemical analysis alone whether mononucleated myoblasts or only myotubes synthesized this enzyme. The present results clearly show that some myoblasts, grown for 65 h in standard medium, develop the capacity to synthesize the ATPase before cell fusion.
In myoblasts, calsequestrin was first detected in a small region of the cytoplasm close to, and often within an indentation of, the nucleus. Since this site is characteristic of the position of the Golgi apparatus in many cell types, we believe that this calsequestrin-containing region corresponds to the Golgi apparatus. The possibility that calsequestrin accumulates in the Golgi region is strengthened by previous (12,14,16) and present studies showing that calsequestrin is a glycoprotein. Thus, the Golgi apparatus would be expected to play a role in the processing of calsequestrin before it is incorporated into sarcoplasmic reticulum membranes. The qualitative and quantitative changes in the staining pattern with time are consistent with a gradual spreading of calsequestrin from this region until it becomes distributed throughout the cytoplasm. The confinement of calsequestrin to a specific region of the cytoplasm at early times and the fibrous appearance of the staining at later times suggest that calsequestrin is always associated with organized subcellular structures and that it is not free to diffuse throughout the cytoplasm. Attempts are now being made to identify the ultrastructural basis of these staining patterns.

FIGURE 4  Parts of myotubes treated with antibody against calsequestrin (a) and ATPase (b) from cultures sampled 114 h after plating. Positively stained strands running parallel to the longitudinal axis of the cell can be seen in both myotubes. Staining with ATPase antibody gives a more granular appearance than staining with antibody to calsequestrin. Scale bar, 10 µm.
In some multinucleated cells, staining was found in the cytoplasm around some but not all of the nuclei. This raises the possibility that a myoblast which has not yet begun to synthesize calsequestrin may be capable of fusing with myoblasts and myotubes which have already initiated calsequestrin synthesis.
The granular staining pattern observed in myoblasts and myotubes after treatment with ATPase antibodies suggests that the ATPase, when it first appears, accumulates in many regions of the cytoplasm. Ultrastructural analysis of the biogenesis of sarcoplasmic reticulum in differentiating chick skeletal muscle cultures has indicated that the sarcoplasmic reticulum is formed by the budding of membranous vesicles from the rough endoplasmic reticulum (4). Assuming that the development of the sarcoplasmic reticulum in differentiating rat skeletal muscle cells in culture is similar, it is possible that the granular appearance of the ATPase staining pattern observed in myoblasts and myotubes marks the sites where the ATPase-containing membranes are being assembled. Strands running parallel to the longitudinal axis of the cell were present in some myoblasts and became more prominent in myotubes after staining with either calsequestrin or the ATPase antibody. They are probably due to the separation of the calsequestrin- and the ATPase-positive regions by unstained, newly developing myofibrils. The similarity in staining patterns obtained with both antibodies suggests that both proteins become components of the same subcellular structure at later stages.
The processes whereby the ATPase and calsequestrin become constituents of the same membrane system are still unknown. We assume that the sarcoplasmic reticulum does not exist without the ATPase since this protein constitutes 95% of the intrinsic protein mass of the membrane (13,22). On the basis of this assumption, we have previously proposed that the incorporation of calsequestrin into sarcoplasmic reticulum occurs only after the formation of the ATPase-containing membrane structures. The time of appearance and the distribution of the ATPase and calsequestrin during the differentiation of skeletal muscle cells, as determined in the present studies, are consistent with this view. The polypeptide chain of calsequestrin may be synthesized on the rough endoplasmic reticulum and then transferred intraluminally to the Golgi apparatus where its final processing into a glycoprotein occurs. As a result of this process, calsequestrin accumulates in the Golgi region and becomes detectable by the immunofluorescence technique. During the period when calsequestrin is processed and accumulates in the Golgi region, the synthesis of the ATPase and the assembly of the sarcoplasmic reticulum are initiated at multiple foci throughout the cytoplasm. After the formation of the sarcoplasmic reticulum, calsequestrin is transferred to the lumen of this membrane system by a mechanism which is as yet unknown. A more precise ultrastructural identification of the subcellular structures labeled by the two antibodies will be required in order to understand the role of the endoplasmic reticulum and the Golgi apparatus in the synthesis and processing of the ATPase and calsequestrin and in order to determine how these two proteins assemble into a functional sarcoplasmic reticulum.
Local Reorientation Dynamics of Semiflexible Polymers in the Melt
The reorientation dynamics of local tangent vectors of chains in isotropic amorphous melts containing semiflexible model polymers was studied by molecular dynamics simulations. The reorientation is strongly influenced both by the local chain stiffness and by the overall chain length. It takes place in two successive processes: a short-time non-exponential decay and a long-time exponential reorientation arising from the relaxation of medium-size chain segments. Both processes depend on stiffness and chain length. The strong influence of the chain length on the chain dynamics is in marked contrast to its negligible effect on the static structure of the melt. The local structure shows only a small dependence on the stiffness, and is independent of chain length. Calculated correlation functions related to double-quantum NMR experiments are in qualitative agreement with experiments on entangled melts. A plateau is observed in the dependence of segment reorientation on the mean-squared displacement of the corresponding chain segments. This plateau confirms, on one hand, the existence of reptation dynamics. On the other hand, it shows how the reptation picture has to be adapted if, instead of fully flexible chains, semirigid chains are considered.
Introduction
Modern double-quantum nuclear magnetic resonance (NMR) experiments aim to understand the microscopic dynamics of polymer segments in the melt. 1 The polymer dynamics is most often described by either the Rouse 2 or the reptation 3,4 model depending on chain length. Both models, however, take the architecture of a specific polymer into account only via the so-called Kuhn length. These local features are summarized into the characteristic ratio C∞ = ⟨R²⟩/(N l_b²), the ratio between the mean squared end-to-end distance ⟨R²⟩ and the number of monomers N multiplied by the squared bond length l_b². Statics and dynamics are thus renormalized onto a Gaussian bead-spring chain of larger but fewer beads of the size of the Kuhn length l_K = ⟨R²⟩/(N l_b). Experimentally, however, there is a qualitative difference between fully flexible polymers like poly-(dimethyl-siloxane) (PDMS) 5 and moderately stiff systems like poly-(butadiene) (PB) 6 when it comes to the reorientation of chain segments. These differences remain after both polymers have been appropriately renormalized onto Gaussian chains. The PB system shows unusual dynamics which has been taken as an indication of local order, 6 whereas the PDMS data is quite successfully described by the standard reptation picture. 5 From the shape of reorientation auto-correlation functions a relatively high degree of residual structural order in the presence of entanglements has been deduced for PB. The experimental data, however, cannot be interpreted without model assumptions. Here, simulations which are validated against experimental raw data may contribute to a better understanding. 7 Molecular dynamics simulations are widely applied to study the dynamics of simplified polymer models in solution 8,9 and the melt. [10][11][12][13][14][15][16] To date, only simple models are capable of investigating the dynamics of long entangled polymer chains in the melt to a satisfactory degree of accuracy.
Models that allow for atomistic details are limited to shorter times and to fewer and shorter chains. [17][18][19][20] Recently, we showed that the static local mutual orientation of neighboring chains depends on chain stiffness in the amorphous melt, whereas the chain length has no influence on local static properties. 15,21 In the present contribution, we now extend our investigations to the dynamics of entangled and unentangled melts of polymer chains with local stiffness. In the following section, the polymer model is briefly recapitulated and details of the simulated systems are described. In the main part (Section 3), reorientation correlation functions are analyzed and compared to theoretical considerations and experiments. Section 4 relates static chain packing and dynamic chain reorientation observables.
Model and Computational Details
We performed polymer melt simulations using Brownian dynamics (BD) of a widely used and well characterized generic polymer model 10,12,14 with added local stiffness. 21 All monomers interact via a truncated and shifted, therefore purely repulsive, Lennard-Jones potential (Weeks-Chandler-Andersen, WCA, potential 22 ). Neighbors on the chain are connected by a finitely extensible nonlinear elastic (FENE) potential, which is used for computational efficiency. This yields, together with the WCA potential, an anharmonic spring. Most of our systems have an additional three-body potential (Eq. 3) to stiffen the chain locally. For details of the implementation and parallelization, see ref. 23. Throughout this work, reduced units are used with mass m, bead diameter σ, and the strength of the WCA potential ǫ set to unity. The time unit is t* = σ√(m/ǫ). Temperature is measured in units of ǫ by setting Boltzmann's constant k_B = 1. The average bond length is l_b = 0.97σ so that the beads overlap only slightly.
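A minimal sketch of the interaction potentials of a generic bead-spring model of this kind (WCA pair repulsion, FENE bonds, and a cosine bending term). The numerical constants k = 30 and R0 = 1.5σ are common choices for such models, not parameters quoted from this work:

```python
import numpy as np

def wca(r, epsilon=1.0, sigma=1.0):
    """Purely repulsive, truncated-and-shifted Lennard-Jones (WCA) pair potential."""
    rc = 2.0 ** (1.0 / 6.0) * sigma          # cutoff at the LJ minimum
    if r >= rc:
        return 0.0
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6) + epsilon   # shifted so U(rc) = 0

def fene(r, k=30.0, r0=1.5):
    """FENE bond potential between chain neighbors; diverges as r -> r0."""
    return -0.5 * k * r0 ** 2 * np.log(1.0 - (r / r0) ** 2)

def bend(cos_theta, b=1.0):
    """Three-body bending potential U = b (1 - cos theta) between successive bonds;
    b controls the persistence length l_p."""
    return b * (1.0 - cos_theta)
```

Together, `wca` and `fene` give the anharmonic spring mentioned above; increasing `b` in `bend` stiffens the chain locally.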
Systems of up to 1000 chains of length N, with N ranging from 5 to 1000, were simulated at melt density (ρ* = 0.85σ⁻³) and temperature T* = 1. The Brownian dynamics algorithm is mainly used to maintain this temperature. Additionally, it has been shown to be very efficient for our model. 12 The monomer friction coefficient of 0.5 inverse time units used here means that only processes on a time scale well above one of our time units have a meaningful dynamics. The persistence lengths l_p, defined via the decay of the bond-direction correlation function along the chain backbones, varied from one bond length up to 5 bond lengths.
The distance l is measured along the contour of the chain. The persistence length is related to the Kuhn length (Section 1) in the wormlike chain model by l_p = l_K/2. 4 In the simulations, l_p is controlled by choosing appropriate values of b in Eq. 3. Hence, throughout this article we refer to systems of different stiffness with their l_p rather than the corresponding b. The most flexible system (l_p ≈ l_b) has no intrinsic stiffness; only the excluded-volume interaction leads to its persistence length being non-zero. The persistence length increases weakly with chain length due to end effects. 21 The values indicated here are the limits for long chains. For N ≥ 10 the persistence length is much shorter than the contour length. An overall Gaussian behavior is therefore expected and confirmed by the characteristic ratio. 21 Note that the model would show a nematic liquid crystalline phase if the bending stiffness were increased far above l_p = 5. 24 All chains of length up to 75 and all flexible chains (l_p = 1) relaxed fully, as evidenced by the decay of all the Rouse modes. To cut down necessary equilibration times, the longer chains (or chains of greater stiffness) were initialized as non-reversal random walks whose local structure was estimated from simulations of shorter chains: in the setup configuration a monomer i and its second neighbor i + 2 are not allowed to approach closer than a certain distance. This setup procedure reduces the equilibration time substantially while producing useful configurations. 12 The end-to-end distance and the gyration radius then changed only very slightly in the initial stage of the simulation. Their equilibrium values as a function of stiffness were already presented in reference 21 together with other static observables like structure functions, pair distribution functions and local chain orientation correlation functions. Some of this data is included in Section 4.
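The decay of the bond-direction correlation along the backbone, from which l_p is defined, can be estimated from a single configuration as in the following sketch (function and argument names are ours, for illustration only):

```python
import numpy as np

def persistence_length(positions, bond_length=0.97, nmax=6):
    """Estimate l_p from <u_i . u_(i+n)> ~ exp(-n l_b / l_p) along one chain.
    positions: (N, 3) array of monomer coordinates of a single chain."""
    bonds = np.diff(positions, axis=0)
    u = bonds / np.linalg.norm(bonds, axis=1)[:, None]   # unit bond vectors
    corr = np.empty(nmax)
    corr[0] = 1.0
    for n in range(1, nmax):
        corr[n] = np.mean(np.sum(u[:-n] * u[n:], axis=1))
    mask = corr > 0                                      # fit only positive values
    slope = np.polyfit(np.arange(nmax)[mask], np.log(corr[mask]), 1)[0]
    return -bond_length / slope
```

For a freely rotating test chain with fixed bond angle θ, the estimate reproduces the exact result l_p = −l_b/ln(cos θ).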
It is not yet possible to simulate the "slowest" systems (e.g. l_p = 5, N = 1000) until the final regime of free diffusion is reached. Still, we trust that even this system is sufficiently equilibrated for the purpose under study. Table 1 gives an overview of the simulated systems with their respective Rouse times and simulated times. We are aware that for non-flexible polymers the Rouse modes are no longer the true eigenmodes (see Section 3). Still, the Rouse time is useful as an estimate of the relaxation time. For chains of equal length, the Rouse time τ_R would increase linearly with chain extension, i.e. l_p, if the friction due to neighboring chains were constant. 4 However, the relaxation times increase even more strongly (Table 1 and Figure 1). Similarly, for the non-flexible chains (l_p > 1), the increase of τ_R with N² as expected from the Rouse model is no longer observed. The slowdown is stronger, indicating an earlier onset of entanglement influence with increasing persistence length (Table 1).
The very short chains (N ≤ 15) allow an estimate of the diffusion coefficient of chains of a given stiffness, i.e. their mobility in the absence of entanglements. The diffusion coefficient does not decrease linearly with chain length, as would be expected from the Rouse model; for reptation this decrease is quadratic. As entanglement length we take the chain length at which the crossover from linear to quadratic takes place. 25,26

Chain Reorientation
Reorientation Correlation Function
The main purpose of this work is to investigate the reorientation dynamics of local chain segments in dense melts. This was studied by means of the auto-correlation function of the second Legendre polynomial P_2 of chain tangent vectors, C_reor(t) = ⟨P_2(u(t) · u(0))⟩ (Eq. 5). As chain tangent vectors we take (normalized) vectors connecting neighboring beads unless noted otherwise. The second, rather than the first, Legendre polynomial is taken because its Fourier transform relates directly to NMR measurements (T_1 experiments). Also the double-quantum experiments aimed at the chain dynamics are related to this function (see below).
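In a simulation, C_reor(t) can be evaluated by averaging P_2 of the dot product of tangent vectors over time origins and over all segments; a minimal sketch, assuming the unit tangent vectors of all frames are already available as an array:

```python
import numpy as np

def p2(x):
    """Second Legendre polynomial."""
    return 1.5 * x ** 2 - 0.5

def reorientation_acf(tangents, max_lag):
    """C_reor(t) = < P2( u(t0 + t) . u(t0) ) >, averaged over time origins t0
    and over all tangent vectors.
    tangents: (n_frames, n_vectors, 3) array of unit tangent vectors."""
    n_frames = tangents.shape[0]
    acf = np.empty(max_lag)
    for lag in range(max_lag):
        dots = np.sum(tangents[: n_frames - lag] * tangents[lag:], axis=-1)
        acf[lag] = p2(dots).mean()
    return acf
```

By construction C_reor(0) = 1, and for completely decorrelated orientations the function decays to zero.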
Short-time behavior
For all systems investigated, we have found that the reorientation correlation function (Eq. 5) consists of two qualitatively different parts. For short times, its decay follows a power law (algebraic). At long times, the decay is exponential. All characteristics of the reorientation correlation function are influenced by the chain length N and the chain stiffness l_p: the short-time part, the long-time part, the time at which the cross-over from algebraic to exponential behavior occurs and the function value at this point. We will see that the short-time regime is influenced more by stiffness, while the long-time regime shows a stronger dependence on chain length. The influence of the chain architecture on the short-time behavior is illustrated in Fig. 2. Reorientation is slowed by increasing the stiffness at constant length (Fig. 2a) as well as by increasing the length at constant stiffness (Figs. 2b and 2c for l_p = 5 and l_p = 1.4, respectively). The more flexible chains and the shorter chains reach exponential behavior earlier. At the same time, the short-time algebraic process is more "efficient" for the shorter and the more flexible chains, i.e. the reorientation correlation function has decreased to a smaller value when long-time exponential behavior sets in. These findings can be understood if one attributes the short-time behavior to local dynamics and the long-time behavior to the relaxation of larger chain segments or, ultimately, to the rotational diffusion of entire chains. If the chains are flexible, much of the reorientation can be achieved by local rearrangement, i.e. without the local tangent vector feeling that it is part of a long entangled chain. As stiffness increases, it hinders the reorientation on local scales, so the reorientation of the local tangent vector has to wait for some larger-scale reorientation, which is exponential.
The early crossover to exponential behavior and the apparently small efficiency of the short-time process for short chains, on the other hand, are due to a faster rotational diffusion of the entire chains as they become shorter. The rotational diffusion begins to contribute substantially to the reorientation before the local process can complete. Its time dependence is exponential according to e.g. the Debye model. 27 If the Rouse model were strictly applicable, the relaxation time of entire chains in the melt (Rouse time τ_R) should scale with N². Hence, for constant stiffness the reorientation correlation functions belonging to different N should coincide if the time axis is transformed t → t/N². Instead, we find empirically that coincidence is achieved for t → t/N (Fig. 2d). This indicates that even for a small deviation from full flexibility (l_p = 1.4) the Rouse model is not appropriate for the short-time relaxation, the local process being dominated by effects other than connectivity, e.g. stiffness. Additionally, the "bead friction" enters, which also increases sublinearly with stiffness. 26 In contrast to the local dynamics, there is no chain length influence at all on local structural properties. 21 As a general result, we note that, while the short-time process is influenced by both stiffness and chain length, the influence of stiffness is much stronger. (This can, for example, be seen by comparing Figures 2b and 2c.) It is, therefore, to be expected that also in experiments on real polymers the short-time regime will experience the influence of the chemical architecture of the polymer, whereas the chain length should be secondary.
Long-time behavior
The long-time tail of the reorientation correlation function is exponential to a reasonable approximation (Figure 3). The rate of decay depends, as does the short-time behavior, on both chain stiffness (Fig. 2a) and chain length (Figs. 2b and c). The reorientation time of the exponential part τ_reor increases with both chain length N and stiffness l_p (Fig. 4). The dependence on N is stronger than in the short-time regime, where an approximately linear increase with chain length was found (Figure 2d). This is expected because for larger-scale processes the Rouse model should be appropriate and the overall chain relaxation characterized by the decay of all Rouse modes should scale with N². Therefore, the time axis is rescaled in Figures 3b and 3c to highlight the remaining deviation from Rouse behavior. As one approaches the entanglement regime (the entanglement lengths of the systems are compiled in Table 2), an even stronger increase with chain length is expected.
(Figure 4: reorientation times τ_reor and the respective times τ_e = τ_R(N_e) for the different persistence lengths. These were determined from the mean-squared displacements of chains. 12,25 For l_p = 5 the entanglement time is not defined.)

The reorientation time τ_reor (Figure 4) is much shorter than the time for the reorientation of entire long chains. Thus, one may suspect that in the case of long entangled chains, it is no longer important for the reorientation of a tangent vector whether the entire chain reorients completely. Instead, the relaxation of a shorter part of the chain appears to be sufficient. This is in line with theoretical expectations: in the limit of infinitely long chains, the local segmental dynamics has to become independent of the chain length. The relaxation of a large but finite segment of the chain (a few entanglement lengths long) has to give enough freedom for the local reorientation to take place. This would only be different if the ends of the relevant segment were constrained for all times. We were able to corroborate these considerations by simulating an entangled melt of fully flexible chains (N = 350, l_p = 1) where the initial algebraic decay of C_reor is very fast (≈ 1800t*) 14 and effective in the sense that it reduces C_reor(τ_e) to less than 0.01 before the long-time reorientation sets in. The decay time of the long-time process is about 5000t*, which is the relaxation time of chain segments of a length of about 60 monomers. This corresponds to about two entanglement lengths of the system, 12 so we can deduce that, after completion of short-time relaxation, only relaxations on a length scale of up to the order of the entanglement length are relevant.
Reorientation of medium-size chain segments
It is also of interest to analyze the reorientation of longer chain segments. As the orientation vector of a segment of length d we take the unit vector between two monomers whose indices on the chain differ by d. Thus d = 1 represents the vectors connecting neighbors discussed up to now. Using simple topological arguments, the dependence of the exponential part of the reorientation correlation function (Eq. 5) of u_d on d has been predicted. 6 The value of the reorientation correlation function at any given time is proportional to (3dl_K)/(5N_e l_b), where N_e is the entanglement monomer number and l_K the Kuhn length. This relation holds for l_K < d l_b < N_e l_b. As a measure of the amplitude of the exponential part of the reorientation correlation function we define the ordinate intercept β, obtained by fitting an exponential to the long-time tail of the reorientation correlation function and extrapolating back to t = 0. If the assumptions of the theory were true, β would be proportional to d. In Table 3, however, β is seen to increase monotonically but sublinearly with d. The data of Table 3 correspond to a non-flexible chain (l_p = 5). Hence, it is obvious that the arguments of topological entanglement are not sufficient to explain the behavior of stiff chains. The reason for this is most likely that for the l_p = 5 system l_K and N_e l_b are of the same order of magnitude, as shown elsewhere, 25 so that the above condition is not fulfilled.
(Table 3: fitted exponential decays to the curves in Figure 5 (N = 200, l_p = 5) in the time domain between t = 100000 and 300000. The amplitude β is defined by C_reor(t) → β e^(−t/τ_reor), t → ∞, i.e. it is the fictitious intersection of the exponential long-time curve with the y-axis.)
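The intercept β and time constant τ_reor of the exponential tail can be extracted by a linear fit of log C(t) over the late-time window; a sketch (the window boundary is an argument, mirroring the fit domain quoted for Table 3):

```python
import numpy as np

def fit_exponential_tail(t, c, t_min):
    """Fit C(t) ~ beta * exp(-t / tau) for t >= t_min via linear regression
    of log C(t); returns (beta, tau). Assumes C(t) > 0 in the fit window."""
    mask = (t >= t_min) & (c > 0)
    slope, intercept = np.polyfit(t[mask], np.log(c[mask]), 1)
    return np.exp(intercept), -1.0 / slope
```

Extrapolating the fitted line back to t = 0 gives exactly the ordinate intercept β defined above.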
The long-time exponential parts of the reorientation correlation functions belonging to different d are almost parallel (Fig. 5). They have relaxation times τ reor of similar magnitude, although the τ reor seem to increase slowly but monotonically with d (Table 3).
Comparison to Double-Quantum NMR Experiments
The direct experimental observable in double-quantum (DQ) experiments like the ones performed by Graf et al. 6 is C_DQ(t) = ⟨P_2(u(t) · B) P_2(u(0) · B)⟩. The vector u is a unit vector along an atom-atom distance vector which is usually not parallel to the chain tangent vector. If enough of these are available, then the C_DQ of the backbone can be recalculated. 28 B is a unit vector parallel to the external magnetic field in the NMR experiment. We can choose B = ê_z for convenience because amorphous melts are rotationally invariant. C_DQ is proportional to C_reor if the vectors u are isotropically distributed with respect to the field. Thus, C_reor can be used for the comparison to experiments. However, absolute values cannot be compared, since the experimentally detectable C_reor(0) is reduced from 1 to a value S by very fast motions of internal degrees of freedom not present in our bead-spring model. For the C=C double bond in polybutadiene, S is found to be 0.24, for example. 6 For comparison, we therefore normalize both curves to C_reor(0) = 1. Ball et al. have derived expressions for the DQ correlation functions assuming the reptation model and infinitely long chains. 29 They predict an algebraic decay with different exponents in the different dynamic regimes of the standard tube model. In the time interval between the entanglement time τ_e and the Rouse time τ_R, for which the inner degrees of freedom of the chain are relaxed, a t^(−1/4) regime of C_reor is expected. Later, in the regime where the chain as a whole reptates in its tube, a t^(−1/2) behavior should be found. The exponent of C_reor should be the negative of that of the monomer mean-squared displacement in the same dynamic regime. Algebraic fits of the short-time part of the decay curves (linear region of the double-logarithmic plot, cf. Fig. 2a-c) yield exponents κ shown in Table 4.
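The proportionality between C_DQ and C_reor for isotropically distributed vectors follows from the spherical-harmonic addition theorem, which gives ⟨P_2(u_1 · B) P_2(u_2 · B)⟩ = P_2(u_1 · u_2)/5 when the pair orientation is averaged isotropically. A quick numerical check of this factor (a sketch; the sampling scheme is ours):

```python
import numpy as np

def p2(x):
    return 1.5 * x ** 2 - 0.5

def dq_vs_reor(gamma, n_samples=200_000, seed=0):
    """For an isotropic ensemble of unit-vector pairs with fixed mutual angle
    gamma, compare <P2(u1.B) P2(u2.B)> with P2(cos gamma)/5, taking B = e_z."""
    rng = np.random.default_rng(seed)
    u1 = rng.normal(size=(n_samples, 3))
    u1 /= np.linalg.norm(u1, axis=1, keepdims=True)
    # build u2 at angle gamma from u1 with a random azimuth
    r = rng.normal(size=(n_samples, 3))
    perp = r - np.sum(r * u1, axis=1, keepdims=True) * u1
    perp /= np.linalg.norm(perp, axis=1, keepdims=True)
    u2 = np.cos(gamma) * u1 + np.sin(gamma) * perp
    lhs = np.mean(p2(u1[:, 2]) * p2(u2[:, 2]))   # field direction e_z
    rhs = p2(np.cos(gamma)) / 5.0
    return lhs, rhs
```

The two values agree within sampling noise for any angle, which is why, up to this constant factor, C_reor can stand in for C_DQ in the comparison with experiment.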
The systems under study are not very long compared to the infinite chain limit as they are at most about 30 times the entanglement length (except for N = 1000, l_p = 5). Our exponents κ therefore come closer to the t^(−1/2) dependence. The system with persistence length l_p = 5 is the most strongly entangled. 25 It is found to reorient slowest, with an exponent 0.25 < κ < 0.5. The system with persistence length l_p = 1.4 shows an algebraic decay faster than t^(−1/2). This is probably because it is so weakly entangled that the effects of entanglements are just starting to play a role. The exponents decrease systematically with persistence length, which, as discussed earlier, indicates an increasing degree of entanglement. The dependence on the degree of entanglement is supported by the very low exponent (κ = 0.29) for a system with l_p = 5 and chain length N = 1000.
(Table 4: algebraic fits (t^(−κ)) of the decay of double-quantum correlation functions C_DQ for N = 200 (see text and Figure 6). * The bottom line for l_p = 5 has chain length N = 1000 but the system is not equilibrated.)
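The exponents κ of such algebraic fits correspond to the slope of the linear region in the double-logarithmic plot; a sketch of this fit:

```python
import numpy as np

def fit_power_law(t, c, t_min, t_max):
    """Fit C(t) ~ A * t^(-kappa) on [t_min, t_max] via the slope of the
    double-logarithmic plot; returns kappa. Assumes t > 0 and C(t) > 0."""
    mask = (t >= t_min) & (t <= t_max) & (c > 0)
    slope, _ = np.polyfit(np.log(t[mask]), np.log(c[mask]), 1)
    return -slope
```

Applied to the reptation predictions discussed above, a pure t^(−1/4) or t^(−1/2) decay is recovered exactly.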
A t^(−1/4) dependence of the correlation was found experimentally by Graf et al. 6 for a system with very long and therefore highly entangled PB chains (76 entanglement molecular weights M_e). They also observed a power law with t^(−1/2) for a system with a molecular weight of 11M_e. Figure 6 directly compares simulation (at l_p = 5) and experiment; the ratio between the lengths in simulation and experiment is not exactly the same, but it is only important whether the system is slightly or far above M_e. The agreement shows that the simulations reproduce well the exponents found in experiments. To achieve this agreement, the time axis of the simulated correlation functions has been rescaled empirically by 0.153 and 0.5 for N = 50 ≈ 8N_e and N = 1000 ≈ 160N_e, respectively. This scaling may be used to infer a mapping to experimental times.
Interdependence of reorientation and translation of segments
If the reptation model holds, the reorientation process is coupled to the translation of the polymer in its tube. A useful relation to monitor is, therefore, the reorientation correlation function of the chain tangent vector versus the mean-squared displacement of the monomers defining it, irrespective of the time. This relation has to be averaged over a finite time window 2t_av, which is centered at some time t_m and does not necessarily start at t = 0.
Both C_reor and ∆r² depend parametrically on t. If, during t, the tube relaxes (reorients), then C_reor is zero. Any deviation from zero indicates that the orientation is correlated over the time interval corresponding to the displacement. In our analysis we have ruled out possible artifacts from the translation of the system as a whole, which could be present in Brownian dynamics.
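The relation C_reor(∆r²) can be evaluated by binning the P_2 correlation of each segment by the squared displacement of the same segment over the same time interval; a minimal sketch, assuming precomputed segment midpoints and unit tangent vectors per frame (the names and the binning scheme are ours):

```python
import numpy as np

def reorientation_vs_displacement(pos, tangents, t0, t1, nbins=20):
    """For all frames t in [t0, t1), bin P2(u(t).u(t0)) by the squared
    displacement of the segment midpoint between t0 and t.
    pos: (n_frames, n_seg, 3) segment midpoints;
    tangents: (n_frames, n_seg, 3) unit tangent vectors.
    Returns bin centers (dr^2) and the mean P2 correlation per bin."""
    p2 = lambda x: 1.5 * x * x - 0.5
    dr2 = np.sum((pos[t0:t1] - pos[t0]) ** 2, axis=-1).ravel()
    c = p2(np.sum(tangents[t0:t1] * tangents[t0], axis=-1)).ravel()
    edges = np.linspace(0.0, dr2.max() + 1e-12, nbins + 1)
    idx = np.minimum(np.digitize(dr2, edges) - 1, nbins - 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    means = np.array([c[idx == i].mean() if np.any(idx == i) else np.nan
                      for i in range(nbins)])
    return centers, means
```

A nonzero plateau at small ∆r² then signals that segments returning to (or staying near) a position keep their orientation, as discussed next.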
In Figure 7a, it is seen that this function at short t (0 < t < 30000) does not decay to zero, but shows a plateau, whose value depends on the stiffness: stiffer chains have a higher residual correlation. The presence of a plateau is a consequence of the finite length of the chain: every finite polymer has a trivial residual static orientation correlation between distant chain segments in the direction of the end-to-end vector. If the motion of the chain is predominantly along the fixed tube, this residual correlation is also visible in the dynamic C_reor(∆r²) shown here, since one chain segment samples the very same fixed tube at different times. We have seen in the preceding sections that stiffer chains have a higher reptation component in their dynamics. Hence, it is no surprise that they exhibit a larger residual correlation. Figure 7b shows, for the most interesting case l_p = 5, how C_reor(∆r²) depends on the position of the time window t_m. With increasing t_m, the reorientation correlation at ∆r = 0 goes to zero. At the same time, intensity moves to larger ∆r², so that eventually a maximum at ∆r² > 0 develops. This observation is explained by a scenario that includes, in addition to reptation, a diffusive or rather subdiffusive translation of entire chain segments through space, without significant reorientation of these segments: when a chain segment has reptated along the tube and comes back to its former part of the tube and hence its former orientation, it finds that this part of the tube itself has translated in the meantime. On the other hand, when it returns to its former absolute position (∆r² = 0) it is now in a completely different part of the tube and has a different orientation. This picture of short-scale transverse translation of stiff tube segments has been borne out by visualizations of individual chains. 25,26 The time dependence of the plateau value contains information about the stability of the initial neighborhood.
It measures the "similarity" of a segment's initial and later surroundings. These results support the presence of reptation in our systems, as the chains come back to their former surrounding, which has undergone only small changes in the meantime. As this memory effect preserves information about orientations, a tube picture is a suitable concept. However, the chains do not behave simply as the standard reptation picture would suggest. The reptation is considerably modified by their stiffness. Stiffer chains reptate in a more pronounced way, i.e. they follow the primitive path of the tube more closely as the stiffness suppresses the transversal motions efficiently. This leads to a higher degree of orientation memory for chains of the same length (Figure 7a).
Connection to Structure
The preceding section has shown that both the stiffness l_p and the length N have an effect on the dynamics of polymers already on the local level. In this section, we briefly review earlier results 21 about the local structure in polymer melts and how it is influenced by the chain architecture. This is done not only for comparisons within the model system. NMR experiments on melts have so far only been able to study the local dynamics. Any information on the structure had to be deduced from the dynamics using models. In contrast, the simulations of this work can be analyzed independently for both structure and dynamics. We therefore have an example case in which the assumptions used to analyze NMR experiments can be checked.
Of particular interest has been the question whether or not neighboring polymer chains are in any way aligned. 30 We therefore concentrate here on the static orientation correlation function OCF(r) = ⟨P_2(u_1 · u_2)⟩(r). This function measures the mutual orientation of tangent vectors (defined in Eq. 6) of segments belonging to two different chains 1 and 2 as a function of their distance r. The second Legendre polynomial is used again, this time because our polymer chains have no direction, i.e. head and tail are equivalent. The OCF is 1 for parallel orientation, −1/2 for perpendicular orientation, and 0 for random orientation. The detailed discussion of the various OCFs is given elsewhere. 15,21 Here we only note that the OCF is a strictly local property. The chain length N has no influence whatsoever on the short-range mutual orientation of two chains, even if one N is below the entanglement length N_e ≈ 32 and the other above (Figure 8a). The influence of the stiffness l_p on the structure is clearly visible (Figure 8b), but small. We may conclude that the two parameters N and l_p, which both influence the local reorientation dynamics significantly, have little (l_p) or no (N) effect on the local packing of chains. The fact that the chain length is not important for the local structure means that entanglements cannot be important either. This is yet another manifestation of the entanglement length N_e being a purely dynamical quantity.
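The OCF can be accumulated over pairs of bond vectors on different chains, binned by the distance between their midpoints; an illustrative O(N²) sketch without periodic boundaries (the names are ours):

```python
import numpy as np

def ocf(positions, chain_ids, r_max=3.0, nbins=30):
    """Static inter-chain orientation correlation: <P2(u1.u2)> over pairs of
    bond vectors on different chains, binned by the distance of bond midpoints.
    positions: (n_monomers, 3); chain_ids: (n_monomers,) chain index per monomer."""
    same_chain = chain_ids[1:] == chain_ids[:-1]           # valid intra-chain bonds
    bonds = (positions[1:] - positions[:-1])[same_chain]
    mids = 0.5 * (positions[1:] + positions[:-1])[same_chain]
    bond_chain = chain_ids[1:][same_chain]
    u = bonds / np.linalg.norm(bonds, axis=1)[:, None]
    edges = np.linspace(0.0, r_max, nbins + 1)
    sums, counts = np.zeros(nbins), np.zeros(nbins)
    for i in range(len(u)):
        other = bond_chain != bond_chain[i]                # inter-chain pairs only
        r = np.linalg.norm(mids[other] - mids[i], axis=1)
        c = 1.5 * (u[other] @ u[i]) ** 2 - 0.5             # P2; sign of u irrelevant
        mask = r < r_max
        idx = (r[mask] / r_max * nbins).astype(int)
        np.add.at(sums, idx, c[mask])
        np.add.at(counts, idx, 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, np.where(counts > 0, sums / np.maximum(counts, 1.0), np.nan)
```

Because P_2 is even in the dot product, the result is insensitive to the (arbitrary) head-tail direction of the bonds, matching the symmetry argument above.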
Local packing is, however, strongly influenced by another local quantity (which, in contrast, contributes little to the dynamics 31 ), namely the excluded volume of the monomers. An example of this is shown in Figure 9. Here, the OCF for a flexible chain (l p = 1, N = 50) is overlaid by the OCF for dimers on a randomly perturbed lattice. The monomers occupy fcc lattice sites whose positions were randomly displaced by small amounts to emulate finite temperature. The OCF was then evaluated between all possible pairs of dimers which do not share a common atom. Although the OCF for the dimers is much more accentuated than for the amorphous melt one can clearly see the short-range orientation of dimers shining through in the OCF of the melt. One may, therefore, conclude that local structure and local dynamics in polymer melts are dominated by different properties of the polymer. For this reason, it may be difficult to infer one from experimental results on the other. 31
Conclusions
The reorientation of short segments in polymer chains in dense melts is governed by two successive processes. The fast one leads to an algebraic decay of the reorientation correlation function and the slow one to an exponential decay. The correlation times of both depend on the chain length as well as on the chain stiffness. Increasing chain stiffness leads to a strong slowing of the reorientation on both time scales.
A qualitative comparison of our reorientation correlation function showed that the power laws of the reorientation correlation function measured in double-quantum NMR experiments of systems not too far above the entanglement molecular weight could be reproduced. Therefore, our simplistic model, which is probably the simplest possible model incorporating stiffness and excluded volume, successfully describes the qualitative features of the dynamics. With our results thus validated against experimental data, the reorientation correlation functions C_reor(t) can be regarded as meaningful. In contrast, our simple model cannot explain two other experimental observations. The initial plateau of C_reor(t), the so-called dynamical order parameter, 6 of the experiment is not found, and in experiment the difference between persistence length and entanglement length is larger. Detailed atomistic models are probably necessary in order to capture these features. 26,31 One main result of this investigation is that, for the local reorientation to relax slowly, the chains have to be both stiff and long. Local stiffness ensures that the memory of orientation is not already lost during the fast algebraic process. On the other hand, the chains have to be above the entanglement length for the slow exponential process to extend into the experimentally observable regime (milliseconds in NMR). Although in principle both processes are always present, they have to span a big enough range in intensity and time in order to be detectable in experiments. In agreement with our results, it has been found previously that the chain length has only little influence on the local reorientation, as long as it is below the entanglement length. In united-atom polyethylene below the entanglement length, local orientation relaxes like a stretched exponential, whereas the relaxation of the end-to-end vector is well described by the Rouse model. 19
Although the dynamic reorientation correlation functions are found to be in qualitative agreement with experiments and theoretical predictions, our work shows that this does not necessarily imply an increase of static order due to topological entanglements of the polymer chains. The static local order increases with chain stiffness but does not at all depend on chain length. The entanglement length has emerged as a central quantity to explain the dynamics of flexible and especially stiff chains. It is not easily determined uniquely, as there are several different definitions which lead to different values. This difficulty becomes even worse for chains with intrinsic stiffness. Nonetheless, we can say that the entanglement length in any definition decreases dramatically as the persistence length is increased. Evidence for this is found in the chain-length dependence of the center-of-mass diffusion coefficient and in the segment size governing the long-time relaxation of local order. In the system with l_p = 5, it is questionable whether any length scale can be described by Rouse dynamics. The very local scales up to the persistence length are dominated by bending modes and the very large length scales are dominated by entanglements.
As N e l b approaches l p , the Rouse regime disappears between these two extremes.
A renormalization of the local-scale properties onto an effective monomer or Kuhn segment is possible for static structural aspects, as there is only one relevant length scale, namely the persistence length. However, this renormalization fails for the dynamical properties of stiff polymers because two length scales (and the associated time scales) interdepend. 25 In dynamics one encounters, in addition to the persistence length, the entanglement length, which describes the topological constraints imposed by the non-crossability of the chains. When the two scales come into the same order of magnitude, new behaviors emerge which cannot be deduced by renormalizing to Rouse dynamics or other simple models. The fact that the dynamics is not described appropriately by analytical theories also implies that the connection between translational and rotational dynamics is not known a priori. This has to be kept in mind when interpreting NMR experiments, which predominantly measure reorientation.
The concept of reptation is supported by the existence of a time-dependent plateau value of reorientation as a function of the length of the diffused path in our simulations. Reptation is more pronounced if the chains are stiffer because of both the intrinsic stiffness and the stronger entanglement. They lead to the chain being more closely confined to the primitive path of the tube. An analysis of the dependence of N e on l p and further consequences for chain and monomer motions will be discussed in more detail elsewhere. 25 | 2014-10-01T00:00:00.000Z | 2000-05-25T00:00:00.000 | {
"year": 2000,
"sha1": "e54e90f0fb3876e4908c023809c3beba0cf21fb4",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/cond-mat/0005432",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "e54e90f0fb3876e4908c023809c3beba0cf21fb4",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Chemistry",
"Physics"
]
} |
118677382 | pes2o/s2orc | v3-fos-license | The mechanism of anisotropic exchange interaction in superconducting iron arsenides
Using a combination of linear response theory and a constrained orbital hybridization approach, we study the mechanism of the magnetic exchange interaction of iron-based superconductors. We reproduce the observed highly anisotropic exchange interaction, and our constrained-orbital calculation unambiguously identifies that the anisotropic feature of the exchange interaction is not sensitive to the unequal d$_{xz}$/d$_{yz}$ orbital population.
The discovery of high-temperature superconductivity in iron arsenides has attracted intense research interest 1,2,3,4,5 . While the mediator of pairing in these systems remains unidentified, a large amount of circumstantial evidence points to magnetic spin fluctuations. Therefore, a tremendous amount of effort has been devoted to understanding the magnetic properties 5,6,7,8,9,10,11,12,13,14,15,16 .
However, despite vast efforts, the nature of magnetism in the iron-based superconductors is still a hotly debated topic 1 . Early theoretical studies suggested that superconducting iron arsenides have an antiferromagnetic spin-density-wave (SDW) instability due to Fermi-surface nesting 6,7 . Neutron scattering experiments 8 confirmed that LaFeAsO indeed exhibits the predicted stripe antiferromagnetic (S-AFM) long-range ordering followed by a small structural distortion. However, the observed magnetic moment is much smaller than the theoretical one 8 . Moreover, although the general picture fits with an SDW model, there remain problems in matching to a purely itinerant scenario. In particular, the increased conductivity found in the SDW state is not expected if a portion of the carriers become gapped 1 . Alternatively, a Heisenberg magnetic exchange model has been proposed to explain the magnetic behavior 9,10,11 . It has been suggested that nearest-neighbor and next-nearest-neighbor interactions between local Fe moments are both antiferromagnetic and of comparable strength 9,10 , which results in magnetic frustration. These frustration effects have been used to explain the structural phase transition and the small ordered moment 9 . It was also suggested that the structural transition is actually a transition to a "nematic" ordered phase, which occurs at a higher temperature than the SDW transition 13 .
On the other hand, a short-range and highly anisotropic exchange interaction was predicted theoretically 14 and subsequently confirmed by neutron scattering measurements 8 . Understanding this unexpected anisotropy is a hot topic 16,17,18,19 . As a natural way to break the symmetry, orbital ordering (OO) has attracted intensive research attention 16,17,18,19 , and there is increasing experimental evidence of orbital physics 20 . Band-structure calculations propose that the degeneracy between the d xz and d yz orbitals is lifted and there is a ferro-orbital ordering, which results in not only the strongly anisotropic exchange but also the structural transition 16 . However, since the electronegativity of As is much smaller than that of O, the crystal-field effect upon the 3d orbitals of Fe is much weaker than in transition-metal oxides; consequently, the orbital polarization is quite small. The OO has also been supported by model calculations, but it is not clear whether the exchange anisotropy is related to OO or not 17 . Therefore, an extensive study of the mechanism of the exchange interaction is an important problem. In this work we address this issue using the linear response approximation 21 as well as a recently developed constrained orbital hybridization approach 22 . While our linear response approximation reproduces the known anisotropic exchange interaction, our constrained orbital calculation allows us to provide conclusive theoretical insight into the various contributions to the magnetic exchange interactions.
We perform our electronic structure calculations based on the full-potential, all-electron linearized muffin-tin-orbital (LMTO) method 23 . Since for this system the local spin density approximation (LSDA) can give reasonable results 24,25 , we adopt it as the exchange-correlation potential. With the electronic structure information, we estimate the exchange interaction J based on a magnetic force theorem 26 that evaluates the linear response due to rotation of magnetic moments 21 . This technique has been used successfully for evaluating magnetic interactions in a series of compounds 14,21,27,28 . The main results and conclusions are found to be the same for all iron arsenides; we therefore focus on LaFeAsO in the following. The calculations are performed on the high-temperature tetragonal structure 29 . The x and y axes are taken to be along the Fe-Fe bond direction, with the x axis chosen along the AFM ordered direction of S-AFM as shown in Fig.1. Our calculated ground state properties, including the magnetic ordering configuration, density of states and band structure, are found to be in good agreement with previous theoretical results 14 . Based on the electronic structure information, we evaluate the interatomic exchange constants as an integral over q space using an (8,8,8) reciprocal lattice grid. Our numerical results show that, despite the metallic nature, the exchange interaction is short-ranged, with the magnetic coupling beyond the second-nearest neighbor being almost zero. The short-range feature of the exchange interaction may be caused by the small density of states at the Fermi energy. We reproduce the experimentally observed strongly anisotropic near-neighbor exchange interaction 8 . With the convention that positive J means antiferromagnetic coupling, our numerical values of J 1x , J 1y and J 2 are 47.9, -8.0 and 21.0 meV, respectively, in good agreement with previous theoretical results 14 .
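As a quick plausibility check on these numbers, a classical collinear estimate (our illustrative sketch, not part of the paper's calculation) shows that the reported J values strongly favor the stripe (pi, 0) ordering over the checkerboard (pi, pi) one. Spins are reduced to +/-1 and the square-lattice bond count is one x-bond, one y-bond, and two diagonal bonds per site:

```python
# Reported exchange constants (meV); positive J = antiferromagnetic,
# following the sign convention stated in the text.
J1X, J1Y, J2 = 47.9, -8.0, 21.0

def energy_per_site(pattern, j1x=J1X, j1y=J1Y, j2=J2):
    """Classical energy per Fe site, E = sum over bonds of J * s_i * s_j,
    for collinear spin patterns (s_i = +/-1) on a square lattice.
    Bond counting per site: one x-bond, one y-bond, two diagonal bonds."""
    if pattern == "stripe":          # wavevector (pi, 0): AFM along x, FM along y
        sx, sy, sd = -1, +1, -1      # diagonal neighbors are antiparallel
    elif pattern == "checkerboard":  # wavevector (pi, pi): AFM along x and y
        sx, sy, sd = -1, -1, +1      # diagonal neighbors are parallel
    else:
        raise ValueError(pattern)
    return j1x * sx + j1y * sy + 2 * j2 * sd

e_stripe = energy_per_site("stripe")         # -47.9 - 8.0 - 42.0 = -97.9 meV
e_checker = energy_per_site("checkerboard")  # -47.9 + 8.0 + 42.0 =   2.1 meV
```

With these values the stripe pattern gains roughly 100 meV per site relative to the checkerboard pattern, consistent with the S-AFM ground state found in the calculations; the ferromagnetic J 1y makes the stripe configuration doubly favorable.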
Our LSDA calculation confirms that there is a small orbital polarization; the difference between the occupations of the d xz and d yz orbitals is 0.135, which is very close to the previous theoretical value (0.141) 30 . The magnetic moments of the d xz and d yz orbitals are 0.202 and 0.361 µ B respectively, also consistent with the previous calculation (0.149 and 0.338 µ B ) 30 . After reproducing the orbital/spin polarization, we calculated the J 's with an artificial constrained external potential applied to the d xz orbital of Fe to adjust its energy and thereby control the orbital occupation, so that we can check the exact effect of the unequal d xz /d yz orbital population 22 . As shown in Fig.2(a), J 2 is almost independent of the shifting of the d xz level and the associated orbital polarization, contrary to the suggestion of a strong dependence in Ref. 16 . Although the values of J 1x and J 1y do depend on the OO, as shown in Fig.2(b) and Fig.2(c), even when the d xz and d yz orbitals have the same occupation (i.e., OO equal to zero), there is still strong anisotropy between them (J 1x is almost twice as large as J 1y ). Thus, we can conclude that the d xz and d yz orbitals do have unequal populations, but the anisotropic exchange is not related to this.
It is well known that the strength of hybridization between two orbitals strongly depends on their energy difference; therefore, the exchange interaction will be sensitive to the shifting of a particular orbital if that orbital participates in the exchange process. We thus apply the constrained-hybridization approach 22 to analyze the possible virtual exchange mechanisms directly. This technique has been used successfully for perovskite ruthenates and europium monochalcogenides 22,28 . It turns out that an upshift of the La 5d orbital or a downshift of the O 2p orbital does not affect the exchange interaction. Therefore, the exchange process happens almost completely within the FeAs layer, and the inter-layer exchange interaction is negligible.
In addition to d xz , we also shift the other 3d orbitals of Fe. Shifting the 3d orbitals changes the orbital occupation; however, only shifting the d xy orbital has a considerable effect on J 2 . Since the As anion is located above the center of the Fe plaquette, one can expect that the hybridization between Fe-d xy and As-p x±y is strong. Thus, our numerical results clearly show that J 2 is mainly contributed by the As-bridged antiferromagnetic superexchange. In contrast to J 2 , all 3d orbitals have a large effect on J 1x and J 1y , which indicates the importance of exchange due to direct hopping between nearest-neighbor Fe 3d electrons. It is well known that the interatomic magnetic interaction is basically a band structure effect, and the spin ordering affects the covalency and the details of the bonding topology. Therefore, it is not surprising that the exchange interaction depends on the magnetic configuration. For example, our additional calculation shows that even for NiO, which has a well defined local moment, there is about a 10% difference between the J obtained from AFM and FM configuration calculations. The magnetism in iron arsenides is much more itinerant; moreover, there is a competition between the As-Fe superexchange and the direct Fe-Fe exchange interaction. The combination of these effects results in the highly anisotropic nearest-neighbor exchange interaction.
To clarify the relation between the structural transition and the magnetic properties, we also perform calculations for the low-temperature orthorhombic phase 29 . As in the high-temperature tetragonal structure, for the orthorhombic phase the S-AFM configuration is lower in energy than the other states. We reproduce that the ground state is the one with the magnetic moments at the iron sites aligned antiparallel along the longer a axis. However, both the obtained magnetic moment (1.67 µ B ) and the exchange interactions (J 1x =48.2, J 1y =-10.1, and J 2 =21.1 meV) are almost the same as those in the high-temperature phase. Moreover, we optimize the lattice parameters and the internal atomic coordinates for both stripe antiferromagnetic ordering (S-AFM) and checkerboard antiferromagnetic ordering (C-AFM). Our numerical results confirm that the structure of the Fe pnictide depends only weakly on the magnetic configuration. Therefore, the exchange-striction effect, which has been used to explain non-centrosymmetric structural distortions and the associated multiferroics 31 , cannot be used to explain the orthorhombic-tetragonal transition.
In summary, based on a combination of linear response theory and a constrained orbital hybridization approach, we have studied the mechanism of the magnetic exchange interaction of iron-based superconductors. Our results unambiguously identify that the magnetic exchange process happens within the FeAs layer and that the highly anisotropic feature of the exchange interaction is not related to the orbital polarization. The magnetism is at least partially itinerant, which results in the anisotropic exchange interaction. While the next-nearest-neighbor interaction J 2 is mainly contributed by the As-bridged superexchange, the direct Fe-Fe exchange has a considerable effect on the nearest-neighbor interactions J 1x and J 1y .
The work was supported by National Key Project for Basic Research of | 2012-05-05T07:13:23.000Z | 2012-04-23T00:00:00.000 | {
"year": 2012,
"sha1": "163bbe39fd9877503d56d926b9b629a9a4604f05",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1205.1109",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "163bbe39fd9877503d56d926b9b629a9a4604f05",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
238011615 | pes2o/s2orc | v3-fos-license | Application of FME Deterministic Model in the Calculation of a Reservoir Arch Dam
At present, the analysis of monitoring data for the stress of a dam is mostly based on statistical models. However, the stress monitoring data of some arch dams have considerably large errors, so it is hard to build a reasonable statistical model from the monitoring data. In order to solve this practical engineering problem, this paper calculates the elastic modulus of the dam by finite element analysis based on the displacement of the hydraulic component separated from the statistical model of horizontal displacement. Then, according to the inverted elastic modulus, the dam stress under different water levels and temperature conditions is calculated. Finally, a deterministic stress model of the dam is built.
Introduction
A reservoir is located in Taizhou, Zhejiang Province. The normal high water level before the dam is 176.0m. The check flood level is 186.3m. The dead storage level is 140m. The power generation dead water level is 156m. The dam is a concrete double-curved arch dam with variable radius and central angle. The maximum height is 74.3m, and the maximum bottom width is 15.5m. The dam crest width is 4.0m, and the thickness-height ratio is 0.208, and the dam crest length is 265.5m. The arc radius is 33m~110m, and the center angle is 70°~134°.
At present, the analysis of monitoring data for the stress of dams is mostly based on statistical models [1]. However, according to the actual situation of the stress monitoring data of this reservoir dam, the resistance readings of the strain gauges have fluctuated over a large range since 1990. In view of the unstable measurement values and the large errors, it is impossible to establish a reasonable stress statistical model. In this paper, through the three-dimensional finite element method, the displacement of the water pressure component separated by the horizontal displacement statistical model is used to invert the elastic modulus of the dam [2]. Then this paper uses the inverted elastic modulus to calculate the dam body stress at different water levels and temperatures, and establishes a more reasonable deterministic dam stress model [3].
3 Three-dimensional finite element numerical simulation analysis
3.1 Model building
The cross section of the valley at the dam site is U-shaped, and the bedrock is fused tuff. According to the actual size of the reservoir dam and elastic finite element theory, the length of the bedrock area along the flow direction is taken as 1 times the dam width, and the height is taken as 1 times the dam height [4]. Three-dimensional constraints are imposed on the bottom of the bedrock, and normal constraints are imposed on the upstream and downstream faces of the bedrock and along the dam axis. When building the three-dimensional finite element model, the dam part simulates important structures such as overflow dam sections, spill holes, and drainage corridors, so that the final calculation results can reflect the actual deformation. When arranging the nodes, the measuring points are made nodes wherever possible.
The mesh uses hexahedral 8-node and pentahedral 6-node isoparametric elements. The X direction is positive toward the downstream, and the Y direction is positive in the vertical direction.
The finite element calculation model is shown in Fig.1 and 2.
Numerical simulation results of stress under water pressure component
According to the analysis of monitoring data, the arch dam is in a good elastic working state, so this calculation uses a linear elastic model. When establishing the deterministic model of the main stress of the arch dam, it is considered that the abutment and the dam are areas with large stress values, and the actual arch dam also has many cracks in this area. Therefore, this paper takes the dam shoulder at 185m elevation and the arch crown beam at 156m elevation as the study area of the stress model.
The relationship curves between reservoir water level and principal stress under the water pressure component are shown in Fig.3 and Fig.4. It can be seen from Fig.3 that on the upstream face of the dam body, the main tensile stress is small as a whole. As the water level rises, the main tensile stress at the dam shoulder and in the middle of the dam gradually decreases, and the principal compressive stress on the upstream surface remains less than the allowable compressive stress of the concrete. As the water level rises, the main compressive stress at the dam shoulder changes little, while the main compressive stress in the dam gradually increases, reaching 3.29 MPa at the check flood level.
It can be seen from Fig.4 that on the downstream face of the dam, the main tensile stress in the middle of the dam is larger. As the water level rises, the main tensile stress at the abutment gradually decreases, while the main tensile stress in the dam gradually increases, reaching 1.18 MPa at the check flood level. As the water level rises, the main compressive stress at the abutment and in the middle of the dam gradually increases, reaching 2.24 MPa at the check flood level.
Numerical simulation results of stress under temperature component
Temperature action is one of the main loads in arch dam design. As the temperature in the dam changes, the dam body deforms. Because the dam body is embedded in bedrock and cannot expand and move freely, thermal stress is generated in the dam. When calculating the temperature stress, the closure temperature of the dam is considered, and the external temperature boundary load is applied according to the water level and water temperature distribution in the upper and lower reaches of the reservoir and the average temperature at the dam site over the years [5]. The average temperature of each month over the years was taken as the temperature load on the dam boundary to calculate the stress of the dam body and establish the corresponding relationship curves between the principal stress values and the month, as shown in Fig. 5 and Fig.6. Fig. 5 and Fig. 6 show that: (1) In high-temperature months: the tensile stress in the upstream dam is large and reaches its maximum in July, with a maximum principal tensile stress in the dam of 1.11 MPa. The compressive stress at the abutment of the upstream face is larger and reaches its maximum (1.08 MPa) in July. The compressive stress in the downstream dam is large and reaches its maximum in July, with a maximum principal compressive stress in the dam of 1.00 MPa.
(2) In low-temperature months: the maximum principal tensile stress at the abutment is large, reaching its maximum (1.00 MPa) in January and February. The compressive stress on the upstream face is small. Tensile stresses of varying degrees appear at the abutment and in the middle of the downstream face. The compressive stress on the downstream face is also small.
Deterministic model results
According to dam engineering theory and elastic theory, the stress of the dam body caused by water pressure can be expressed by a polynomial of the water head, and the temperature stress caused by the changing temperature can be expressed by periodic terms.
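The model form described here can be sketched as an ordinary least-squares regression: polynomial terms in the water head for the hydraulic component plus annual harmonics for the thermal component. The polynomial order, number of harmonics, and the synthetic data below are our illustrative assumptions, not values from the paper:

```python
import numpy as np

def design_matrix(head, month, n_poly=3, n_harm=2):
    """Regression matrix for a deterministic stress model: a polynomial
    in the water head H (hydraulic component) plus annual harmonics
    (thermal component). Orders are illustrative choices."""
    cols = [head ** k for k in range(n_poly + 1)]  # 1, H, H^2, H^3
    for j in range(1, n_harm + 1):                 # periodic (thermal) terms
        w = 2 * np.pi * j * month / 12.0
        cols += [np.sin(w), np.cos(w)]
    return np.column_stack(cols)

# Synthetic demonstration data (hypothetical, for illustration only).
rng = np.random.default_rng(0)
month = np.tile(np.arange(1, 13), 10).astype(float)  # 10 years of monthly data
head = 140.0 + 40.0 * rng.random(month.size)         # reservoir level, m
stress = (0.02 * head + 0.5 * np.sin(2 * np.pi * month / 12.0)
          + rng.normal(0.0, 0.01, month.size))       # "measured" stress, MPa

A = design_matrix(head, month)
coef, *_ = np.linalg.lstsq(A, stress, rcond=None)
fit = A @ coef
```

Fitting such a model to the finite-element results at different water levels and months yields the coefficients of the deterministic model, which can then be compared against measured behavior.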
Analysis of actual dam cracks
The arch dam has been in operation for many years, and many cracks have appeared in the dam under different loads. The schematic diagram of the downstream dam surface crack is shown below. According to the results of the deterministic model and Fig.7 and Tab.3, it can be seen that: (1) Cracks in the downstream surface are mostly concentrated in the dam and abutment area. Among them, there are several penetrating fractures of more than 10m.
(2) The downstream face of the dam presents a large tensile stress area in the middle and at the abutments, which is basically consistent with the actual crack area in Fig. 7. This also supports the reliability of the deterministic model to some extent.
Main conclusions
In this paper, a deterministic stress model of an arch dam is obtained based on the elastic modulus parameters of the dam body after inversion. The following conclusions were drawn: (1) This paper provides a deterministic model for reference when the stress monitoring data of a dam are unreliable. The elastic modulus of the dam is based on | 2021-08-27T16:43:57.430Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "415e08282772e58b2953563abfc5d13796f1428c",
"oa_license": "CCBY",
"oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2021/52/e3sconf_wchbe2021_01030.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "f182f48bc16fa84c1ce62d62a92fce2a9f6d2494",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": [
"Geology"
]
} |
196648377 | pes2o/s2orc | v3-fos-license | 50-valent inactivated rhinovirus vaccine is broadly immunogenic in rhesus macaques
As the predominant etiological agent of the common cold, human rhinovirus (HRV) is the leading cause of human infectious disease. Early studies showed that monovalent formalin-inactivated HRV vaccine can be protective, and virus-neutralizing antibodies (nAb) correlated with protection. However, the co-circulation of many HRV types discouraged further vaccine efforts. We approached this problem straightforwardly. We tested the hypothesis that increasing virus input titers in a polyvalent inactivated HRV vaccine will result in broad nAb responses. Here, we show that serum nAb against many rhinovirus types can be induced by polyvalent, inactivated HRVs plus alhydrogel (alum) adjuvant. Using formulations up to 25-valent in mice and 50-valent in rhesus macaques, HRV vaccine immunogenicity was related to a sufficient quantity of input antigens, and valency was not a major factor for potency or breadth of the response. We have, for the first time, generated a vaccine capable of inducing nAb responses to numerous and diverse HRV types.
HRV causes respiratory illness in billions of people annually, a socioeconomic burden 1 . HRV also causes pneumonia hospitalizations in children and adults and exacerbations of asthma and chronic obstructive pulmonary disease (COPD) 2 . HRV was found to be the second leading
There are three species of HRV, A, B, and C. Sequencing methods define 83 A types, 32 B types, and 55 C types 20,21 . It is thought there are 150 to 170 serologically distinct HRV types.
HRV A and C are associated with asthma exacerbations and with more acute disease than HRV B 22,23 . HRV C was discovered in 2006 and 2007 24-27 and recently cultured in cells 28,29 . Here, we focused on HRV A, the most prevalent species. There are no permissive animal challenge models of HRV virus replication, but mice and cotton rats can recapitulate aspects of HRV pathogenesis 30,31 . The best efficacy model is human challenge. In monovalent vaccine trials, formalin-inactivated HRV-13 was validated prior to clinical testing by assessing induction of nAb in guinea pigs, and a reciprocal serum nAb titer of 2^3 resulting from four doses of a 1:125 dilution of the vaccine correlated with vaccine efficacy in humans 9 . Although the nAb titer required for protection is not defined, early studies established inactivated HRV as protective in humans, and immunogenicity in animals informed clinical testing.
Results and discussion
We first used BALB/c mice to test immunogenicity. We propagated HRVs in H1-HeLa cells and inactivated infectivity using formalin. Sera from naïve mice had no detectable nAb against HRV-16 (Fig. 1). Alum adjuvant enhanced the nAb response induced by i.m. inactivated HRV-16 (Fig. 1). There was no effect of valency (comparing 1-, 3-, 5-, 7-, and 10-valent) on the nAb response induced by inactivated HRV-16 or to the 3 types in the 3-valent vaccine (HRV-16, HRV-36, and HRV-78) (Fig. 1). The 50% tissue culture infectious dose (TCID 50 ) titers of the input viruses prior to inactivation (inactivated-TCID 50 ) are provided in Supplemental Table 1. Original antigenic sin can occur when sequential exposure to related virus variants results in biased immunity to the type encountered first 32 . In bivalent HRV-immune mice, we observed modest original antigenic sin following prime vaccination with 10-valent inactivated HRV, and boost vaccination partially alleviated the effect (Supplemental Fig. 1), similar to influenza virus 32 . Collectively, these results prompted us to explore more fully the nAb response to polyvalent HRV vaccine.
In 1975, it was reported that two different 10-valent inactivated HRV preparations induced nAb titers to only 30-40% of the input virus types in recipient subjects 33 . However, the input titers of viruses prior to inactivation ranged from 10^1.5 to 10^5.5 TCID 50 per ml, and these were then diluted 10-fold to generate 10-valent 1.0 ml doses given i.m. as prime and boost with no adjuvant 33 . We hypothesized that low input antigen doses are responsible for poor nAb responses to 10-valent inactivated HRV. We reconstituted the 1975 10-valent vaccine, as closely as possible with available HRV types, over a 10^1 to 10^5 inactivated-TCID 50 per vaccine dose range, and we compared it to a 10-valent vaccine of the same types with input titers ranging from > 10^5 to > 10^7 inactivated-TCID 50 per dose. The reconstituted 1975 vaccine resulted in no detectable nAb after prime vaccination and, following boost vaccination, nAb to the five types that had the highest input titers (Fig. 2). The high titer vaccine resulted in nAb to 5 of 10 types after prime vaccination and all 10 types after the boost (Fig. 2). Following the boost vaccinations, there appeared to be a 10^4 inactivated-TCID 50 per vaccine dose threshold for the induction of nAb in this model (Fig. 2b). Above this titer, there was no correlation between input load and nAb Table 2) to accommodate the volume adjustment. The 10-valent inactivated HRV vaccine induced nAb to 100% of input types following the prime and the boost (Fig. 3a). The nAb induced by 10-valent inactivated HRV were persisting at 203 days post-boost (Supplemental Fig. 2). The 25-valent inactivated HRV prime vaccination induced nAb to 18 of 25 (72%) virus types, and the 25-valent boost resulted in nAb against 24 of the 25 types (96%) (Fig 3b). The average nAb titer resulting from prime + boost was 2^7 for 10-valent and 2^6.8 for 25-valent.
The data demonstrate broad neutralization of diverse HRV types with a straightforward vaccine approach.
In order to increase vaccine valency, we chose rhesus macaques (RMs) and a 1.0 ml i.m. Table 3). The 25-valent vaccine induced nAb to 96% (RM A) and 100% (RM B) of input viruses following the prime vaccination (Fig. 4a). The 50-valent vaccine induced nAb to 90% (RM C) and 82% (RM D) of input viruses following the prime vaccination (Fig. 4c). The breadth of nAb following prime vaccination in RM was superior to what we observed in mice, which may have been due to animal species differences and/or higher inactivated-TCID 50 input titers in the RM vaccines.
Following boost vaccination, there were serum nAb titers against 100% of the types in 25-valent HRV-vaccinated RMs (Fig. 4b) and 98% (49 out of 50) of the virus types in 50-valent HRV-vaccinated RMs (Fig. 4d). The average nAb titer resulting from prime + boost in RMs was 2^9.3 for 25-valent and 2^8.6 for 50-valent. The nAb responses were type-specific, not cross- inactivated-TCID 50 per type per dose will be useful. Therefore, HRV stock titers ≥ 10^7 TCID 50 per ml are required for a potential 83-valent HRV A formulation in a 0.5 ml dose containing alum adjuvant. The HRV stocks used in our vaccinations were produced in H1-HeLa cells, a good substrate for HRV replication but not suitable for vaccine manufacturing. We compared the infectious yield of 10 HRV types in H1-HeLa and WI-38, which can be qualified for vaccine production. Adequate yields were obtained from WI-38 cells (Supplemental Fig. 4). Injectable vaccines require defined purity. As proof of principle, we purified three HRV types by high performance liquid chromatography and found uncompromised immunogenicity of trivalent inactivated purified HRV in mice (Supplemental Fig. 5). Forty years ago, the prospects for a polyvalent HRV vaccine were dour for good Advancing valency may be applicable to vaccines for other antigenically variable pathogens. In mice, peripheral blood was collected into microcentrifuge tubes from the submandibular vein.
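The stock-titer requirement quoted above can be checked with simple dose arithmetic. The fill fraction (the share of the dose volume assumed to be available for the pooled virus stocks, the rest being adjuvant and diluent) is our assumption for illustration, not a figure from the paper:

```python
def min_stock_titer_per_ml(n_types=83, dose_ml=0.5, per_type_tcid50=1.0e4,
                           fill_fraction=0.5):
    """Back-of-envelope minimum stock titer (TCID50 per ml) so that every
    type reaches the ~1e4 inactivated-TCID50-per-dose threshold suggested
    by the mouse data. fill_fraction is a hypothetical share of the dose
    volume left for virus stocks after adjuvant/diluent."""
    vol_per_type_ml = dose_ml * fill_fraction / n_types
    return per_type_tcid50 / vol_per_type_ml

titer = min_stock_titer_per_ml()
```

Under these assumptions the minimum works out to roughly 3 x 10^6 TCID 50 per ml, so the stated target of ≥ 10^7 TCID 50 per ml leaves about a three-fold margin for inactivation and handling losses.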
Samples were incubated at room temperature for 20 min to clot. The tubes were centrifuged 7500 × g for 10 min to separate serum. The serum samples were pooled from mice of each group and stored at −80 °C until used. Phlebotomy involving RMs was performed under either ketamine (10 mg/kg) or Telazol (4 mg/kg) anesthesia on fasting animals. Following anesthesia with ketamine or Telazol, the animals were bled from the femoral vein. Yerkes blood collection guidelines were followed and no more than 10 ml/kg/28 days of blood was collected. After for each group, and nAb titers (y-axis) were measured against the indicated types in the vaccines.
304
The dashed line represents LOD. Undetectable nAb were assigned LOD/2, and some symbols 305 below LOD were nudged for visualization. Three independent experiments using low input titers 306 showed similar results. There was a statistically significant association between input TCID 50 307 virus titer and a detectable nAb response following prime (P = 0.01) and boost (P = 0.03) 308 vaccination (Fisher's exact test).
311
The inactivated-TCID 50 input titers per dose are specified in Supplemental Table 2 | 2016-11-01T19:18:48.349Z | 2016-05-17T00:00:00.000 | {
"year": 2016,
"sha1": "f54a007d9fac4188a44af0914a89edf845790085",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1038/ncomms12838",
"oa_status": "GREEN",
"pdf_src": "BioRxiv",
"pdf_hash": "2d572219ab227378b5eefae621a0ad19dcf52b3c",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
Sustainability as a Key Factor in Tourism Competitiveness: A Global Analysis
The aim of this study is to analyze the relationship between sustainability and tourism competitiveness and potential differences in these parameters between geographical regions. The Travel and Tourism Competitiveness Index (TTCI) of the World Economic Forum is most commonly used to measure tourism competitiveness; however, this index has been criticized by some academics. We propose a synthetic indicator (I_mα) using the multicriteria double reference point method, which can measure tourism competitiveness more accurately by applying different degrees of substitutability among pillars. The Sustainable Development Index (SDG Index) frames the implementation of the Sustainable Development Goals and was used to analyze sustainability data. The new tourism competitiveness index (I_mα) was obtained at both the global and regional levels. It is important to note that some countries rank differently in the regional and global tourism competitiveness indexes, which reveals different behaviors among regions. The relationship between sustainability and tourism competitiveness is positive in all the analyses performed, though it is stronger when calculated without allowing substitutability, especially when considering the regional index. These trends show the value of this regional study of tourism competitiveness because, in addition to helping managers develop strategies to improve tourism competitiveness, it allows them to know the effect that these strategies will have on sustainability.
Introduction
Tourism destination competitiveness (TDC) is highly relevant in the academic field and has been studied for more than 30 years. All tourism activities have an environmental footprint, and activities such as travel, accommodation, and food production and consumption can have negative impacts if not adequately managed. However, tourism has several benefits, including the higher involvement of administrative agencies in conserving natural resources, creation of economic value, regional development, and environmental and cultural heritage protection. Tourism integrated into local communities, distanced from mass tourism, is necessary to provide a more realistic experience, which raises awareness of the need to preserve the traditions of local communities and their surroundings and practice sustainable tourism.
In September 2015, during the Special Summit on Sustainable Development, the United Nations member states approved the 2030 Agenda for Sustainable Development, a program that involves a global commitment to achieve economic growth, including social and environmental sustainability, in all countries. The approved plan has 17 Sustainable Development Goals (SDGs) that aim to promote actions in the coming years in critical areas relevant to society and the environment [1].
To complement the official SDGs, the Sustainable Development Solutions Network (SDSN) and the Bertelsmann Stiftung foundation began to prepare the SDG Index & Dashboards Report in 2016, which benchmarks the performance of countries regarding SDGs, so that countries can assess their position on compliance with SDGs and set priorities for early action. The overall SDG Index score and goal scores indicate the level of achievement of sustainability.
The 2030 Agenda for Sustainable Development designated 2017 as the International Year of Sustainable Tourism and proposed SDGs. This proposal intended to promote changes in policies, business practices, and consumer behaviors to make tourism more sustainable, thus enabling the achievement of SDGs. This Agenda was an opportunity to create awareness among public and private sectors and society in general about the restrictions in economic growth and the importance of sustainable tourism to economic development, and increase the visibility of sustainable tourism in the business community.
Tourism's ability to promote social sustainability is reflected by the inclusion of tourism in three SDGs: promoting economic growth and employment (Goal 8), sustainable production and consumption (Goal 12), and marine conservation (Goal 14). Moreover, the cross-cultural nature of tourism and its multiplier effect in many other sectors allow it to contribute to all 17 goals.
We should ensure that all stakeholders work together, making tourism a catalyst for change, and respect nature, culture, and host communities during travel activities.
Sustainability is increasingly understood as a competitive advantage and a key factor of competitiveness in the tourism industry. Therefore, the analysis of the relationship between these two parameters is crucial.
Literature Review
Since 2007, the World Economic Forum (WEF) has formulated the Travel and Tourism Competitiveness Index (TTCI), which is published regularly in the Travel and Tourism Competitiveness Report and measures the competitiveness of the main tourist destinations worldwide. The objective of the TTCI is to evaluate factors and policies that make a destination attractive to international tourism. This index is increasingly used by researchers to assess TDC: Kendall and Gursoy [2] examined the relative positioning of eight Mediterranean destinations using a correspondence analysis technique; Gursoy, Baloglu and Chi [3] examined the relative positioning of ten Middle East destinations using a multi-dimensional scaling analysis; Kayar and Kozak [4] compared the competitiveness levels of EU countries with those of Turkey using cluster analysis and multidimensional scaling; and Leung and Baloglu used the same technique in sixteen Asia Pacific destinations [5].
Other researchers developed new tourism competitiveness indexes. For example, Gooroochurn and Sugiyarto [6] used data from the competitiveness monitor scale proposed by the World Travel and Tourism Council, which measures TDC through eight key indicators, applying confirmatory factor analysis in order to calculate an aggregate index; Croes [7] proposed a more accurate TDC index using the most important factors affecting the competitiveness of island destinations; Croes and Kubickova [8] proposed an alternative TCI, which they applied to the Central American region; some authors used data from the TTCI of the WEF for a global analysis of tourism competitiveness [9][10][11][12], while other authors used information sources pertaining to specific destinations [13][14][15]. In this regard, it is worth mentioning the work by Mendola and Volo [16], which offered the methodological foundations for building composite indicators in tourism and evaluated a set of currently available composite indicators.
It is also necessary to consider qualitative approaches to measuring TDC: some researchers have used data from surveys of tourists [17,18], and others have used data from interviews of tourism experts and stakeholders [19][20][21][22]. Boroomand, Kazemi and Rankbarian [23] developed a mixed quantitative and qualitative method to measure the tourism competitiveness of Iran. Other authors have focused their attention on analyzing different approaches to tourism competitiveness [24,25].
Goffi, Cucculelli and Masiero [39] tested whether sustainability influenced tourism destination competitiveness in developing countries, using Brazil as a case study; Pulido-Fernández, Andrades-Caldito and Sánchez-Rivero [41] demonstrated that progress in tourism sustainability neither affects a country's main economic tourism indicator in the short term nor constrains profitability and competitiveness; and Pulido-Fernández, Cárdenas-García and Espinosa-Pulido [44] used structural equation models to measure the possible relationships between tourism growth and environmental sustainability, showing that the expansion of tourism translates into environmental deterioration of the destination, while also substantiating that specific variables connected to environmental sustainability contribute to greater tourism growth, so the relationship between tourism and environmental sustainability is bidirectional.
These studies demonstrated the importance of the relationship between sustainability and TDC and concluded that the former could improve the latter, although the expansion of tourism may contribute to environmental degradation.
Analysing the relationship between TDC and sustainability can help identify the strengths and weaknesses of tourist destinations and highlight the major limitations of competitiveness as well as the pillars and sub-indexes that most contribute to the competitiveness of tourist destinations. Furthermore, this assessment provides policymakers and tourism managers with relevant information that helps them prioritize changes in the tourism industry.
Methods
The 2017 edition of the Travel and Tourism Competitiveness Report (TTCR) [45] measured the competitiveness of 136 countries using 90 indicators, including political, socioeconomic, structural, environmental, and cultural factors, among others. These indicators were grouped into 14 pillars, and the pillars were in turn grouped into four sub-indexes: Enabling Environment, T&T Policy and Enabling Conditions, Infrastructure, and Natural and Cultural Resources. The sub-indexes were aggregated to develop the TTCI. From a methodological perspective, at each aggregation level, the WEF computes the corresponding composite indicator as an unweighted average of the indicators, pillars, or sub-indexes of the level immediately below it.
Each sub-index was calculated as the unweighted average of its pillars:

$$S_d = \frac{1}{m_d}\sum_{j=1}^{m_d} P_{dj}, \qquad d = 1, \ldots, 4$$

where m_d is the number of pillars that form sub-index d of tourism competitiveness and P_{dj} is the score of the j-th pillar of that sub-index. Therefore, four sub-indexes are obtained. Furthermore, the overall TTCI for each country is calculated as the unweighted average of the four sub-indexes:

$$\mathrm{TTCI} = \frac{1}{n}\sum_{d=1}^{n} S_d$$

where n is the number of sub-indexes that compose the TTCI (here, n = 4).
Although pillars are not weighted explicitly, they are weighted implicitly, because not all sub-indexes contain the same number of pillars: pillars belonging to sub-indexes with fewer pillars carry a greater weight in the overall index. Thus, a country with high values in the fourth sub-index (natural and cultural resources), which contains two pillars instead of five as in the case of the first sub-index, is valued more positively under this grouping than if the 14 pillars were aggregated individually.
Therefore, it is important to be aware of the implicit weights that are being given and consider whether these are appropriate in accordance to the philosophy of tourism competitiveness.
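The implicit weighting can be made concrete with a small calculation. Under the two-stage unweighted averaging described above, with sub-indexes of 5, 4, 3 and 2 pillars (the 2017 grouping), a single pillar's weight in the overall TTCI is (1/4)·(1/m) for a sub-index containing m pillars. A minimal sketch:

```python
# Implicit pillar weights induced by the WEF's two-stage unweighted averaging,
# assuming the 2017 grouping of the 14 pillars into sub-indexes of 5, 4, 3 and 2 pillars.
subindex_sizes = [5, 4, 3, 2]

# Each sub-index weighs 1/4 in the TTCI, and each pillar weighs 1/m within its sub-index.
implicit_weight = {m: 1 / len(subindex_sizes) / m for m in subindex_sizes}

# A pillar in the 2-pillar sub-index weighs 0.125, one in the 5-pillar sub-index only 0.05,
# i.e. natural and cultural resources pillars count 2.5 times as much.
ratio = implicit_weight[2] / implicit_weight[5]

# Sanity check: the 14 pillar weights sum to 1.
total = sum(implicit_weight[m] * m for m in subindex_sizes)
```

Direct aggregation of the 14 pillars would instead give every pillar a weight of 1/14 ≈ 0.071, which is the comparison the text draws.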
In this sense we have developed a new synthetic indicator. It is based on a different standardization and aggregation of the pillars [9,46], which enables, on the one hand, further adjustment of the weighting, and on the other hand, evaluation of the state of all other countries in relation to each pillar.
The TDC of a country is determined by multiple criteria. Decision-making methods that simultaneously evaluate different strategies using several criteria constitute multicriteria approaches. The method proposed in this study for calculating the synthetic index uses a goal function proposed by Wierzbicki [47], based on the double reference point method. This function normalizes the objective functions (pillars) considering two reference levels for each pillar: an aspiration level (desirable value for a certain pillar) and a reservation level (below which values are not acceptable) [46]. This type of goal function normalizes all pillars within the range −1 to 2. The piecewise linearity of the goal function allows information to be extracted that could not be obtained through classical normalization (range between maximum and minimum). Given a country i and a pillar j, s_ij equal to −1 indicates that the pillar score is the minimum; s_ij equal to 0 represents the reservation level; s_ij equal to 1 corresponds to the aspiration level; and s_ij equal to 2 indicates the maximum score.
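One common form of such a piecewise-linear achievement function can be sketched as follows; this is an illustration consistent with the anchor points given in the text (minimum → −1, reservation → 0, aspiration → 1, maximum → 2), not necessarily the exact segment formulas used by the authors:

```python
def normalize(x, vmin, reservation, aspiration, vmax):
    """Double-reference-point achievement function: piecewise linear with
    vmin -> -1, reservation -> 0, aspiration -> 1, vmax -> 2."""
    if x <= reservation:
        return -1.0 + (x - vmin) / (reservation - vmin)
    if x <= aspiration:
        return (x - reservation) / (aspiration - reservation)
    return 1.0 + (x - aspiration) / (vmax - aspiration)

# A pillar scored on a 0-7 scale, with reservation 3.5 and aspiration 5.5:
s = normalize(4.5, vmin=0.0, reservation=3.5, aspiration=5.5, vmax=7.0)  # 0.5
```

Because each segment has a different slope, the same raw improvement is worth more below the reservation level than above the aspiration level, which is the information classical min-max normalization cannot express.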
The weak index is the arithmetic mean of the scores of the M pillars, the strong index is the minimum pillar score, and the intermediate index is the average of the weak and strong indexes [9]. The weak index measures aggregate competitiveness, allowing substitutability among pillars; the strong index does not allow substitutability among pillars and therefore measures the state of the pillar with the lowest score; and the series of intermediate indexes, composed of the two previous ones, measures tourism competitiveness for different degrees of substitutability.
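These three aggregations can be sketched as follows. The convex-combination form of the intermediate index is our assumption (with alpha the weight on the weak index); alpha = 0.5 reproduces the average of the weak and strong indexes described in the text:

```python
def weak_index(scores):
    """Aggregate competitiveness: full substitutability among pillars."""
    return sum(scores) / len(scores)

def strong_index(scores):
    """No substitutability: the worst-scoring pillar determines the index."""
    return min(scores)

def intermediate_index(scores, alpha=0.5):
    """Convex combination of weak and strong indexes (assumed form);
    alpha = 0.5 gives the average of the two."""
    return alpha * weak_index(scores) + (1 - alpha) * strong_index(scores)

# Hypothetical normalized pillar scores s_ij for one country:
pillars = [-1.0, 0.5, 2.0]
```

A country with one very weak pillar scores well on the weak index if its other pillars are strong, but is pinned to that weak pillar under the strong index, which is exactly the offsetting behavior the authors want to penalize.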
The authors subscribe to the philosophy of 'strong' competitiveness, according to which weaknesses in certain indicators cannot be offset by good results in others. In any case, analyzing all of these indexes (weak, strong, and intermediate) permits a more precise assessment of each country and of the factors explaining each situation.
Analysis of Tourism Competitiveness: Contributions of the New Indexes
Using the above methodology, the tourism competitiveness of 136 countries worldwide was evaluated considering the data published in the 2017 TTCR. The descriptive statistics for the 136 countries are shown in Appendix A: Table A1. The most strongly correlated pillar pairs were 'human resources and labor market' with 'ICT readiness', and 'ICT readiness' with 'tourist service infrastructure'. The pillars with the highest scores were 'safety and security' and 'health and hygiene', whereas 'cultural resources and business travel' and 'air transport infrastructure' had the lowest scores.
According to the WEF methodology, it is equally important to have a high score in any of the pillars with the same implicit weight. However, it is more difficult to obtain high scores for 'cultural resources and business travel' than for 'safety and security' because the average score of the former is lower (Table 1). Our methodology considers this fact: the aspiration values are higher for pillars with higher average scores, so countries that do not achieve high scores in these pillars will have a lower overall score.
In fact, if we observe Table 1, we can see that the pillars with the highest and lowest average values differ depending on whether we consider the WEF scores or the normalized scores of our proposed methodology; this is because our methodology normalizes using the aspiration and reservation values. The pillars with the highest and second-highest scores were 'safety and security' and 'tourist service infrastructure', respectively, whereas 'cultural resources and business travel' and 'international openness' were the pillars with the lowest and second-lowest scores, respectively. This result is relevant because pillars with high and low scores affect the tourism competitiveness synthetic index of different countries. Another advantage of the aspiration and reservation levels is that their use highlights the differences between the global and regional rankings, which allows us to identify distinctive characteristics among regions. This is because the aspiration and reservation levels may vary between the global and regional analyses, since they were obtained from statistical calculations and depend on the values of the group of countries considered.
Consequently, we calculated the tourism competitiveness synthetic indexes (I_d, I_f, I_m50, I_m60, I_m70, I_m80 and I_m90) and the TTCI globally and for each of the five regions considered by the UNWTO (the Middle East, Europe, Asia and the Pacific, the Americas, and Africa). For each region, we compared the global and regional rankings in each index. While the position of the countries in the WEF rankings did not vary, in ours it did. For illustrative purposes, only the case of the Middle East is shown (Table 2). The data concerning the other regions are available upon request.
As we can see in Table 2, the regional and global rankings obtained according to the TTCI in Middle Eastern countries are the same, while the rankings obtained using the weak, intermediate, and strong indexes differ between the global and regional levels. For example, considering the weak index, Oman surpassed Bahrain at the regional level; although both countries improved at this level, Oman improved more than Bahrain (Appendix B: Table A2). This is primarily due to the 'natural resources' pillar (P13), which is one of the most important pillars. The aspiration and reservation levels of 'natural resources' are lower at the regional than at the global level owing to the poor situation of the group of countries in this pillar, and Oman managed to exceed the aspiration level. In addition, the United Arab Emirates remained in the first position in the four global rankings. However, at the regional level, this country ranked second when we used the strong and intermediate indexes, being surpassed by Oman. This result is due to the 'price competitiveness' pillar (P8), whose aspiration and reservation levels were higher owing to the good level of this region in this pillar. The score for the United Arab Emirates was 5.023, which was higher than the global reservation level (4.394). Nonetheless, the reservation level in the Middle East was 5.197 because the scores of all countries for this pillar were high, so the score of the United Arab Emirates was poor compared with the rest. Therefore, the score for the United Arab Emirates was below the reservation value, whereas the score for Oman was above the aspiration level. Moreover, the score for 'environmental sustainability' (P9) was higher for Oman because the score of the Middle Eastern region was lower for this pillar.
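The mechanism behind this rank change can be illustrated numerically. Using the reservation levels cited in the text (4.394 globally, 5.197 regionally) and illustrative, assumed values for the minimum, aspiration and maximum, the same United Arab Emirates score of 5.023 normalizes to a positive value against the global reference levels but a negative one against the regional levels:

```python
def normalize(x, vmin, reservation, aspiration, vmax):
    # Piecewise-linear achievement function: vmin -> -1, reservation -> 0,
    # aspiration -> 1, vmax -> 2 (a sketch of the double reference point scaling).
    if x <= reservation:
        return -1.0 + (x - vmin) / (reservation - vmin)
    if x <= aspiration:
        return (x - reservation) / (aspiration - reservation)
    return 1.0 + (x - aspiration) / (vmax - aspiration)

uae_price = 5.023  # UAE 'price competitiveness' score, as reported in the text

# Reservation levels are taken from the text; vmin, aspiration and vmax here are
# illustrative assumptions, not values from the study.
s_global = normalize(uae_price, vmin=2.0, reservation=4.394, aspiration=5.8, vmax=6.5)
s_regional = normalize(uae_price, vmin=4.5, reservation=5.197, aspiration=5.8, vmax=6.5)
```

The sign flip (above the global reservation level, below the regional one) is what pushes the United Arab Emirates behind Oman in the regional strong and intermediate rankings.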
The proposed methodology allows differences between regions to be assessed. The analyzed regions had different characteristics, which could lead to different weights, as performed by WEF using the Global Competitiveness Index (GCI), which distributed weights according to the stage of development of each country based on the gross domestic product (GDP) per capita. These weights were justified because, although all the pillars are critical for the economies of these countries, some pillars are more relevant than others, depending on the level of development of each country. For this reason, higher weights can be attributed to pillars that are more relevant to an economy given its particular level of development [48].
Similarly, the pillars of tourism competitiveness affect countries in different ways according to their stage of development. For this reason, the strategy adopted to improve the tourism competitiveness of a developed country is different from that used for a developing country.
It should be noted that the GCI weights assigned by the WEF to the countries that were in the transition stages changed gradually as these countries developed. This approach ensures that higher weights are attributed to regions with higher potential for competitiveness and penalizes countries with less economic development [49].
That methodology was applied until 2017. The current GCI assigns the same weight to the 12 pillars, considering the complexity of prioritizing policies in the current context. This differs significantly from the previous methodology, which assigned greater weight to the enabling conditions in low-income countries and a higher weight to innovation and sophistication in developed countries, to indicate that the competitiveness strategies adopted by each country should vary according to the level of development. Under this premise, the WEF seeks to provide the same starting conditions and encourage the comprehensive development of all countries [50].
The reasoning is that, as the Fourth Industrial Revolution progresses, all competitiveness factors will have a similar impact on the competitiveness of countries, regardless of their income levels. Automation will likely reduce the possibility of development in countries that depend on low labor costs in manufacturing. In this respect, Rodrik [51] found that economic growth in many developing countries was determined by services, whereas recently industrialized countries began to deindustrialize much earlier than Western countries. At the same time, information and communication technologies are reducing information barriers and allowing the rapid transfer of concepts, technologies, and products worldwide, opening new opportunities for developing economies. Based on these data, the current GCI is less prescriptive about the path to prosperity, rewarding countries that advance and penalizing those that neglect any aspect of competitiveness, regardless of their stage of development.
We followed the same strategy and used similar weights for countries at different stages of development. We are also in line with the idea of penalizing countries that neglect aspects of tourism competitiveness. In this respect, the strong index penalizes countries that do not achieve reservation values in important pillars, so countries with low scores in essential pillars, which required the most urgent solutions, can be identified. The proposed methodology exposes the countries that offset low scores with high scores on other pillars.
Relationship between Tourism Competitiveness and Other Tourism Indicators
We analyzed the relationship between tourism competitiveness and other tourism indicators to assess possible relationships between the most competitive and the most attractive countries for tourism. We performed a ranking analysis based on different variables, including 'international tourist arrivals', 'international tourism receipts' and 'average spending per international tourist', provided by the UNWTO; and 'T&T industry GDP', 'T&T industry employment', 'T&T industry share of GDP', and 'T&T industry share of employment', provided by the World Travel & Tourism Council, Tourism Satellite Account Research. Tourism competitiveness tended to improve as 'T&T industry GDP' or 'international tourist arrivals' increased (Appendix C: Figures A1 and A2).
The TTCI was analyzed in greater detail using the tourism GDP of the evaluated countries, which were classified into four categories according to their tourism revenues. To determine significant differences, the TTCI scores and pillars of competitiveness for each of these groups of countries were analyzed (Appendix C: Figures A3 and A4). The pillar scores increased as the tourism revenues of the countries increased. This trend was more pronounced for 'natural and cultural resources' and less pronounced for 'T&T policy and conditions'. The 'enabling environment' sub-index had the highest average scores at all levels and similar scores among all countries; however, its scores were slightly higher in medium-high-income countries than in high-income countries.
The pillars with the highest variability at all levels were 'air transport infrastructure' (P10) and 'cultural resources and business travel' (P14), whereas those with the lowest variability at all levels were 'business environment' (P1), 'safety and security' (P2), 'price competitiveness' (P8), and 'environmental sustainability' (P9). The scores of most pillars increased as tourism revenues increased, except for the pillars considered more homogeneous and 'human resources and labor market' (P4). The pillars with the lowest scores were 'international openness' (P7) (at all levels), 'air transport infrastructure', (P10) 'ground and port infrastructure', (P11) and 'natural resources' (P13) for low and medium-low-income countries and 'cultural resources and business travel' (P14) for all countries except for high-income countries.
However, according to the results and the strategy followed by the WEF, we do not consider it appropriate to allocate weights based on the characteristics of the tourist magnitudes of the countries.
Analysis of Sustainability: Relevance to Tourism Competitiveness
The SDG Index measures the sustainability of different countries; its global score indicates the level of achievement of sustainability. The countries with the lowest and highest scores in the SDG Index 2017 (Appendix D: Tables A3 and A4) were located in Africa and Europe, respectively. Therefore, sustainability was analyzed by region to assess possible patterns (Appendix D: Figure A5).
The analysis of SDGs indicated that the scores for Europe were usually higher than those of other regions in the SDG index and in almost all the SDGs, except for SDG12 (scores were lowest), SDG13 (scores were lower than in the Americas), SDG16 (scores were similar to those of Middle Eastern countries), and SDG17 (scores were slightly higher than in Asia-Pacific countries). Africa had the lowest overall score and the lowest score for most SDGs, except for SDG5 and SDG11 (higher than in Middle Eastern countries), SDG10 and SDG16 (scores were higher than in the Americas), SDG13 (score was higher than in Asian and Middle Eastern countries), SDG12 (score was higher than in all other regions), and SDG15 and SDG17.
The SDGs with the highest scores were SDG1, SDG3, SDG4, and SDG6, and those with the lowest scores were SDG9 and SDG14. The SDGs with the highest inter-region variability were SDG1, SDG3, SDG5, SDG6, and SDG9, and those with the lowest variability were SDG2 and SDG14.
Several studies have found a strong relationship between sustainability and tourism competitiveness. Therefore, this relationship was analyzed in the present paper using the obtained tourism competitiveness indexes (I_d, I_f, I_m50, I_m60, I_m70, I_m80, I_m90 and TTCI) and the SDG Index, at both the global and regional levels. First, we carried out the global analysis.
First, we assessed the relationship between the SDGs and the tourism competitiveness indexes and found that the correlation between these indexes and SDGs was significant, except for SDG15 and SDG17.
The relationship between the SDG Index and the tourism competitiveness indexes showed a stronger correlation of the TTCI and the weak index with the SDG Index than of the strong index (Figure 1). However, an independent analysis by region and global rankings yielded different results (Figures 2-6).
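The correlation analysis described above can be sketched in pure Python. The country scores below are hypothetical placeholders, not data from the study; they simply reproduce the qualitative pattern that the weak index tracks the SDG Index more closely than the strong index:

```python
def pearson(x, y):
    """Pearson correlation coefficient between two equally long score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical SDG Index and competitiveness scores for five countries:
sdg    = [50.1, 62.3, 70.8, 75.2, 81.0]
weak   = [3.1, 3.6, 4.0, 4.4, 4.9]   # weak index (substitutability allowed)
strong = [0.2, 0.1, 0.9, 0.5, 1.4]   # strong index (worst pillar only, noisier)

r_weak, r_strong = pearson(sdg, weak), pearson(sdg, strong)
```

Because the strong index depends only on the worst pillar, its values are noisier across countries, which in this toy data (as in the global analysis of the paper) lowers its correlation with the SDG Index relative to the weak index.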
Middle Eastern countries were at the bottom of the sustainability ranking and had intermediate tourism competitiveness values, although, as the substitutability among pillars was reduced (approaching the strong index), countries lost positions in tourism competitiveness (from quadrant 2 to quadrant 3), whereas the sustainability ranking remained constant because our methodology was not applied to the SDG Index (Figure 2). Sustainability increased as tourism competitiveness increased.
All European countries, except for Bosnia and Herzegovina, had a high sustainability ranking. With respect to tourism competitiveness, countries had scores above and below the average, and lost competitiveness as the substitutability among pillars was reduced. There was a growing trend between these two variables.
In American countries, there was a growing trend between tourism competitiveness and sustainability (Figure 4). Most countries were located between quadrant 1 (high ranking) and quadrant 3 (low ranking).
Most African countries presented a low ranking for sustainability and competitiveness, and sustainability tended to increase as tourism competitiveness increased. It is worth noting that, by applying the strong index instead of the TTCI, African countries increased their ranking, because the former penalized countries that achieved high TTCI values by offsetting among pillars with high and low scores.
In Asia-Pacific countries, sustainability tended to increase as tourism competitiveness increased. Most countries, as in the Americas, were located between quadrants 1 and 3. It should be emphasized that countries with a low overall sustainability ranking had higher tourism competitiveness when we applied the strong index, whereas countries with a high overall ranking had lower tourism competitiveness.
A different behaviour has been observed in each region, which can be collected in a more complete way with the tourist competitiveness synthetic indexes (I d , I f , I m50 , I m60 , I m70 , I m80 and I m90 ), as we already demonstrated in Section 4, Table 2; these differences were due to reservation and aspiration levels. Therefore, we have performed a regional analysis with the regional rankings. As in Section 4, in order not to saturate the reader, we show by way of example the case of the Middle East ( Figure 7). Data may be required for the other regions. Figure 7 shows that at the regional level there was a stronger relationship between sustainability and tourism competitiveness when we used the strong index, that is, as pillar substitutability is Figure 7 shows that at the regional level there was a stronger relationship between sustainability and tourism competitiveness when we used the strong index, that is, as pillar substitutability is restricted there is a greater correlation between these magnitudes. This fact happens because the countries that improve positions using a strong index have a more homogeneous tourist competitiveness and that benefits sustainability to a greater extent. Having only one pillar with a very good level and the rest of the pillars with a very poor score, implies negative consequences for sustainability because the pillars that have a bad score can be harmful to sustainability.
Conclusions
This study has analyzed the relationship between tourism competitiveness and sustainability worldwide. Different synthetic indexes of tourism competitiveness were developed based on the data published in the TTCR for 2017 using a multicriteria double reference point method; these indexes address some deficiencies of the TTCI. With the current WEF methodology, the results of the TTCI may lead to misleading conclusions, because some countries may obtain excellent results only because certain pillars offset the bad results of other pillars, which can also have negative effects on sustainability. The methodology proposed in this study avoids this problem, since a country will remain in the last positions of the ranking as long as the poor results of its indicators do not improve. Sustainability was evaluated using the Sustainable Development Goals Index (SDG Index) developed by the SDSN and the Bertelsmann Stiftung. Moreover, a joint analysis of these two magnitudes was conducted at the global and regional levels. The proposed method allowed differences between geographical regions to be estimated and different regional rankings to be elaborated.
The analysis of tourism competitiveness for each of the five regions considered by the UNWTO (the Middle East, Europe, Asia and the Pacific, the Americas, and Africa) indicated that the position occupied by each country at the regional level differed from its position in the overall ranking. This is because the reservation and aspiration values varied, as they were obtained from statistical calculations based on the countries' scores, indicating differences between regions and the need to perform evaluations by region.
Determining aspiration and reservation levels statistically (according to the country values) offers us a relative ranking; if reference levels were instead determined by a panel of experts, we would not only obtain a ranking of countries, but it would also be possible to identify which countries are competitive and which are not. This would let us measure competitiveness not in relative terms, but in relation to the reservation value. This is a great challenge, since countries have very different characteristics and these levels would require agreement among experts, as governments seek to improve their positions in the ranking and use these data to generate national policies.
In addition, the analysis of sustainability revealed that African countries occupied the lowest rankings, whereas European countries had the highest rankings. Therefore, sustainability was also analyzed by region to assess possible patterns. The assessment of SDGs indicated that European countries usually presented the highest overall scores and the highest scores for most SDGs, except for SDG 12 (ensure sustainable consumption and production patterns), for which their score was lower than in all other regions. In contrast, African countries presented the lowest overall score and the lowest score for most SDGs, except for SDG 12, for which their score was higher than in all other regions. This shows that the more developed countries should pay more attention to improving this SDG.
The relationship between sustainability and tourism competitiveness was analyzed. There was a positive and significant correlation between all tourism competitiveness indexes and the SDG Index and most of the SDGs. Globally, there was a higher correlation of the TTCI and the weak index (I_d) with the SDG Index than between the strong index and the SDG Index. However, an independent analysis by region and global rankings yielded different results. In some regions (Europe and the Middle East), countries lost tourism competitiveness as the substitutability among pillars was reduced (by approaching the strong index), while Africa gained tourism competitiveness; this is because the strong index penalized the countries that achieved high tourism competitiveness scores by offsetting between pillars with high and low scores. It is worth highlighting that Asia-Pacific countries with a low overall sustainability ranking gained tourism competitiveness when the strong index was applied, whereas countries with a high overall ranking lost tourism competitiveness.
Furthermore, regarding the relationship with sustainability, at the regional level there was a greater correlation between sustainability and tourism competitiveness as pillar substitutability was restricted. This happens because countries that improve their ranking position under the strong index have a more homogeneous tourism competitiveness, which benefits sustainability to a greater extent.
These analyses allow managers and politicians to design specific strategies to improve areas that need attention. In this sense, the strong index can be considered a complementary index that avoids substitutability among pillars with high and low scores. Consequently, this index identifies countries with low rankings in critical pillars, which require the most urgent solutions, allowing all stakeholders to work jointly to improve the competitiveness of the tourism industry in their national economies, thereby contributing to national growth and prosperity. Having only one pillar with a very good level and the rest of the pillars with very poor scores also has negative consequences for sustainability, because the pillars with a bad score will be harmful to sustainability.

Funding: This research received no external funding.
"year": 2019,
"sha1": "7b0ece75e3983f6d0d0bc4524df39bd4c19516d2",
"oa_license": "CCBY",
"oa_url": "https://res.mdpi.com/d_attachment/sustainability/sustainability-12-00051/article_deploy/sustainability-12-00051.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "1a9bde601564cac521484c78ce159b8483b14c03",
"s2fieldsofstudy": [
"Environmental Science",
"Economics"
],
"extfieldsofstudy": [
"Business"
]
} |
Runs of Homozygosity Revealed Reproductive Traits of Hu Sheep
Hu sheep, a famous breed from the Taihu Basin, has the advantages of non-seasonal estrus, multiple fetuses, coarse feeding tolerance, and suitability for house feeding. Runs of homozygosity (ROHs) have been found to be an effective tool to detect animal population structure and economic traits. The detection of ROHs is beneficial for reducing the incidence of inbreeding as well as identifying harmful variants in the genome. However, there is a lack of systematic studies of ROHs in ruminants. Here, we sequenced 108 Hu sheep, detected ROHs to calculate their inbreeding coefficients, and selected genes within the ROH islands which are relevant to agriculturally important economic characteristics. Then, we compared the characteristics of the occurrence of SNPs between Hu sheep and other sheep breeds, and also investigated the distribution of SNP frequencies within specific gene regions of Hu sheep to select breed-specific genes. Furthermore, we performed a comparative genome and transcriptome analysis in humans and sheep breeds to identify important reproduction-related genes. In this way, we found some significant SNPs and mapped them to a set of interesting candidate genes related to the productive value of livestock (FGF9, BMPR1B, EFNB3, MICU2, GFRA3), health characteristics (LGSN, EPHA5, ALOX15B), and breed specificity (FGF9, SAP18, MICU2). These results describe various production traits of Hu sheep from a genetic perspective and provide insights into the genetic management and a complementary understanding of Hu sheep.
Introduction
Sheep (Ovis aries), as an important livestock species, have been domesticated for thousands of years. Since the Neolithic, artificial selection has led to sheep population diversity [1]. The Hu sheep is a famous sheep breed from the Taihu Plain in China, which has the advantages of high prolificacy, year-round estrus, good lactation performance, and fast growth [2]. Its reproductive ability is an economically valuable and high-profile trait, and it is one of the most prolific sheep breeds in the world. Generally, ewes produce lambs twice a year, and litter sizes are usually 2-3, occasionally 7-8 [3]. Thus, knowledge of the genes involved in the ovulation rate and litter size will provide meaningful information for sheep breeding. In addition, the Hu sheep's lactation performance is excellent, and there has been some research on the nutritional value of Hu sheep milk. Chen et al. [4] reported that Hu sheep milk contains unique lactic acid bacteria which have the advantage of direct vertical transmission and deserve further study. However, in recent years, most reports of microbes in milk have concerned human or bovine milk, with few reports about other non-traditional sources of milk. Single-nucleotide polymorphisms (SNPs) are considered a kind of genetic marker existing widely in human and animal genomes, and regions with high homozygosity can be found effectively with intensive SNP detection [5]. ROHs, as continuous homozygous segments, are common in human and animal populations. ROH segments can be inherited in a population and provide information about the demographic evolution of that population. Therefore, ROHs are used as a tool to characterize the degree of inbreeding depression within a population and to identify candidate genes related to economically valuable traits. In summary, ROHs have various advantages in the detection of functional productive genes and in guiding livestock production.
In recent research, ROHs have also been found in studies of livestock species. However, most of the studies on ruminants were concerned with cattle, and the research on ROHs needs to become more systematic and focused on more livestock breeds [6].
Therefore, the present study aimed to detect ROH patterns in Hu sheep populations, assess the degree of inbreeding in Hu sheep, and select candidate genes within ROH islands related to breed-specific traits of Hu sheep. We investigated ROH islands related to characteristic traits of Hu sheep using sequencing data obtained by the Tn5-based method, which is a highly accurate, low-cost, and time-efficient low-coverage sequencing method [7]. We then explored the function of crucial Hu sheep genes by comparing them with other breeds. At the same time, we also used the results of human GWAS and TWAS to enrich the candidate genes, to further explore the expression of these genes in specific tissues and their association with disease development. Furthermore, we compared inbreeding coefficients calculated by different methods. Our study will provide fundamental evidence for Hu sheep breeding and complement present research on ROH detection in Hu sheep.
Data
In this research, a total of 108 Hu sheep (27 rams and 81 ewes) were analyzed. DNA samples were extracted from the sheep's peripheral blood at the Shanghai Hu Sheep Conservation Farm, China. The protocol that we used for each sheep sequencing library was based on Tn5 [7]. Tn5 library generation was implemented with TruePrep Tagment Enzyme Mix (TTE Mix), which contains transposase and two equimolar adaptors. To achieve DNA fragmentation and end splicing, 10 µL of 5×TTBL (TruePrep Tagment Buffer L), 50 ng of DNA, 5 µL of TTE Mix V50 (TruePrep Tagment Enzyme), and ddH2O were mixed to a total reaction volume of 50 µL and incubated at 55 °C for 10 min. Then, 24 µL of the fragmented product was amplified by mixing it with 10 µL of 5×TAB (TruePrep Amplify Buffer), 5 µL of PPM (PCR Primer Mix), 1 µL of TAE (TruePrep Amplify Enzyme), 5 µL of N5XX, and 5 µL of N7XX (two index primers). The PCR program was as follows: 9 min at 72 °C, then 9 cycles of 30 s at 98 °C and 30 s at 63 °C. Size selection and purification were performed using VAHTS™ DNA Clean Beads. The purified and fragmented products obtained by the above steps constitute a library that can be sequenced. The libraries were sequenced on the Illumina 3000 platform. Whole-genome re-sequencing (WGS) data of sheep breeds (n = 248) were downloaded from the NCBI BioProject database under accession PRJNA624020 (mean 25.15×/sample). These sheep data included 16 Asiatic mouflon, 172 landraces, and 60 improved sheep from Asia, the Middle East, Africa, and Europe.
Raw data from re-sequencing and reduced sequencing were filtered using fastp software with default parameters. After filtering, the remaining reads were aligned to the sheep reference genome using the BWA tool [8]. Oar_rambouillet_v1.0 [9] was used as the reference genome for SNP calling in the reduced sequencing data and the re-sequencing data, as it can detect higher SNP numbers than the method based on Oar v.4.0 [1]. SAM files were sorted with SAMtools software [10] with default parameters. Then, GATK4 software [11] was used for SNP calling for each individual. Population imputation was implemented using BEAGLE 5.2. We then used bcftools-1.8 software [12] to remove sites with a dosage R-squared lower than 0.9, with the command "bcftools filter -i 'DR2 > 0.9'". After that, PLINK v1.9 software [13] was adopted for quality control, with the command "plink --geno 0.1 --maf 0.05". The remaining SNPs were used for further ROH analysis.
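The processing chain above can be sketched as an ordered set of shell invocations composed in Python. The sample/file names and the fastp, BWA, GATK, and BEAGLE arguments below are illustrative placeholders, not taken from the study; only the bcftools and PLINK filters are the ones quoted in the text (the PLINK flags in double-dash form).

```python
# Hypothetical per-cohort pipeline sketch; all paths and sample names are placeholders.
SAMPLE, REF = "hu_sheep_001", "Oar_rambouillet_v1.0.fa"

pipeline = [
    # 1) read trimming/QC with fastp (default parameters, as in the text)
    f"fastp -i {SAMPLE}_R1.fq.gz -I {SAMPLE}_R2.fq.gz "
    f"-o {SAMPLE}_R1.clean.fq.gz -O {SAMPLE}_R2.clean.fq.gz",
    # 2) alignment with BWA, then coordinate sorting with SAMtools
    f"bwa mem {REF} {SAMPLE}_R1.clean.fq.gz {SAMPLE}_R2.clean.fq.gz "
    f"| samtools sort -o {SAMPLE}.sorted.bam",
    # 3) per-individual SNP calling with GATK4
    f"gatk HaplotypeCaller -R {REF} -I {SAMPLE}.sorted.bam -O {SAMPLE}.g.vcf.gz -ERC GVCF",
    # 4) population imputation with BEAGLE 5.2 (illustrative invocation)
    "java -jar beagle.jar gt=cohort.vcf.gz out=cohort.imputed",
    # 5) keep only well-imputed sites (dosage R-squared > 0.9), as quoted in the text
    "bcftools filter -i 'DR2 > 0.9' cohort.imputed.vcf.gz -O z -o cohort.dr2.vcf.gz",
    # 6) variant QC with PLINK v1.9 (filters quoted in the text)
    "plink --vcf cohort.dr2.vcf.gz --geno 0.1 --maf 0.05 --make-bed --out cohort.qc",
]

for step in pipeline:
    print(step)
```

In practice, steps 1-3 would be looped over all 108 samples before the cohort-level steps 4-6.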
Definition of ROHs
PLINK v1.9 [13] was used for ROH detection, which uses a sliding window with a default number of SNPs to find the ROH regions within the sheep's genome. To eliminate LD effects which were created by short homozygous segments, we set each ROH length threshold to 1 Mb. Specific parameter settings are based on the following: (1) each sliding window should contain 50 SNPs; (2) each ROH should have a sequence of more than a hundred consecutive SNPs; (3) the minimum density of each ROH segment was set to one SNP per 50 kb; (4) each ROH was allowed to contain up to five SNPs with missing genotypes and one SNP with heterozygous genotype due to genotyping error [14].
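As a rough sketch, criteria (1)-(4) and the 1 Mb length threshold correspond to PLINK v1.9's `--homozyg` option family as follows. The flag names are PLINK's; the input/output prefixes are hypothetical, and this mapping is our reading of the stated parameters rather than the authors' exact command.

```python
# Map the stated ROH criteria onto PLINK v1.9 --homozyg flags; prefixes are placeholders.
roh_params = {
    "--homozyg-window-snp": 50,     # (1) 50 SNPs per sliding window
    "--homozyg-snp": 100,           # (2) >=100 consecutive SNPs per ROH
    "--homozyg-density": 50,        # (3) at least one SNP per 50 kb
    "--homozyg-window-missing": 5,  # (4) up to five missing calls per window
    "--homozyg-window-het": 1,      # (4) one heterozygous call tolerated
    "--homozyg-kb": 1000,           # minimum ROH length of 1 Mb
}
cmd = "plink --bfile cohort.qc --homozyg " + " ".join(
    f"{flag} {value}" for flag, value in roh_params.items()
) + " --out cohort.roh"
print(cmd)
```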
To reduce the occurrence of ROHs by chance, the minimum number of SNPs within a ROH, l, was calculated using the formula below [15]:

l = ln(α / (n_s × n_i)) / ln(1 − het),

where α is the false-positive rate of ROH (set to 0.05), n_s is the number of SNPs within the autosomes of an individual, n_i is the total number of individuals within population i, and het is the mean heterozygosity of the total SNPs within population i. The minimum ROH length was set as 1 Mb. ROH segments were categorized based on their physical length into 1-5 Mb, 5-10 Mb, and ≥10 Mb, and identified as ROH_1-5Mb, ROH_5-10Mb, and ROH_>10Mb [5].
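Assuming the cited threshold formula takes its usual form, l = ln(α/(n_s · n_i)) / ln(1 − het), the minimum SNP count can be computed as below. The n_s and n_i values are taken from this study's data (14.18 million SNPs, 108 animals), but het = 0.30 is an assumed illustrative value, so the printed threshold is illustrative as well.

```python
import math

def min_roh_snps(alpha: float, n_snps: int, n_ind: int, het: float) -> int:
    """Minimum number of consecutive homozygous SNPs for a ROH to be
    unlikely to arise by chance at false-positive rate `alpha`."""
    return math.ceil(math.log(alpha / (n_snps * n_ind)) / math.log(1.0 - het))

# het = 0.30 is an assumed value for illustration only.
print(min_roh_snps(alpha=0.05, n_snps=14_180_000, n_ind=108, het=0.30))  # -> 68
```

The threshold shrinks as mean heterozygosity grows, since long homozygous runs become less likely by chance.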
To explore the characteristics of Hu sheep, ROH detection was carried out both on the reduced sequencing data of Hu sheep and on the deep re-sequencing data of other breeds. Six sheep breeds (n = 75) were selected from the deep re-sequencing data via the iSheep website (https://ngdc.cncb.ac.cn/isheep/, accessed on 16 May 2022): Sishui Fur sheep, Tan sheep, Altay sheep, Bashibai sheep, Chinese Merino, and Duolang sheep. These sheep samples were all from China, with little geographical difference from Hu sheep, and their litter size (≤1 lamb per year) is significantly lower than that of Hu sheep.
Calculation of Inbreeding Coefficients
The genomic inbreeding coefficients for each individual were based on the ROHs (F_ROH) in our study, and the equation that we used was as follows:

F_ROH = L_ROH / L_aut,

where L_ROH is the total length of all ROHs in the genome of an individual and L_aut is the total length of the autosomal genome covered by the SNPs (2.36 Gb). F_SNP1, F_SNP2, and F_SNP3 were calculated using "-ibd" in the GCTA software. These three parameters were estimated based on the information of SNPs [16,17].
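The F_ROH definition reduces to a ratio of lengths. A minimal sketch, using made-up segment lengths, the 2.36 Gb SNP-covered autosomal length stated above, and the length bins used in this study:

```python
def f_roh(roh_lengths_mb, autosome_length_mb=2360.0):
    """Genomic inbreeding coefficient: total ROH length / covered autosomal length."""
    return sum(roh_lengths_mb) / autosome_length_mb

def classify(roh_lengths_mb):
    """Bin ROH segments by physical length, as in the study (1-5, 5-10, >=10 Mb)."""
    bins = {"ROH_1-5Mb": 0, "ROH_5-10Mb": 0, "ROH_>10Mb": 0}
    for length in roh_lengths_mb:
        if 1.0 <= length < 5.0:
            bins["ROH_1-5Mb"] += 1
        elif length < 10.0:
            bins["ROH_5-10Mb"] += 1
        else:
            bins["ROH_>10Mb"] += 1
    return bins

# Hypothetical individual with three ROH segments (lengths in Mb):
segments = [2.0, 6.0, 12.0]
print(f_roh(segments))     # 20 / 2360, i.e. about 0.0085
print(classify(segments))  # one segment per bin
```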
Candidate Gene Annotation within ROH Islands
We used the frequency of each individual SNP in a ROH region to identify ROH islands. The SNPs with frequencies in the top 0.5% were identified as the candidate SNPs. The adjoining candidate SNPs form a region which is called a ROH island, and genes were further identified in these ROH islands [14]. We used an R package GALLO [18] to identify and annotate the genes in the ROH islands.
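The island-calling rule above (SNPs whose ROH frequency falls in the top 0.5%, with adjoining outlier SNPs merged into an island) can be sketched as follows. The positions and frequencies are toy values; the real analysis is run per chromosome over millions of SNPs.

```python
def roh_islands(freqs, top_fraction=0.005):
    """Return index ranges (start, end) of ROH islands: runs of adjacent SNPs
    whose ROH frequency falls in the top `top_fraction` of the distribution."""
    ranked = sorted(freqs, reverse=True)
    n_top = max(1, int(len(freqs) * top_fraction))
    threshold = ranked[n_top - 1]
    islands, start = [], None
    for i, f in enumerate(freqs):
        if f >= threshold:
            if start is None:
                start = i
        elif start is not None:
            islands.append((start, i - 1))
            start = None
    if start is not None:
        islands.append((start, len(freqs) - 1))
    return islands

# Toy example: 1000 SNPs with background ROH frequency 0.1 and a run of 6 hot SNPs.
freqs = [0.1] * 1000
for i in range(500, 506):
    freqs[i] = 0.9
print(roh_islands(freqs))  # -> [(500, 505)]
```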
To identify the functions of candidate genes, KOBAS 3.0 (available online: http://bioinfo.org/kobas, accessed on 13 April 2022), a website for the enrichment of functional gene-related pathways and diseases, was used to perform the Kyoto Encyclopedia of Genes and Genomes (KEGG) and Gene Ontology (GO) enrichment. The annotation of genes within the ROH islands was based on the Animal Quantitative Trait Loci (QTL) Database [19].
In addition, to further explore the potential roles of the candidate genes within ROH islands, we collected genome-wide association study (GWAS) summary statistics and transcriptome-wide association study (TWAS) research for related traits in humans to annotate candidate genes with potential relevance to human disease. The results of GWAS and TWAS were collected from the webTWAS database [20].
DNA Sequencing and Genetic Diversity
Through the Tn5-based low-coverage sequencing method, a total of 1.36 billion clean reads were obtained, with a mean of 12.63 million clean reads (0.29% mean sequencing coverage) per sheep (Figure 1 and Table S1). After imputation and filtering of invalid data, we obtained 14.18 million whole-genome single-nucleotide polymorphisms in 108 Hu sheep at a mean depth of 22.39×. A total of 6.55 million (55.54%) SNPs were found in gene regions, of which 48,657 SNPs were in exon regions and 7668 SNPs in untranslated regions. Exons made up only 0.7% of the SNPs in gene regions, which could be due to incomplete annotation information and high-depth sequencing after imputation.
Statistics of Inbreeding Coefficients
The inbreeding coefficients based on different methods are shown in Table 1. Among the three average inbreeding coefficient estimates based on different lengths of ROH segments, F_ROH1-5Mb is significantly larger than F_ROH>10Mb and F_ROH5-10Mb. The inbreeding coefficients obtained from the different physical lengths of the ROH fragments vary greatly; F_ROH>10Mb and F_ROH5-10Mb were much smaller than F_ROH1-5Mb (Table 2). The correlation among ROH-based methods was strong, and the highest correlation (0.965) was found between F_ROH-all and F_ROH1-5Mb. ROH segments of 1-5 Mb in length made up 95.49% of all ROH segments. The weakest correlation (0.649) among the ROH-based estimates was between F_ROH1-5Mb and F_ROH5-10Mb. Table 2. Correlation coefficients (lower panel) among seven types of inbreeding coefficient estimates (F_ROH-all, F_ROH1-5Mb, F_ROH5-10Mb, F_ROH>10Mb, F_SNP1, F_SNP2, and F_SNP3). The correlations of inbreeding coefficients obtained from the SNP information show some differentiation; F_SNP2 was weakly correlated with F_SNP1 and F_SNP3. Except for F_SNP2, the other F_SNP estimates had weak correlations with the ROH-based inbreeding coefficients.
Distribution of ROHs
Through ROH detection, 5904 ROH segments were found in the 108 Hu sheep (Figure 2). The longest segment (24.11 Mb) and the shortest segment (1 Mb) were both found on chromosome 3, containing 114,373 SNPs and 4466 SNPs, respectively. The statistics of ROH numbers and lengths, classified by the physical length of the ROH, are shown in Table 3.
In terms of physical length, the short ROH (1-5 Mb) segments made up the majority (95.49%) of the whole ROH length, and ROH segments longer than 10 Mb accounted for just 0.73% of the whole ROH length.
Gene Annotation
Through the method mentioned above, nineteen ROH islands were found with 59,268 SNPs and 142 genes. The position of each ROH island and the number of SNPs within each ROH island are shown in Table 2. The longest ROH island is in chromosome 10, which was found between 37,178,926 and 41,620,003 bp. This island contained 17 genes and may be the most relevant region for functional expression in Hu sheep.
We then annotated all genes within potential ROH islands, and found that the majority of them were correlated with some economically important traits, particularly a lot of reproductive traits (Table 4 and Figure 3A). The genes that we detected had connections with Hu sheep reproduction (FGF9, BMPR1B, EFNB3), milk (LGSN, OCA2, HERC2), meat (MICU2, HAO1, OCA2, SPATA5), and wool traits (GFRA3, CDC23, CDC25C). Then, based on the sheep QTL database, we found four QTLs within these ROH islands which were related to reproductive traits.
Through the function enrichment of candidate genes, we found that nineteen KEGG pathways and seven GO terms were significantly correlated with human disease processes or biosynthesis pathways, specifically related to breast disease (Table S3).
To further explore the role of reproduction-related genes in animal function and their relevance to disease development, we collected the results of GWAS summary statistics, and TWAS research for related traits in humans from the webTWAS database. Thirteen candidate genes associated with reproduction are shown in Table S4. These genes were mainly associated with human reproductive organs (testis, vagina) and vascular/heart problems. This result indicates that reproduction-related genes of Hu sheep express similar effects in humans and play an important role in human health.
Furthermore, to find breed-specific genes in Hu sheep, we selected sheep breeds with weak reproductive ability in the re-sequencing data and calculated the frequencies of their SNPs. By comparing the characteristics of the distribution of SNPs in Hu sheep with the other sheep, we found that these sheep shared 8804 SNPs and 19 genes with Hu sheep. After eliminating the common SNPs, we enriched the Hu sheep's unique genes and observed the distribution of the density of SNPs within the three reproduction-related gene regions with the highest frequency of ROHs (Figure 3B). These genes (FGF9, SAP18, MICU2) may be correlated with the specific high-profile reproductive traits of Hu sheep.
Pattern of ROHs
Through ROH detection, the lengths of the ROHs were mostly found to be within the region of 1-5 Mb. The number of large segments is much lower, especially the segments longer than 10 Mb. This indicates that the level of inbreeding among these Hu sheep is low and they are hardly influenced by recent inbreeding events. However, it is worth mentioning that ROH length is not only dependent on inbreeding events. There are reports claiming that the formation and evolution of ROHs is a random event due to the dynamic recombination and randomness in the process of gamete formation. Furthermore, decreased population size and bottleneck events can also affect the characteristics of short segments of less than 4 Mb [21].
The pattern of the ROHs in our study has specificity in different chromosomes, and the number of ROHs was affected by the chromosome's physical length. Nandolo et al. [22] also reported a similar result, and they found that ROHs tend to enrich on the specific chromosomes which have high levels of support for IBD accumulation.
In addition, the coverage of ROHs is also relevant to the functional genes of animals. Chr 6 and Chr 10 displayed the highest coverage of ROHs within the Hu sheep population and harbored the largest number of annotated candidate genes (Figure 2). This result could be due to artificial selection, which tends to increase the accumulation of ROHs in certain gene regions and allows the expression of relevant economic traits.
Inbreeding Level within the Population
The inbreeding coefficients were calculated based on ROH segments. F_ROH can be more efficient than a traditional, manually recorded pedigree because human-recorded information is prone to error. Furthermore, inbreeding coefficients based on ROHs and SNPs can reflect more information about realized homozygous sites than traditional pedigrees [23].
The values of the other genomic inbreeding coefficients (F_SNP1, F_SNP2, and F_SNP3), which were calculated using GCTA software, were not close to reality, and some of the coefficients were even higher than 1. This is because GCTA calculates inbreeding coefficients based on the genomic relatedness matrix, and the result may be affected by the population size and depth of sequencing. F_GCTA was not as accurate as F_ROH as a measure of genomic inbreeding [24]. Furthermore, reduced sequencing was used in this study, which only covered 0.3% of the whole genome. Through the sequencing and imputation method, some regions were incorrectly identified as homozygous segments, producing false-positive results.
Functional Enrichment Analyses
In the ROH islands which we identified, several candidate genes were annotated, mainly related to reproduction, milk production, meat, and other economic traits. We annotated a series of genes associated with the Hu sheep's high reproduction traits (Figure 3A). According to the sheep QTL database, some genes map to QTLs related to testes and lambs (ID = 12923, ID = 154661, ID = 154663, ID = 130456). These genes have been previously reported in other animals and are closely related to the expression of reproductive traits in many species. Previous studies have shown that embryo development is an important factor influencing reproduction. Zygotic genome activation initiates the expression of the parental genome, during which any misbehavior may terminate embryonic development. Mutations in GJB6 may lead to ectodermal dysplasia [25]. Through its interaction with Fam208a, MPHOSPH8 enables Fam208a to play a dual role during zygotic cleavage and early embryonic development [26]. It was reported that FGF9 can affect reproduction in several respects, including ovarian function [26], the development of sperm [27], the maintenance of pregnancy [28], and male sex development [29]. These results suggest that FGF9, GJB6, and MPHOSPH8 play a similar role in sheep and determine sheep reproductive ability by affecting ovarian and embryo development as well as sperm formation.
We also identified some genes which were reported as reproduction-related genes in sheep or other livestock. BMPR1B has been reported in many sheep breeds, and it was a major gene affecting litter size [30]. EFNB3 was regulated by two up-regulated lncRNAs and one down-regulated lncRNA, and these lncRNAs were involved in Hu sheep reproduction [31]. Feng et al. [32] found that SAP18 had significantly different expression in the early stage of chicken gonad development. ATP12A was related to the development of the trophectoderm in bovines [33].
By comparing with other sheep with poor reproductive ability, we found that Hu sheep and other sheep breeds shared EFNB3, GJB6, and SAP18. This phenomenon could confirm that these three genes are double-regulated in the expression of reproductive traits in animals [25,31,33]. In the sections with significant density differences, FGF9, SAP18, and MICU2 were screened, which may explain the unique trait of high prolificacy in Hu sheep.
We then performed a functional enrichment analysis for candidate genes with human GWAS and TWAS results. We found that LGSN can also be highly expressed in human tissues and may be associated with some human autoimmune diseases. We also noted that FGF9, BMPR1B, EFNB3, GJB6, and SAP18, which were identified as reproduction-related genes of Hu sheep, are highly expressed in human reproductive organs, such as the testes and ovaries. These results explain the functions of these candidate genes and suggest that they may express similar functions in humans.
Milk production is another notable trait of Hu sheep. In the KEGG pathway enrichment analyses, five candidate genes fall in breast disease pathways. LGSN, OCA2, and HERC2 have been reported to mediate resistance to breast cancer [34][35][36]. In addition, we found some candidate genes associated with livestock meat traits (MICU2, HAO1, OCA2, SPATA5) and some genes related to wool traits (GFRA3, CDC23, CDC25C). These results may help explain the good meat, milk, and wool production performance of Hu sheep.
Conclusions
In the present study, we sequenced Hu sheep and detected ROHs to calculate their inbreeding coefficient, and selected ROH islands containing candidate genes related to breed-specific traits of Hu sheep. We found that the Hu sheep population was not significantly affected by historical inbreeding events. We then identified the functional genes within the ROH islands and conducted enrichment analyses. The results showed that the vast majority of genes within the ROH islands were related to human disease and biologically relevant pathways. We also performed a comparative genome analysis across different sheep breeds and identified FGF9, SAP18, and MICU2 as possible breed-specific reproduction-related genes of Hu sheep. Furthermore, we used human GWAS and TWAS results to annotate candidate genes and examined the expression of these genes in organs, tissues, and disease development to identify important reproduction-related genes. We found that FGF9, SAP18, MICU2, BMPR1B, EFNB3, GJB6, and LATS2 are highly expressed in human reproductive organs and may play an important role in the reproductive function of sheep and humans. These findings may provide new insights into the characteristics of Hu sheep and inform new strategies for future genetic improvements in sheep.
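The genomic inbreeding coefficient derived from ROHs, as used above, is conventionally F_ROH: the fraction of the autosomal genome covered by ROH segments. A minimal sketch of that calculation follows; the segment lengths and the genome length below are illustrative assumptions, not values from this study:

```python
def f_roh(roh_segments_bp, genome_length_bp):
    """F_ROH = (total length of ROH segments) / (total autosomal genome length).

    roh_segments_bp: iterable of ROH segment lengths in base pairs.
    """
    return sum(roh_segments_bp) / genome_length_bp

# Illustrative values only: three hypothetical ROHs on an autosomal
# genome of ~2.45 Gb (roughly the span of the sheep autosomes).
segments = [1_200_000, 3_500_000, 850_000]
print(round(f_roh(segments, 2_450_000_000), 5))  # → 0.00227, a low F_ROH
```

A low F_ROH of this magnitude is consistent with the conclusion above that the population shows little impact from recent inbreeding; values are typically computed per individual and averaged across the population.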
Supplementary Materials:
The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/genes13101848/s1, Table S1: Quality of Tn5-based low-coverage sequencing among 108 Hu sheep; Table S2: List of 128 potential candidate genes within ROH islands in Hu sheep; Table S3: GO terms and KEGG pathways enriched (p < 0.05) based on ROH islands; Table S4: Candidate gene enrichment based on human GWAS and TWAS results.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy restrictions.
Factors influencing occupational therapy students' perceptions of rural and remote practice
Introduction: There is a serious shortage of health professionals in rural and remote areas in Australia and world wide. The purpose of this article was to add to existing information about allied health students, particularly occupational therapy students, and rural and remote practice by reviewing the literature on occupational therapy students’ perceptions of rural and remote practice. A variety of influencing factors were identified, as were the main characteristics of rural practice in relation to the future employment of students. The effect of undergraduate rural training programs on students’ perceptions was identified. Literature review: The shortage of rural and remote health practitioners is well documented. Because rural and remote practice is characterised by a diversity of healthcare needs, rural health professionals need a variety of knowledge and skills. This diversity may attract rural health professionals and encourages undergraduate students to consider rural and remote practice. A student’s rural background was reported to be one of the strongest factors in their decision to work rurally, and an undergraduate rural program is one useful strategy to overcome the rural health professional shortage. Undergraduate rural programs promote students’ positive perceptions of rural and remote practice by exposure to a rural location, and factors such as rural fieldwork experience and fieldwork supervisors are likely to be influential. Negative influential factors include a student’s desire to work as a ‘specialist’, and personal, social and professional factors, such as a lack of professional development opportunities in a rural setting.
Introduction
There is a serious shortage of health professionals in rural and remote areas in Australia and worldwide [1][2][3][4][5][6][7][8] . Because health professionals favour metropolitan areas as practice locations 9 , there are fewer health services available in rural and remote areas 10,11 , and this has been linked to the comparatively poorer health of people living in such areas 9 .
Strategies that have been applied to alleviate the shortage of rural and remote health professionals include favoured university enrolment of rural students, scholarships, financial incentives to undertake rural placement, community involvement, recruitment of overseas health professionals, rural bursaries, rural student clubs, and an expansion of the undergraduate rural health curriculum 2,6,9,[12][13][14] .
The aim of rural health education is to increase students' familiarity with rural practice 8 . While undergraduate training has endeavoured to increase the recruitment and retention rates of rural and remote health professionals 2,15,16 , the traditional undergraduate curriculum has not actively included rural practice 17 . Undergraduate rural fieldwork placements have been identified as having a positive influence on students' perceptions of rural and remote practice 17,18 ; however, the extent to which the undergraduate curriculum as a whole influences students' perceptions is not known 19 , and this should be determined 20 .
Previous studies on students' perceptions of rural and remote practice have focused mainly on medical students 4,8 .
Although these results may be transferable to other health disciplines, discipline-specific data should also be examined 8 .
Aim
The purpose of this article was to add to existing information about allied health students, particularly occupational therapy students, and rural and remote practice by reviewing the literature on occupational therapy students' perceptions of rural and remote practice.
The literature review will identify a variety of influencing factors and the main characteristics of rural practice in relation to the future employment of students.The effect of the undergraduate rural training programs on students' perceptions will be examined.
Search strategy
Sources were identified through aggregated databases via the James Cook University Library catalogue. The databases searched were CINAHL, Informit and Medline, using the key words: undergraduate, curriculum, university, student/s, recruitment, rural, remote, country, 'allied health', 'occupational therapy' and perception/s. Some resources were identified from the reference lists of published articles.
Searches were limited to English-language articles available in full text from 1985 to 2008.
Study inclusion criteria
The initial study inclusion criteria were cohort or cross-sectional studies of undergraduate occupational therapy students that explored factors influencing their perceptions towards rural and remote practice, and included qualitative and quantitative designs. Seven articles were identified in the initial search. However, due to the limited number of identified articles, the inclusion criteria were extended to studies of allied health graduates and allied health professionals working in rural and remote areas, in order to obtain a more holistic view of rural and remote practice issues. A further criterion was to identify undergraduate health degree course content or curriculum in relation to rural and remote practice.
Definition of rural and remote areas
The words 'rural' and 'remote' are used frequently in the literature. However, these terms do not have a single definition or classification system 9,21 . Government departments and rural organisations have acknowledged this discrepancy in terminology for many years 21 . Consequently, differing classification systems and interpretations of 'rural' and 'remote' have been utilised in the literature 19 . For example, Devine 2 used the Accessibility Remoteness Index of Australia (ARIA) to select participants, while Lee and Mackenzie 3 utilised the Rural Remote and Metropolitan Area (RRMA) classification. In addition, population measures were used by Sheppard 11 to determine rurality.
While resources were collected for this article, no specific criterion was set for the terms 'rural' and 'remote'. Therefore, the difference between each author's interpretation of 'rural' and 'remote' will not be considered. Rather, the terms 'rural' and 'remote' as used by the authors were considered general terms describing those areas.
Profile of articles
A total of 57 potential references were identified in the initial search. Of those 57 references, 13 could not be retrieved due to limited access. A further two studies were excluded because they described specific university rural programs without identifying a relationship with students' perceptions.
Identification of key articles
Key articles were identified to obtain an overview of issues relating to students' perceptions of rural and remote practice.
It was intended that the identified articles would direct the literature review, and the following selection criteria were set:
• Study participants included either occupational therapy students or occupational therapists
• Students'/therapists' perceptions or attitudes towards rural and remote practice, or factors influencing their perceptions, were discussed
• Studies were conducted in Australia.
A total of 10 articles were identified using these criteria.
Description of key articles
The 10 key articles identified are shown (Table 1). Other components of the university curriculum, such as the theoretical content, were not identified as influential.
Distance from family was also considered important when considering future employment. However, if the placement was conducted in a regional city, the experience was not found helpful in choosing their current rural position. Rural-based theoretical subjects in the occupational therapy program were also found to be valuable. Participants believed that the rural-based subjects allowed them to develop essential skills in rural practice.
Diversity needs in rural and remote practice
Diverse healthcare needs: McAllister et al. 18 found that a variety of cases and the opportunity to gain general practical experience were common themes mentioned by participants after the completion of rural fieldwork placements. Other rural health professional studies have also identified diversity in clinical experience as a positive element 2,3 that provides an opportunity to enhance future careers by broadening knowledge and skills 3 .
Interdisciplinary approach in rural and remote practice: An interdisciplinary approach is often taken by rural health professionals in order to meet the diverse healthcare needs of rural and remote populations 9 .
Interdisciplinary team experiences during a placement were valued by students from multiple health disciplines 18 , and this approach was identified by Devine 2 as one reason health professionals were attracted to rural practice.
Other general skills required in rural and remote practice: Sound administrative and broader management skills have been identified as essential for rural and remote health professionals, and well established time management and organisational skills may ease their stress 2 . Among other important identified professional skills were problem solving and networking 2,3 .
Professional support in rural and remote practice: Lack of professional support has been identified as an issue 2 , with new graduates working in rural and remote areas reported to be reluctant to work as a sole therapist for this reason 3 .
Because one-third of rural therapists are new graduates, sound professional support of their role may encourage others to seek rural employment 3 .
Factors influencing perceptions/decisions to work rurally
Students' rural background: Studies on allied health students and rural allied health practitioners have identified having a rural background as one of the strongest factors in starting a rural career 2,3,4,8 . A rural background includes being brought up in rural and remote areas, in addition to previous living or working experiences in those areas 2,4 .
Undergraduate students from rural and remote areas are under-represented in tertiary education, particularly in health courses, for a number of reasons, including geographical isolation, social and economic issues, and lack of resources and career promotion opportunities 9 .
Rural lifestyle: Two studies identified the rural living environment as an attractive feature of rural and remote practice 3,18 , with aspects such as a welcoming community, friendly people and a relaxed atmosphere identified as valued elements 17,18 .
Desire to work as a specialist in future employment: Being a 'specialist' is valued in metropolitan practice settings, with rural health professionals described as 'specialists' or 'specialist generalists' 9 . However, as rural and remote practice is not yet recognised as a specialised area 9 , students may be discouraged from considering rural and remote practice 22 .
In addition, despite students developing transferable knowledge and skills, the fact that undergraduate education is often urban-based 17,22 may send a message that urban practice is more valuable than practice in a rural location 22 .
Fieldwork placement, supervisors and university educators: Academic educators, fieldwork supervisors and staff members influence students' perceptions throughout the undergraduate program 22 ; in particular, fieldwork placements and the clinicians encountered there have been identified as influencing students' career plans 22 . Fieldwork supervisors have a great impact on students' perceptions, with aspects such as ability to teach; attitudes to students and their work; level of support, positive feedback and advice offered to students; and opinions expressed identified by students as influential 13 .
Other influential factors: Five articles discussed other factors influencing participants' perceptions of rural and remote practice, including:
• personal factors, such as marrying a person from a rural and remote area, or having family or friends living in those areas 2,3,22
• a belief that there is increased community appreciation of health professionals in rural areas 18
• an expectation of a better salary or job availability in a rural area 12
• a perception of closer relationships between health professionals and their rural community 18 .
Negative influencing factors on perceptions/decisions to work rurally
The negative aspects students identified are congruent with factors that influence rural practitioners to leave rural practice 23 . These factors included:
• partners' lack of employment opportunities 14
• perceived own capabilities to undertake the position 14
• a lack of appropriate professional development/support services 18
• limited social entertainment facilities in rural and remote areas 18
• geographical location of positions: distance from family and friends 14,18
• a lack of privacy 18
• concerns about large workloads 18 .
The students' perceptions identified in the present review may discourage students from eventual rural employment 22 .
Undergraduate education
The aim of undergraduate rural training is to regularly expose students to rural and remote health practice throughout their training 17 . Integrating rural and remote practice into undergraduate education programs will enable students to obtain the knowledge and skills for rural and remote practice 2,9 . This is associated with students' increased intention to practice rurally 8,22 .
Undergraduate rural education and intention to work rurally:
The undergraduate course program provides the necessary skills and knowledge to fulfil rural health professional roles 2 ; however, a lack of undergraduate rural exposure has been found to prevent students from subsequently practicing in rural and remote areas 17 .
Millsteed 9 reported that occupational therapists working in rural settings felt under-prepared for rural and remote practice when they entered the workforce, and this resulted in short term employment.
McKenna et al. found that occupational therapy students' interest in working rurally increased throughout their course of study; however, the students sought future employment in metropolitan areas 22 . This conflicting finding may be the result of pragmatic concerns, such as distance from family and difficulty making a commitment to rural and remote practice at an early stage of their working life 14 .
Theoretical content in undergraduate program: Few studies have examined the relationship between students' perceptions and the theoretical subjects in an undergraduate program 2,22 . This is due to greater importance being placed on fieldwork or practical experience as influencing students' perceptions of rural and remote practice 22 .
Undergraduate
Lee and Mackenzie reported that new rural graduate occupational therapists had not completed specific rural curriculum content during their undergraduate program 3 .
Devine 2 found that specific rural practice related subjects during the undergraduate course were beneficial for rural occupational therapists and prepared them for their rural position. McKenna et al. also found that particular university subjects influenced occupational therapy students when considering future career plans, and suggested further research should explore the effectiveness of undergraduate educational programs as a whole in influencing students' perceptions 22 .
Fieldwork placement: Fieldwork placement has proved successful for enhancing students' positive perceptions regarding rural practice 18 . After rural fieldwork placement, allied health students' intentions to work rurally increased 8,17 .
Russell et al. found that occupational therapy students felt well prepared to seek employment in rural and remote areas following rural fieldwork placements 17 . The participants were provided with clinical skills as well as administrative skills 17 , both of which have been identified as necessary in rural and remote practice 2,3 .
The importance of a rural fieldwork placement was also described by rural occupational therapists 2 . Lack of rural fieldwork placement opportunities may contribute to a shortage of rural health professionals 17 . Support should be provided for rural therapists during student supervision, and financial support is recommended for students during fieldwork to ensure increased uptake 22 .
Positive elements of fieldwork placements: Successful fieldwork experiences were found to have a positive impact on students' perceptions of rural and remote practice 4 .
Although it is difficult to determine what makes a positive experience, the relationships among supervisors, other professionals and students during placements appear to be the key to success 14 .
Length of fieldwork placement:
The length of fieldwork placements has been identified as influencing students' perceptions 14 . Playford et al. found that shorter placements (4 weeks or less) had a positive influence on students' perceptions and led to future employment 4 .
Shorter placements for urban-background students may decrease the financial burden and social isolation, while providing a positive rural practice experience 4 .
Voluntary versus compulsory fieldwork placement: Voluntary fieldwork placement was found to have a significant relationship with future rural practice 4 .
Forty-four percent of students who completed a voluntary rural placement were found to practice rurally following graduation, compared with 23% who completed a compulsory placement (p < 0.001) 4 . It was suggested that these students used rural placements as a transition between increasing awareness of rural and remote practice and active intention to work rurally 4 . The voluntary nature of the rural placement was seen as important and leading to eventual rural employment 4 .
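The 44% versus 23% comparison above is a difference between two independent proportions. A minimal sketch of how such a difference is tested for significance follows; the group sizes are hypothetical assumptions, since the cited study reports only percentages and a p-value:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided z-test for a difference between two independent proportions.

    x1/n1 and x2/n2 are the observed counts and group sizes.
    Returns (z statistic, two-sided p-value) using the pooled-proportion
    standard error and a normal approximation.
    """
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail
    return z, p_value

# Hypothetical counts consistent with the reported 44% vs. 23% split
z, p = two_proportion_z(88, 200, 46, 200)
print(f"z = {z:.2f}, p = {p:.2g}")
```

With group sizes anywhere near this magnitude, a 21-percentage-point gap yields p well below 0.001, consistent with the significance the study reports.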
Suggestions for undergraduate rural program: Specific examples of rural practice and a variety of content regarding rural health needs must be provided throughout undergraduate training 22 . Devine 2 found that rural occupational therapists identified the need for greater mental health content in the undergraduate curriculum. Greater opportunities for direct contact with rural health professionals during the undergraduate program were also identified as important by occupational therapists 2 .
Conclusion
This study was limited by having no set criteria for the terms 'rural' and 'remote', and the fact that each examined article used different criteria to measure rurality. This is not a desirable situation for a review 24 .
Despite this limitation, the study concluded that there is a serious shortage of rural health professionals in Australia and internationally. Many strategies have been implemented to overcome this issue. Integration of a rural program throughout undergraduate education is one of these strategies.
A diversity of healthcare needs is a characteristic of rural and remote practice, and an interdisciplinary approach is often employed to cater for this. Additionally, the need for multiple skills, including administrative skills, was identified as essential for a rural practitioner. Lack of support for rural therapists was identified, so professional support, especially for new graduates, is recommended. A student's rural background has been identified as one of the most positive influential factors in students' perceptions of rural practice. Under-representation of students with a rural background in the health professional undergraduate education system must be addressed.
The desire to be a 'specialist', a role that is valued in metropolitan practice settings, and the fact that rural practice is yet to be established as a specialised area may prevent students from seeking a rural career.An associated concern is that undergraduate education in metropolitan universities may have a negative influence on students' perceptions of rural and remote practice.
Devine 2
studied six rural occupational therapists who were the first graduating cohort from the James Cook University Occupational Therapy Program, Queensland. At the time of the study, participants had been employed in rural practice for 5 to 18 months. Attraction to the type of position rather than the rural location was identified as a reason for working in their current position. A participant's rural background and family/spouse's rural background were also found to be influential. Participants recognised their undergraduate rural placements as important to their choice of rural position.
Lee and Mackenzie 3
studied five new graduate occupational therapists practicing in rural areas in NSW at the time of the study. A participant's previous rural living experience and family/spouse's rural background were found to influence their decision to practice in a rural area. Additionally, interests in rural practice, the rural lifestyle and clinical experience opportunities were identified as factors that attract participants to rural practice. In this study, undergraduate rural fieldwork placement was not found to influence their decision about work location, as they had chosen to practice in a rural area prior to their university study.

McAllister, McEwan, Williams and Frost 18 conducted a longitudinal qualitative study between 1991 and 1996 with health-discipline students, including occupational therapy students, from the University of Sydney, NSW. Data were collected from 92 of 156 students. The students were asked to submit a 1000 word report following completion of their rural attachment, during which they had developed overall positive attitudes towards rural and remote practice. The identified advantages of rural practice included diverse caseloads, relaxing lifestyle and team work. The challenges of rural practice included lifestyle factors, isolation, reduced professional education opportunities and restricted access to other services.

McKenna, Scholtes, Fleming and Gilbert 22 conducted a cohort study between 1994 and 1997. Eighty-four first year occupational therapy students completed the first questionnaire in February 1994, and 59 final year students completed the second questionnaire in October 1997. Twenty-two percent of students indicated an interest in working in a rural hospital in the first questionnaire. However, this interest had increased to 39% in the second questionnaire. Ninety-five percent of students identified clinical placements as influential in their career plans in the second questionnaire, while 87% of students perceived particular clinicians as influential.

Mills and Millsteed 23 studied 10 occupational therapists who had worked in rural occupational therapist positions in Western Australia and had returned to the metropolitan area within the previous 24 months. Initial challenges identified by participants included the emerging role of an occupational therapist and initiating the operational side of the position. Identified rural practice issues included lack of support, isolation and workload. Although the participants had established social circles, this was found initially difficult due to the nature of small communities. Both personal and professional issues were identified as reasons to leave the position, while experiences in rural practice were valued by the participants.

Millsteed 9 discussed Australian rural and remote area issues in relation to occupational therapy. A shortage of occupational therapists in rural and remote areas was identified and contributing factors were also discussed. Factors that may influence rural recruitment were found to include rural origin, undergraduate education, appropriate skills for rural practice and other lifestyle factors.

Playford, Larson and Wheatland 4 conducted a longitudinal survey with allied health, including occupational therapy, and nursing students from three Western Australian universities between 2000 and 2003. Previous experience living in rural and remote areas was strongly associated with rural employment. However, there was no association found between university rural student club membership and future rural employment. Compulsory placement was not found to be related to rural employment. Most participants identified their undergraduate rural placement as positive, whether they were working in rural and remote or urban areas.

Russell, Clark and Barney 17 compared two groups of students. One group consisted of 21 occupational therapy students from the University of South Australia and La Trobe University in Victoria, who attended the rural student unit (RSU). The second group consisted of 68 occupational therapy students from the University of South Australia undertaking rural and metropolitan fieldwork not in the RSU. The RSU was a 16 month pilot program based in Whyalla, South Australia, which aimed to provide opportunities for students to participate in rural placement and to support students and rural therapists through their learning experience. The RSU students were found to have more positive attitudes towards rural practice (p < 0.05). Additionally, RSU students were identified as more positive about future rural employment opportunities (p < 0.05).
Both rural and urban students use placements to move along a continuum of choice from increasing awareness to active intention 4 . For students from rural or remote areas, a course program focused on rural health reinforces their intentions to practice in those areas 3 . Dalton et al. found the intention of students with a rural upbringing to seek rural employment was higher than that of their urban counterparts. However, the change in students' intentions to work rurally following rural fieldwork placements was more positive among urban-background students 8 . Rural health education should therefore place greater focus on urban-background students 8 .
Fieldwork experience and supervisors' influence on students' future employment intention were also identified. The attributes of influential supervisors included their ability to teach; their attitude towards students and their work; the level of support, positive feedback and advice provided to students; and the opinions they expressed. Other influential factors identified included the rural lifestyle, personal factors, job availability, social isolation and a lack of professional development opportunities, the students' perceptions of their own capabilities, the geographical location of positions and concerns about large workloads. Rural exposure during the undergraduate program has proved beneficial, and while the influence of undergraduate theoretical course content on students' perceptions has been acknowledged, the influence of fieldwork placement has been found to be significant. Future research should explore the relationship between students' perceptions and the whole undergraduate academic curricula. This review of the literature has not only identified various factors influencing students' perceptions of rural and remote practice, but has also found evidence of a gap in the research relating to the influence of the theoretical content of rural undergraduate education. Although fieldwork placements appear to be an important factor, the undergraduate rural program as a whole should be explored to assist in addressing the rural therapist shortage.
Minimizing Risk of Nephrogenic Systemic Fibrosis in Cardiovascular Magnetic Resonance
Nephrogenic Systemic Fibrosis is a rare condition appearing only in patients with severe renal impairment or failure, presenting with dermal lesions and involvement of internal organs. Although many cases are mild, an estimated 5% have a progressive debilitating course. To date, there is no known effective treatment, which stresses the necessity of ample prevention measures. An association with the use of gadolinium-based contrast agents (GBCAs) makes Nephrogenic Systemic Fibrosis a potential side effect of contrast-enhanced magnetic resonance imaging and offers the opportunity for prevention by limiting the use of these agents in patients with renal failure. Toxic in itself, gadolinium is embedded into chelates that allow its safe use as a contrast agent. One NSF theory is that gadolinium chelates distribute into the extracellular fluid compartment and set gadolinium ions free, a process that depends on multiple factors, among which the duration of chelate exposure is directly related to renal function. Major medical societies in both Europe and North America have developed guidelines for the usage of GBCAs. Since the establishment of these guidelines and the increased general awareness of this condition, the occurrence of NSF has been nearly eliminated. Giving an overview of the current knowledge of NSF pathobiochemistry, pathogenesis and treatment options, this review focuses on the guidelines of the European Medicines Agency, the European Society of Urogenital Radiology, the FDA and the American College of Radiology from 2008 up to 2011, and the transfer of this knowledge into everyday practice.
Review
Cardiovascular Magnetic Resonance (CMR) has gained an increasingly important role among diagnostic methods due to its superb soft tissue imaging qualities, absence of ionizing radiation and minimal risks. CMR is one of the fastest-growing areas of MRI, offering deep insight into both cardiac structure and function and thereby reducing the risk of failing to make an accurate diagnosis or of resorting to more invasive tests [1]. Contrast-enhanced imaging techniques allow the assessment of perfusion and tissue viability as well as detailed angiographic studies. The overwhelming majority of these techniques use gadolinium-based contrast agents (GBCAs). From the early days of GBCA-enhanced imaging, these contrast agents were considered safe, with only rare allergic reactions or local irritation from extravasation. However, in 1997 a new disease emerged, originally called Nephrogenic Fibrosing Dermopathy. It was first described by Cowper in 2001 [2], and a relationship to GBCA exposure has been strongly suspected since 2006 [3][4][5][6][7]. The entity was later renamed Nephrogenic Systemic Fibrosis (NSF) when involvement of internal organs was discovered. An NSF registry, www.icnfdr.org, documented over 355 cases, all of which occurred in patients on dialysis or with severe renal dysfunction [8]. Since the connection between NSF and GBCAs became known, changes in MRI protocols focused on prevention have led to a decrease in NSF incidence. Reports show virtually no new NSF cases since 2008 in both patients with normal renal function and patients with renal impairment [9][10][11], in spite of continued use of GBCAs, albeit at lower doses.
Here we review the clinical features of NSF and show how to use GBCA safely in patients at risk for NSF.
Pathobiochemistry of gadolinium
Gadolinium is one of the elements of the lanthanide group. Virtually all of its compounds contain it as the paramagnetic Gd3+ ion, which has seven unpaired electrons in its half-filled 4f shell. Gd3+ has a long electronic relaxation time based on its totally symmetric S state, making it well suited for use as an MR contrast agent. It accelerates the relaxation of the water molecules present in the tissue, giving rise to an enhanced signal on T1-weighted images and, together with appropriate sequence parameters, improved image contrast. However, gadolinium, like most metals, interferes with the complex biochemical processes of living organisms. In particular, it can act as a competitive inhibitor of calcium ions because its ionic radius (0.97 Å vs. 1.06 Å for Ca2+) is similar while its ionic charge is higher. As a result, various physiological processes involving Ca2+ can be influenced by the presence of Gd3+, such as the Ca2+-activated ATPase in the sarcoplasmic reticulum of skeletal muscle fibres, the reticuloendothelial system and enzymes such as dehydrogenases and glutathione S-transferases. It is also known that Gd3+ has an inhibitory effect on Kupffer cells [4].
The toxic effects of Gd3+ can be suppressed by encasing it in an organic chelator. A variety of such chelators have been approved by the FDA/EMA for use in patients, and others are still being investigated [12][13][14]. The contrast agents in clinical use are based on the linear ionic chelator diethylenetriamine pentaacetic acid (often dubbed DTPA or "pentetic acid"), the linear ionic chelator benzyloxypropionic tetraacetate (BOPTA) or the cyclic ionic chelator tetraazacyclododecane tetraacetic acid (DOTA); the cyclic non-ionic chelators are derived from tetraazacyclododecane (DO3A). Stability of these agents is characterized in two ways: thermodynamic stability describes the tendency of the chelate to dissociate into its components given an unlimited amount of time, and is expressed numerically as the logarithm of the stability constant; kinetic stability describes the timescale of the dissociation, expressed either as a rate constant or a half-life. Both characteristics depend on the surrounding milieu: decreasing pH and increasing temperature favour dissociation [15]. As a rule, the macrocyclic chelates are several orders of magnitude more stable with regard to dissociation and transmetalation than their linear counterparts [16][17][18]. Furthermore, the presence of ions such as Ca2+, Cu2+ or Zn2+, which can displace Gd3+ from the chelate in a transmetalation reaction, promotes the unwanted release of Gd3+, exposing tissues to its toxic effects [19]. Indeed, early in vivo studies using radioisotope-labelled Gd chelates indicate a relationship between kinetic stability and tissue uptake of gadolinium [20].
Besides the characterization in terms of macrocyclic and linear chelate structures, the nine currently available GBCAs (Table 1, Figure 1) can also be categorized by ionicity. Non-ionic Gd3+ chelates cause less osmotic stress because they do not require counterions such as Na+ in their formulations; they are closer to ionic neutrality, have lower viscosity, and are less hydrophilic than ionic Gd3+ chelates [21]. As with iodinated contrast, non-ionic Gd3+ chelates appear to have a lower rate of allergic reactions [22]. Unfortunately, non-ionic linear chelates are also less stable than their negatively charged analogs. With the exception of three linear ionic GBCAs that have lipophilic groups in their chelate structures (Gadobenate, Gadoxetate, Gadofosveset), GBCAs are eliminated exclusively via the renal pathway. The estimated half-life of renal elimination of GBCAs in patients with normal renal function is about 90 min. However, with decreasing renal function the effective half-life increases to up to 18 to 34 hours [16,23]. Within this timescale, gadolinium release from the linear chelates may be significant [20].
Two of the three above-mentioned linear ionic GBCAs (Gadobenate, Gadoxetate) have an aromatic component within the chelate structure that allows hepatocellular uptake and partial excretion via the biliary pathway. The third (Gadofosveset) has a biphenylcyclohexyl group that reversibly binds to albumin, extending the plasma half-life to about 18.5 hours in patients with normal renal function [16,19].
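The dependence of chelate exposure on renal function described above can be illustrated with a simple first-order elimination model. This is a sketch using the half-lives quoted in the text with illustrative time points, not a pharmacokinetic tool:

```python
def fraction_remaining(t_hours, half_life_hours):
    """Fraction of a renally eliminated agent still in the body after
    t_hours, assuming simple first-order (exponential) elimination."""
    return 0.5 ** (t_hours / half_life_hours)

# Half-lives quoted in the text: ~90 min (1.5 h) with normal renal
# function, up to 18-34 h with severely impaired renal function.
NORMAL_T12, IMPAIRED_T12 = 1.5, 34.0

for t in (6, 24, 72):
    print(f"t = {t:2d} h   normal: {fraction_remaining(t, NORMAL_T12):.5f}   "
          f"impaired: {fraction_remaining(t, IMPAIRED_T12):.5f}")

# Cumulative exposure (area under the curve) scales linearly with the
# half-life, so a 34 h half-life implies roughly 23 times the chelate
# exposure of a 1.5 h half-life -- the window in which linear chelates
# may release Gd3+.
print(f"exposure ratio: {IMPAIRED_T12 / NORMAL_T12:.1f}x")
```

The model makes the clinical point concrete: after 24 h a patient with normal renal function retains essentially none of the agent, whereas a patient with severe renal impairment still retains the majority of the injected dose.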
Pathogenesis of nephrogenic systemic fibrosis
NSF occurs in patients with acute or chronic renal failure [32,33]. The vast majority of NSF cases (approximately 95%) have occurred in renal failure patients who received GBCA-enhanced MR imaging prior to symptom onset [34]. Thus, it is likely that GBCAs play a role in triggering NSF. All GBCAs stimulate the proliferation of fibroblasts, with the linear GBCAs showing more potent stimulation than the macrocyclic GBCAs [35][36][37][38], which raises the possibility that dissociation of gadolinium from the chelator is not necessary for NSF to occur. One hypothesis is that macrophages phagocytose the Gd3+ complexes, which then, located in intracellular lysosomes, stimulate the production of cytokines and growth factors [39,40]. Local inflammation may be due to local Gd3+ deposition triggered by local CD68+ or XIIIa+ dendritic cells, with a systemic inflammatory response associated with CD34+ fibrocytes that originate from the bone marrow [41]. The heterogeneous phenotypes of cells found in NSF lesions imply that local and systemic inflammatory mechanisms might coexist [42]. TGF-beta1 levels are elevated in patients with NSF, and some studies have shown increased decorin levels, alpha-smooth muscle actin and hyaluronan synthesis [38,[43][44][45]. A modulation of collagen synthesis with an increase in collagen I and III production as well as an increase in fibronectin expression has been documented [38,46], as have increased VEGF levels and osteopontin and TIMP-1 expression [47]. For at least one of the GBCAs (Gadodiamide), an effect on the expression of chemokine genes has been traced to increased activation of an NFκB pathway in the presence of the agent, showing that exposure to GBCAs and the included Gd3+ is a potent stimulator of normal macrophages [48]. Most recent studies suggest that other cells might be involved in the development of NSF as well, such as tissue monocytes and macrophages in the peripheral blood [49].
Additionally, it has been discussed that pro-inflammatory events such as vascular thrombosis, myocardial infarction, trauma, sepsis or recent surgery might contribute to the development of NSF [33]. Sepsis might even trigger the onset of NSF without exposure to GBCAs [50]. A recent work showed that the presence of GBCAs increases DNA damage in lymphocytes [51]. High iron and erythropoietin levels are also suspected to contribute to the development of NSF [44,[52][53][54]. The Gd3+ complexes seem to have an effect on calcium phosphate, as one study showed that calcium phosphate precipitation is increased, thus activating macrophages [55]. Figure 1 shows the molecular structures of all currently available gadolinium-based contrast agents; the three agents in the lower section (asterisk) are presently considered the safest due to their macrocyclic structure.
Symptoms, diagnosis and differential diagnosis
Patients with NSF present with skin lesions that typically begin on the distal extremities as indurated plaques and papules, especially on edematous lower extremities; over days to several weeks the lesions spread to the upper extremities, the trunk and the periorbital region and develop a woody texture. The plaques are described as brawny, and the skin may develop hyperpigmentation [56,57]. NSF lesions usually occur symmetrically.
Patients may report sharp pain as well as pruritus, causalgia and paresthesias in afflicted areas. Stiffness and joint contractures can lead to decreased mobility. The progression is rapid in an estimated 5% of cases [8], leading to immobility within weeks, and in a few instances death has been attributed to NSF. Besides the skin lesions, internal organs can be afflicted, including the lungs, heart, liver, bones and kidneys [33,34]. NSF diagnosis is usually postulated upon the medical history and physical exam and confirmed with a deep punch skin biopsy [2,56,58,59]. Eosin and hematoxylin staining is used to demonstrate the typical features. The skin biopsy shows dermal fibrosis with a high density of CD34-positive and procollagen I-positive fibrocytes (circulating fibrocytes) and collagen bundles with prominent clefts between the bundles. Besides these collagen bundles, elastic fibres can be detected as well. Additionally, but not required for the diagnosis of NSF, factor XIIIa-positive dendritic cells may be detected [56]. However, the histopathological and clinical features can overlap with other entities, among them lipodermatosclerosis, scleroderma and morphea, scleromyxedema, porphyria cutanea tarda, Spanish toxic oil syndrome, eosinophilic fasciitis, eosinophilia-myalgia syndrome and chronic graft-versus-host disease [2,58,60]. It is necessary to combine the clinical features and the histopathologic findings in order to avoid misdiagnosis.
Exposure to gadolinium might raise suspicion of NSF; however, it has to be stressed that GBCA exposure is not part of the diagnostic criteria. The lack of GBCA exposure does not exclude NSF, as in about 5% of NSF patients no GBCA exposure prior to symptom onset could be found.
Besides the development of NSF, other more acute reactions to GBCA exposure have been described. Symptoms resembling septicaemia have been described within 12-36 h after administration of GBCAs [61]. Allergic reactions, albeit rare, are another GBCA risk [22,62]. The possibility of deterioration in renal function after GBCA exposure in renal failure patients is controversial, but in the usually administered dosages GBCAs are less nephrotoxic than iodinated contrast agents, even at the high doses used for MR viability imaging and MR angiography [63,64].
Incidence and prevalence
The incidence of NSF prior to 2008 varied widely among institutions, ranging from 0.26% in patients on dialysis without any contributing factors up to 8.8% in patients with an eGFR below 15 ml/min/1.73 m² who were not on hemodialysis [50,65]. A study published in 2007 at another center calculated an absolute risk of developing NSF in patients on chronic dialysis of 2.4% per GBCA-enhanced study and an absolute risk of 3.4% per patient [66]. The combination of renal insufficiency and pro-inflammatory processes (e.g. operations, thromboembolic events, endothelial/vascular injury) raises the NSF incidence to 4.6% [33]. The incidence is further increased when renal failure patients on dialysis develop sepsis, and has been estimated at 6.3% [50]. It is also noteworthy that the overwhelming majority of cases were either on dialysis or had an eGFR < 15 ml/min/1.73 m². Only a handful of cases occurred in patients with an estimated GFR > 30 ml/min/1.73 m², and most of these were patients in acute renal failure, where GFR estimation is unreliable. The incidence of reported NSF cases in patients with normal renal function is zero.
The vast majority of NSF cases have been reported in patients who underwent GBCA-enhanced imaging; however, about 5% of NSF cases showed no traceable exposure to GBCAs prior to NSF onset [34]. Exposure to GBCAs seems to be a cofactor in developing NSF, with the incidence depending, besides the patient collective already mentioned, on the type and dose of GBCA. In patients who received a GBCA dose according to the labeling (e.g. 0.1 mmol/kg), the overall incidence is near zero, regardless of renal function. Based on a report by Prince (2008, [65]), NSF has been associated with the use of higher GBCA doses, such as double or triple doses [71]. Most likely, not all cases have been reported to the NSF registry, but directly to the FDA or not at all.
NSF is rarely seen in pediatric patients [72][73][74]. The youngest known case of NSF is a 6-year-old patient, even though in the past many newborns with immature kidneys received high doses of GBCAs for multiple MR scans to assess congenital heart disease. This suggests that infants and newborns may be a protected population [63].
Treatment of nephrogenic systemic fibrosis
Many NSF patients have improved or even been cured with restoration of normal renal function; this has occurred when acute renal failure resolves and after renal transplantation. Otherwise, there is no proven effective therapy for NSF, and to date the treatment options are limited to symptomatic relief. Physical therapy supposedly improves the range of motion [75]. Pain medication is usually needed and includes opioids, NSAIDs, steroids and antidepressants. A single case report describes a good pain-relieving effect of acetazolamide in a meningeal affection of NSF. In some cases intravenous procainamide has been shown to suppress the nociceptive aspects of pain; however, its application has to be performed under continuous cardiac monitoring. The treatment options for NSF are based on case studies with limited numbers of patients. In the early days of NSF, the benefits of intense hemodialysis were discussed. In patients with end-stage kidney disease, approximately 98% of the gadolinium freely circulating in the blood is removed with 3 hemodialysis cycles; not more than 30% of the gadolinium chelates are removed with conventional dialysis techniques. Use of ultrapure dialysate and a high bicarbonate concentration combined with high flow for a duration of 5-6 h is said to achieve the best clearance rate [76]. In some cases a renal transplant and the resulting improved renal function led to a decrease in NSF symptoms. Other published therapy options include extracorporeal photopheresis, plasmapheresis, UV-A1 therapy, high-dose intravenous immunoglobulins, imatinib mesylate, rapamycin and pentoxifylline. The NSF registry additionally mentions some anecdotally reported therapeutic regimens, namely topical Dovonex, Cytoxan and thalidomide.
The use of oral steroids has been discussed as well; however, the reports on their effectiveness were inconsistent (Table 2) [76][77][78][79][80][81][82][83][84][85][86][87][88][89].
Considering the small number of case reports and the fact that NSF can possibly be avoided by preventive measures, the focus of further attention should be primarily on prevention. The FDA was the first agency to release a public health advisory, back in 2006, when the association between GBCAs and NSF was not yet as convincing. However, the linkage of high-dose GBCAs with NSF was suspected; accordingly, the advisory stated that patients with advanced kidney failure (on dialysis or with an eGFR < 15 ml/min/1.73 m²) should receive GBCA-enhanced imaging studies only if deemed necessary, and careful documentation is also obligatory (Table 3) [90]. It should also be noted that only little data on the safety of GBCAs is available for pediatric patients, and almost none for children younger than 2 years. The European Medicines Agency (EMA) has defined three risk classes of gadolinium-based contrast agents and, for each of them, measures to reduce the risk of NSF; the EMA's recommendations on the use of GBCAs are based on this classification (see Table 4). However, the concern about newborns is controversial because there are no known cases of NSF in any patient younger than 6 years, in spite of an extensive use of high doses of gadolinium-based contrast agents for evaluating congenital heart disease in this population [63]. The guidelines also point to the necessity of good documentation of contrast agent type and dose [92], which has not been reliable in the past (Table 4) [93].
The European Society of Urogenital Radiology (ESUR) agrees with the EMA on three risk classes for gadolinium-based contrast agents. Additionally, the ESUR defines three risk classes for patients. Patients with chronic kidney disease (CKD) stage 4 and 5 (eGFR < 30 ml/min), patients on dialysis and patients with impaired renal function and a pending or recent liver transplant are considered high-risk patients. Patients with CKD stage 3 and small children younger than 1 year have a lower risk, and patients with healthy renal function are at no risk for developing NSF. The ESUR guidelines also address the treatment of pregnant women: due to lack of experience, these patients are to be treated according to the protocol for infants [94].
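The ESUR patient risk classes summarized above map naturally onto a small decision rule. The sketch below is illustrative only (the function name and the simplifications are mine, not the guideline's) and is not a clinical decision tool:

```python
def esur_patient_risk(egfr, on_dialysis=False, liver_transplant=False,
                      age_years=None):
    """Rough patient risk class following the ESUR scheme summarized in
    the text. Illustrative simplification, not a clinical tool."""
    # High risk: CKD stage 4-5 (eGFR < 30), dialysis, or impaired renal
    # function with a pending/recent liver transplant.
    if on_dialysis or egfr < 30 or (liver_transplant and egfr < 60):
        return "high"
    # Lower risk: CKD stage 3 (eGFR 30-59) or children under 1 year.
    if egfr < 60 or (age_years is not None and age_years < 1):
        return "lower"
    # Otherwise: no NSF risk with regard to renal function.
    return "no risk"

print(esur_patient_risk(20))                    # high
print(esur_patient_risk(45))                    # lower
print(esur_patient_risk(90, on_dialysis=True))  # high
print(esur_patient_risk(95))                    # no risk
```

Encoding the classes this way also makes the guideline's priority explicit: dialysis dominates eGFR, and eGFR thresholds alone never downgrade a dialysis patient.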
Further preventive measures should be pursued. It is theorized that low serum iron levels and an optimized calcium, phosphate and acid-base balance before injection of GBCAs may protect against the development of NSF [85]. Recent research has focused on limiting GBCA doses as far as possible while still ensuring good image quality. Some authors have reported sufficient image quality at 3 T with 0.025 mmol/kg GBCA for abdominal imaging [95], 0.05 mmol/kg for soft tissue characterization [96] and 0.05 mmol/kg for vascular malformations [97]. Modifications of CE-MRA protocols offer further reduction of contrast agent doses [98].
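The dose reductions cited above translate directly into injected volume. A minimal sketch (the 0.5 mmol/mL concentration is typical of many agents but is an assumption here; gadobutrol, for instance, is formulated at 1.0 mmol/mL, so always check the product label):

```python
def gbca_volume_ml(weight_kg, dose_mmol_per_kg, conc_mmol_per_ml=0.5):
    """Injected volume (mL) for a weight-based GBCA dose at a given
    formulation concentration. Illustrative arithmetic only."""
    return weight_kg * dose_mmol_per_kg / conc_mmol_per_ml

# 70 kg patient: standard label dose vs. the reduced doses cited above
for dose in (0.1, 0.05, 0.025):
    print(f"{dose} mmol/kg -> {gbca_volume_ml(70, dose):.1f} mL")
```

For a 70 kg patient the quarter dose of 0.025 mmol/kg thus corresponds to only 3.5 mL at 0.5 mmol/mL, a quarter of the 14 mL label-dose volume.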
Guidelines of the American College for Radiology (2010)
The American College of Radiology (ACR) recommendations of 2010 agree with the other guidelines on the associations between GBCA exposure, dosage and kidney function. The need to adequately assess renal function is highlighted. In the opinion of the ACR, patients at risk, e.g. patients with a past medical history of renal disease, aged over 60 years, or with hypertension or diabetes mellitus, should have their renal function tested within 6 weeks prior to the planned CMR study; additionally, a verbal assessment might be helpful. The recommendations for the specific patient groups, divided by renal function, are listed in Table 5 [99].
The recommendations of the above-mentioned guidelines can be incorporated into an easy-to-use short worksheet for non-pregnant adults for everyday use (Figure 2; Table 6: list of current label doses).
Conclusions
NSF is one of the major risks associated with GBCA-enhanced CMR procedures. Many of its features and pathogenetic characteristics have already been revealed; however, important aspects, such as the treatment of NSF, still need attention. For the time being, treatment is limited to symptomatic relief and prevention. Several up-to-date guidelines are available explaining the aspects of a safe GBCA-enhanced CMR procedure. The benefits of GBCA-enhanced CMR techniques are numerous and should not be withheld rashly. Even with the lack of new NSF cases, general awareness of NSF has to be strengthened in order to prevent the reoccurrence of this GBCA side effect. EN: Significant research support: Bayer Healthcare; all others have no conflicts of interest.
"year": 2012,
"sha1": "723e95197c2bab99bf053687b1ee6fa8f828c898",
"oa_license": "CCBY",
"oa_url": "https://jcmr-online.biomedcentral.com/track/pdf/10.1186/1532-429X-14-31",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3478fafd0db43076d8a7f8f4fb8c5e659e970ad8",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Unlocking the genetic diversity of Creole wheats
Climate change and slow yield gains pose a major threat to global wheat production. Underutilized genetic resources, including landraces and wild relatives, are key elements for developing high-yielding and climate-resilient wheat varieties. Landraces introduced into Mexico from Europe, also known as Creole wheats, are adapted to a wide range of climatic regimes and represent a unique genetic resource. Eight thousand four hundred and sixteen wheat landraces representing all regions of Mexico were characterized through genotyping-by-sequencing technology. The results revealed sub-groups adapted to specific environments of Mexico. Broadly, accessions from the north and south of Mexico showed considerable genetic differentiation. However, a large percentage of landrace accessions were genetically very close even though they belonged to different regions, most likely due to the relatively recent (nearly five centuries ago) introduction of wheat into Mexico. Some of the groups adapted to extreme environments and accumulated high numbers of rare alleles. Core reference sets were assembled using multiple variables simultaneously, capturing 89% of the rare alleles present in the complete set. The genetic information about Mexican wheat landraces and the core reference set can be effectively utilized in next-generation wheat varietal improvement.
Adaptation of wheat landraces in their native environments has resulted in the accumulation of favourable alleles for domestication traits [14][15][16]. Mexican landraces, also known as Creole wheats, were brought to the Americas from the 16th through the 18th centuries and gradually became adapted to the local environments. Their genetic diversity is believed to be depleted in the germplasm collections of Spain and Europe 17. Mexico has climatic diversity because of its large variety of landscapes, from tropical and temperate forests to desert areas 18. Broadly, tropical and dry climatic regimes prevail in the south-central and northern parts of Mexico, respectively. During the cropping season in some of the northern states (e.g., Durango), temperatures can reach up to 40 °C (Supplementary Figures 1 and 2). Landrace accessions adapted to the varying climates of Mexico should thus harbour useful genetic variation for stress tolerance. The large-scale introduction into breeding pipelines of the genetic diversity available in these landraces could greatly help in developing next-generation varieties, leading to increased global wheat production.
Characterization of a large collection of landraces adapted to wide climatic regimes is an urgent need of wheat breeders worldwide. An ambitious project of the International Maize and Wheat Improvement Center (CIMMYT), Seeds of Discovery 19, aims to characterize all the accessions in the Wheat Germplasm Bank (~120,000 accessions) and move the unexploited variation into the breeding pipelines 20. This is being achieved by genotyping with the state-of-the-art genotyping-by-sequencing (GBS) technology, which provides a cost-effective platform for genotyping thousands of accessions. Numerous platforms are available for assessing single nucleotide polymorphisms (SNPs), and the cost per data point is lowest for GBS, rendering it a suitable platform for large-scale germplasm characterization. A total of 9,811 accessions collected from different Mexican states during the 1990s are maintained in the CIMMYT wheat germplasm bank 21. An in-depth and systematic characterization is required to capture the full potential of these valuable genetic resources.
An important step in utilizing gene banks is defining a manageable core reference set as a subset of the larger germplasm collection 22. A manageable core reference set is one that breeders can precisely evaluate for their trait(s) of interest. Core reference sets have in the past been established mainly on the basis of a single variable [23][24][25], for example genotypic data, phenotypic measures or geographical distribution. Simultaneous use of multiple types of variables (genotype, phenotype, geography, etc.) while classifying a germplasm set should provide a more robust diversity estimate for application in plant breeding. However, merging different types of variables in one matrix is a challenge, and researchers are often reluctant to do such analyses because of the limitations of software tools or methods 26. To date there is no report in wheat of a core set developed using multiple variables. A strategy for utilizing both discrete and continuous variables, combining hierarchical multiple-factor analysis (HMFA) and the two-stage Ward Modified Location Model (Ward-MLM), has been proposed by Franco et al. 27. Gower 28 proposed a method for simultaneously analyzing continuous and categorical variables by transforming each to a 0-to-1 scale, irrespective of the type of variable. This approach was followed in the present study to define a core reference set of Mexican wheat landraces.
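Gower's idea of putting every variable on a 0-to-1 scale before combining them can be sketched as follows. This is a toy illustration with hypothetical accession traits, not the study's actual pipeline:

```python
def gower_distance(a, b, ranges, is_categorical):
    """Gower-style distance for mixed data: continuous variables are
    scaled to [0, 1] by their observed range; categorical variables
    contribute 0 on a match and 1 on a mismatch."""
    total = 0.0
    for x, y, r, cat in zip(a, b, ranges, is_categorical):
        if cat:
            total += 0.0 if x == y else 1.0
        else:
            total += abs(x - y) / r
    return total / len(a)

# Two hypothetical accessions: (plant height cm, days to heading, region)
ranges = [60.0, 40.0, None]      # observed range of each continuous trait
is_cat = [False, False, True]
acc1 = (110.0, 95, "north")
acc2 = (140.0, 75, "south")
print(gower_distance(acc1, acc2, ranges, is_cat))  # 0.6666666666666666
```

Because every variable contributes on the same 0-to-1 scale, genotype, phenotype and geography can be mixed in one distance matrix and fed to any standard clustering method.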
The objectives of this study were to: (1) characterize Mexican wheat landraces conserved in the CIMMYT wheat germplasm bank and (2) develop a core reference set using multiple variables simultaneously. Of the 8,416 Mexican landraces subjected to analysis, 7,986 were found to be hexaploid based on their genotypic profiles (Supplementary Figure 3), phenotypic evaluation and grain characteristics. The total number of high-quality, filtered SNPs used for the study was 20,526, of which 8,297 were present at a frequency of less than 0.05. All these markers showed heterozygosity in the range 0-30.5%. The percentage of GBS-based SNPs with allele frequencies between 0.05 and 0.95 was 59.6; the remaining 40.4% had allele frequencies <0.05 or >0.95, which enabled us to identify the useful genetic variation presented in the following sections of this report. Means and variances of the phenotypic evaluation and grain characteristics are presented in Table 1.
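The allele-frequency filtering described above can be reproduced from a genotype matrix. A minimal sketch (the 0/1/2 alt-allele dosage coding and the toy matrix are assumptions mirroring common GBS practice, not the study's exact pipeline):

```python
def minor_allele_freqs(genotypes):
    """Per-SNP minor allele frequency (MAF) from a 0/1/2 dosage matrix
    (rows = accessions, columns = SNPs)."""
    n = len(genotypes)
    mafs = []
    for j in range(len(genotypes[0])):
        alt = sum(row[j] for row in genotypes)  # alt-allele count
        p = alt / (2 * n)                       # alt-allele frequency
        mafs.append(min(p, 1 - p))
    return mafs

# Toy matrix: 4 accessions x 3 SNPs
geno = [
    [0, 2, 1],
    [0, 2, 0],
    [0, 1, 0],
    [1, 2, 0],
]
mafs = minor_allele_freqs(geno)
print(mafs)  # [0.125, 0.125, 0.125]

# SNPs with 0 < MAF < 0.05 would be flagged as rare, mirroring the
# study's threshold; none of these toy columns qualifies.
rare = [j for j, f in enumerate(mafs) if 0 < f < 0.05]
print(rare)  # []
```

On the study's scale (7,986 accessions, 20,526 SNPs) the same computation separates the 59.6% of common SNPs from the rare tail discussed in the Results.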
Results
Genetic classification of Mexican hexaploid wheat landraces

The principal component analysis (PCA) explicitly revealed a broad separation of the northern (Durango, Chihuahua and Coahuila) hexaploid landraces from the accessions of southern Mexico (Oaxaca) and the central valley (Mexico, Puebla, Tlaxcala, Queretaro, Toluca, Guanajuato, Hidalgo and Michoacán), with little overlap (Fig. 1). Genetic classification revealed 15 groups of hexaploid landraces. Thirteen of the 15 groups included accessions from just one region: central, southern, or northern Mexico. Some of the groups comprised accessions from specific places in Mexico, such as Durango (Group 5), Oaxaca (Group 8), Mexico (Group 9), Coahuila (Group 11), Michoacán (Group 13), Chihuahua (Groups 6 and 14) and Guanajuato (Group 15). Group 3 had accessions from Chihuahua and Oaxaca, located in the extreme northern and southern parts of the country respectively, whereas Group 7 had lines from the central and southern regions with maximum overlap (Fig. 2). Group 1 showed the highest Shannon's and Nei's diversity indices, followed closely by Groups 4, 7, 8, 12 and 13, which contain accessions from southern and/or central Mexico. In contrast, Group 5 had very low diversity (Table 2). The genetic differentiation (F st) among these 15 groups ranged from 0.041 to 0.277; Groups 3, 5, 6 and 14 were genetically the most divergent (Supplementary Table 1).
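The Shannon and Nei diversity indices used to compare the groups (Table 2) are simple functions of allele frequencies. A sketch with an illustrative biallelic SNP (the frequencies are invented for the example):

```python
import math

def shannon_index(freqs):
    """Shannon's diversity H = -sum(p * ln p) over allele frequencies."""
    return -sum(p * math.log(p) for p in freqs if p > 0)

def nei_diversity(freqs):
    """Nei's gene diversity (expected heterozygosity) = 1 - sum(p^2)."""
    return 1.0 - sum(p * p for p in freqs)

# Biallelic SNP with allele frequencies 0.7 / 0.3 (illustrative values)
freqs = [0.7, 0.3]
print(round(shannon_index(freqs), 4), round(nei_diversity(freqs), 4))
# 0.6109 0.42
```

Averaging either index over all loci gives the per-group values reported in Table 2; both indices are 0 at a fixed locus and maximal when allele frequencies are equal, which is why the low-diversity Durango group (Group 5) stands out.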
The genetic diversity analysis of the above-mentioned 15 groups clustered them into six clusters, referred to as clusters 1-6 (Supplementary Figure 4). Cluster 1 was very distinct from the others and had just one group, containing lines from the northern and southern regions. Cluster 2 contained the largest number of accessions and consisted of lines only from the central region. Clusters 3 and 4 had accessions from two or more regions, while clusters 5 and 6 each had just one homogeneous group. Accessions from the north appeared in four of the six clusters and in five groups; accessions from the central region appeared in four clusters and nine groups; accessions from the south appeared in two clusters and just three groups, including Group 8.

Characterization of the core reference sets

A core reference set of 1,133 landrace accessions was selected from the complete population (Supplementary Table 2; Supplementary Figure 6), and the diversity indexes of the complete and core sets were comparable (Table 3). Phenotypic means and variances were compared using the ratio of complete set to core set (Supplementary Table 3); the ranges for means and variances were 0.92-1.02 and 0.81-1.04, respectively. Finally, the representativeness of the core reference set is shown in the multidimensional scaling (MDS) graph in Fig. 3. Overall, comparative analysis of phenotypic variances, diversity measures and allele frequencies shows that the core reference set is representative of the complete set. The core reference set was also evaluated for yellow rust disease, which led to the identification of seven resistant landrace accessions with a disease severity of 20% or less in screening across two locations, Punjab Agricultural University, India and Toluca, Mexico (Supplementary Table 4).

Characterization of rare alleles

Marker alleles with a minor allele frequency (MAF) of less than 0.05 were considered rare alleles.
Of the 20,526 SNPs, 8,539 had frequencies in the range [0, 0.05), and 7,775 alleles had a frequency within (0, 0.05); the latter were considered rare alleles (Supplementary Figure 7). The total numbers of rare alleles in the complete and core reference sets were 7,775 (18.94%) and 6,876 (16.74%), respectively. Comparative analysis of the two populations revealed that the core reference set captured 88.75% of the rare alleles of the complete set (Table 3). The average number of rare alleles per accession was estimated for each of the 15 genetic groups. The highest number of rare alleles per accession occurred in the genetic groups belonging to Guanajuato (Group 12), followed by Durango (Group 5), Chihuahua (Group 14), Oaxaca (Group 8) and Michoacán (Group 13), as shown in Table 4 and Fig. 4. Interestingly, a very high percentage of unique rare alleles was observed in the landrace accessions of Michoacán (Group 13). The rare allele analysis carried out to identify fixed (non-segregating) marker alleles revealed genomic regions dispersed throughout the twenty-one chromosomes (Supplementary Figure 8). The maximum number of fixed alleles was found in accessions from Chihuahua, followed by those from the central valley; the marker alleles are listed in Supplementary Table 5. Clustering based on longitude, latitude and altitude grouped the Chihuahua accessions into one cluster, which also corresponded to genetic groups 6 and 14 (Supplementary Table 6).
Discussion
The Green Revolution has been associated with the replacement of traditional varieties and landraces by high-yielding, input-responsive, semi-dwarf varieties of cereal crops 29,30, which significantly contributed to global food security. However, this also led to increased monoculture and the depletion of on-farm varietal diversity, which today poses a serious threat to food production under climate change scenarios. Reinforcement of genetic variation from underutilized landraces into modern varieties should provide impetus to achieve the targets of proposed climate-smart agriculture 5. Efforts made in the present study are an example of large-scale characterization of germplasm that has remained unexploited in gene banks. At the global level, similar approaches for mobilizing gene bank and on-farm diversity into breeding pipelines will make it possible to accomplish the goals of the FAO-CSA initiative. Mexican wheat landraces have been used by researchers in breeding for abiotic stress tolerance and grain quality improvement. Small sets were formed based on the available information (e.g. collection site), and therefore only 25% of the entire Mexican landrace collection was utilized [31][32][33]; three-fourths of the collection remained uncharacterized. The present study provides a thorough genetic understanding of Mexican wheat landraces for researchers intending to utilize them. PCA, genetic classification and cluster analysis showed a broad differentiation of landraces belonging to northern and southern Mexico, while some of the groups were adapted to specific regions (Figs 1 and 2). The broadly distributed landraces therefore have more potential for adaptation to a wide range of environments compared to the landraces that are adapted to specific ecosystems.
Table 3. Summary statistics of the complete set and the core reference set of Mexican wheat landraces. Rare CS allele: rare allele of the complete set; MAF: minor allele frequency.
In contrast, some other landrace groups showed different patterns, for example, the regional groups from Oaxaca, Guanajuato, Michoacán, Durango, Coahuila and Chihuahua (Fig. 2). For example, Group 3 has accessions from the extreme north and south of Mexico, whereas Group 8 represents only Oaxaca, a southern province, yet the diversity index of Group 8 was significantly higher than that of Group 3. The above-mentioned distinct groups belonging to specific regions showed a relatively higher number of rare alleles per accession compared to the other groups (Table 4). Possible reasons for their adaptation could be either their early introduction or the development of strong environmental imprints. One of the genetic groups representing a specific collection site in Mexico (Michoacán) showed an exceptionally high frequency of unique rare alleles (Fig. 4). Temperature and rainfall were at the higher end of the scale for this site, representing a unique climatic regime. The unique allelic diversity of the Michoacán landraces (Group 13) may be associated with their adaptation to this peculiar climate. Landraces from Durango (Group 5), a region characterized by very high annual average temperatures and low precipitation, had a very high number of rare alleles per accession. Combinations of different rare alleles may contribute to the adaptation of landraces to dry climates where heat stress and droughts are frequent. An intensive analysis of these two groups (Groups 5 and 13) could unveil the relevance of allelic diversity to climatic adaptation. Additionally, landrace accessions adapted to specific environments could be utilized efficiently in developing varieties for similar target ecosystems. Sources of tolerance to heat stress as well as other stresses have been identified from these landraces and are being used in large-scale pre-breeding efforts (Sukhwinder Singh, CIMMYT, unpublished).
Although Mexico is not the center of origin for wheat, its landraces are adapted to a wide range of climatic regimes, which renders them fit candidates for climate-resilient wheat genetic improvement. Further, comparison of their diversity profile with landraces from the Fertile Crescent and Southern Europe will generate useful information regarding their evolutionary history and practical deployment in climate-resilience breeding.
Another interesting pattern was observed in landraces from Chihuahua, which is situated in the extreme north of the country. The maximum number of accessions with fixed marker alleles was found in Chihuahua, followed by the Central Valley (Supplementary Figure 8). Gene flow analysis further indicated their divergence; however, a definitive test would be needed to confirm this (Supplementary Figure 9). There are two possibilities: either wheat landraces from different sources were introduced to Chihuahua and the Central Valley, or the genomic regions harboring such alleles underwent natural selection. Since wheat was introduced into Mexico by the Spanish almost 400-500 years ago, the former possibility seems more likely. However, an in-depth analysis of phylogenetic history would be needed to distinguish 'introduction' from 'selection imprint'. The adaptation of genetic groups to diverse climatic regions of Mexico, the diversity pattern of these genetic groups (reflected in their diversity indices) and the distribution of rare alleles provide first-hand information that will allow wheat researchers worldwide to efficiently utilize Mexican landraces in breeding pipelines.
The systematic utilization of Mexican landrace diversity requires a manageable representative germplasm set. Natural, non-elite landrace populations are known to harbor agronomically important alleles at very low frequencies 13. Therefore, a core reference set of natural populations that maximizes rare alleles will likely unveil genetic variation useful for crop improvement. In the past, wheat core reference sets have been developed using one variable at a time 23,24, resulting in a significant reduction of rare allele diversity 24. In this study, a unique core reference set development strategy was followed, i.e., using an integrated data matrix of both continuous and discrete variables (phenotypic and genotypic), thereby maximizing the overall diversity available for analysis. In this strategy, we reduced the dimensions of the marker data to 2,000 principal components explaining 84% of the total variance and then merged them with 23 phenotypic variables to form a data matrix in such a way that the genotypic and phenotypic contributions were 75% and 25%, respectively. This ratio was chosen to enhance the role of phenotypic variation in core development; the detailed methodology is explained in the 'Online Methods' section. Wingen et al. 24 concluded that the core reference set strategy is not useful for discovering very rare alleles, perhaps due to sample size or a low-throughput marker platform. In contrast, we report a significant enrichment of rare alleles in the core reference set: the percentage of rare alleles in the original landrace population was 18.94%, whereas it was 16.74% in the core reference set, corresponding to an 88.75% rare allele recovery in the latter. Furthermore, the frequencies of 158 rare alleles (in the complete set) rose above 0.05 in the core reference set (Supplementary Figure 5).
This enrichment of rare alleles was due to the large base population (7,986 genotypes) and high-throughput SNP data (>20,000 markers), coupled with a unique core reference set development strategy that simultaneously uses multiple variables. The minor allele frequency (MAF), geographical representation and phenotypic variance of the core reference set were comparable to those of the complete set (Supplementary Figure 6, Table 3 and Supplementary Table 3).
The core reference set, enriched with rare alleles, is a valuable germplasm resource for trait dissection and gene discovery. This representative population can be efficiently utilized by wheat researchers globally; it has already been distributed to researchers in Africa, South Asia, and the USA. Its evaluation for yellow rust disease in two geographically divergent environments, India and Mexico, has identified resistant genotypes (Supplementary Table 4). The information presented in this study about Mexican wheat landraces will serve as a high-value resource base for wheat breeders worldwide to develop high-yielding and climate-resilient next-generation wheat varieties.
Scientific Reports | 6:23092 | DOI: 10.1038/srep23092
Online Methods. Description of plant material. Wheat landraces used in this study were collected from Mexico. A total of 9,811 Mexican wheat landraces were collected from 16 Mexican states as part of a project sponsored by Mexico's National Commission for the Study and Use of Biodiversity 21. After eliminating duplicate accessions based on phenotypic information, a total of 8,416 Mexican wheat landraces, including hexaploid and diploid accessions as well as other species, were used in the analysis. A total of 7,986 hexaploid landraces were used for the genetic analysis; their details are provided in Supplementary Table 7.
Genotypic analysis of Mexican wheat landraces. Seed of a single plant from each accession was used for DNA extraction, and seed of the same plant was used for the phenotypic evaluation. Extraction of genomic DNA was carried out by a modified CTAB (cetyltrimethylammonium bromide) method 34, followed by quantification using a NanoDrop 8000 spectrophotometer (V 2.1.0). Genotyping was performed through DArT-seq GBS technology (DArTseq™) at DArT Pty Ltd, Canberra, Australia 35. This method follows a two-step complexity reduction. Two adapter/enzyme combinations, PstI_ad/TaqI/HpaII and PstI_ad/TaqI/HhaI_ad, along with the TaqI restriction enzyme, were used to eliminate subsets of PstI-HpaII and PstI-HhaI fragments, respectively. To encode the DNA samples in plates, the PstI-specific adapters ligated to the small restriction fragments were tagged with 96 different barcodes. These PstI adapters carried a sequencing primer, and the tags generated after sequencing were read with the help of the PstI sites. Restriction products were amplified and quality-checked, and then all 96 samples in a plate were pooled. Pooled DNA was run in a single lane on an Illumina HiSeq 2000 instrument for sequencing. To obtain the DArT score tables and SNP tables, a proprietary analytical pipeline developed by DArT P/L was used.
Marker data were filtered based on reproducibility, call rate and average read depth using the pipeline. Reproducibility was determined by assaying approximately 60% of the samples twice. The minimum threshold values for completeness, reproducibility and call rate were 50%, 95% and 85%, respectively, and the average read depth was 7. Variants were called within the data by clustering sequences by sequence similarity, thereby avoiding the use of an external reference genome. In this approach, either the most common sequence in the population or a wheat sequence previously recorded by the DArT genotyping protocol was considered the reference. This approach of recalling GBS samples has been used in recent wheat studies with DArT-seq markers 20.
A total of 20,526 SNPs were recalled from the raw GBS data (Supplementary Data File). These markers had varying numbers of missing scores (Supplementary Figure 10). Of these, 20,039 markers with less than 50% missing scores were used in the analysis. Stringency was kept at this level to (1) include a high number of minor/rare alleles in the analysis, and (2) minimize the risk of under-representing a genomic region, because the chromosome locations of all GBS tags were not known. Missing scores could represent biological data points (presence and absence variations, PAV). The 20,039 SNP markers were used in the diversity analysis, in core reference set development and in marker-trait association analysis. For determining the genomic positions of markers, two different consensus genetic linkage maps were consulted (20; DArT, Australia, unpublished). In order to estimate diversity indices and genetic differentiation (Fst values), we selected 301 markers spaced as equally as possible along the chromosomes (Supplementary Figure 11). Selection was performed in the following steps: (1) calculate the percentage of markers per chromosome and determine how many of the 301 markers to allocate so as to preserve the representation of each chromosome; (2) calculate the distance between the first and last marker in each chromosome and divide this distance by the number of markers obtained in step 1; (3) starting from the first marker, calculate the points on the chromosome that give equally spaced markers; and (4) identify the marker closest to each point and include it in the sample. This analysis was performed using a custom code in SAS v9.4. The map in Fig. 2 was made using ESRI's ArcGIS Desktop ArcMap 10.2.2 software 36. The dataset used to make the maps of rainfall and temperature was WorldClim 1.4 37,38.
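The four-step marker-spacing procedure was run as custom SAS code; a hypothetical Python equivalent of the same steps (function name and data layout are assumptions) might look like:

```python
import numpy as np

def select_spaced_markers(positions_by_chrom, total=301):
    """Four-step selection of ~equally spaced markers.
    positions_by_chrom: {chrom: sorted array of marker positions (cM)}.
    Returns {chrom: selected positions}."""
    counts = {c: len(p) for c, p in positions_by_chrom.items()}
    n_all = sum(counts.values())
    selected = {}
    for c, pos in positions_by_chrom.items():
        pos = np.asarray(pos, dtype=float)
        # Step 1: allocate markers proportionally to chromosome representation
        k = max(1, round(total * counts[c] / n_all))
        # Steps 2-3: equally spaced target points between first and last marker
        targets = np.linspace(pos[0], pos[-1], k)
        # Step 4: pick the marker closest to each target (deduplicated)
        idx = sorted({int(np.argmin(np.abs(pos - t))) for t in targets})
        selected[c] = pos[idx]
    return selected
```

Deduplicating step 4 means dense chromosomes can return slightly fewer than their allocation when two targets share a nearest marker, which is the usual trade-off of nearest-marker snapping.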
Long-term monthly average grids for rainfall and minimum and maximum temperature at 30 arc-second resolution (ca. 1 km) were used to generate annual average grids for rainfall and average temperature, which were subsequently mapped. Geo-referenced locations of origin of the landraces were converted to vector data and integrated into the maps. Trials under heat stress were planted in April 2012 so that they were exposed to high-temperature stress at anthesis. Each trial was conducted as an augmented design with 0.3 m² plot size; the check varieties 'Vorobey' and 'Baj' were repeated multiple times in the experiment. Appropriate measures were used to fertilize and to control weeds, diseases and pests. In the well-irrigated trial, the plots were watered so that approximately 600 mm of water was applied during the complete wheat cycle; irrigation was provided whenever approximately 50% of available soil moisture was depleted according to gravimetric scales. Approximately 200 mm of total soil moisture was available during the growing season for the drought treatment. The heat stress treatment was watered the same as the irrigated treatment to avoid the confounding effect of drought.
Seeds for the grain quality analysis were obtained from the irrigated treatment; grain morphological characteristics were evaluated with the digital image system SeedCount SC5000 (Next Instruments, Australia), and thousand-kernel weight (TKW, g), test weight (TW, kg/hl), average grain length and width (mm), as well as the percentage of grains affected by yellow berry (%), were determined. Grain size distribution was measured using sieves of 2.8 mm (Screen 1), 2.5 mm (Screen 2) and 2.2 mm (Screen 3). Near-infrared spectroscopy (NIRS, Antaris II FT-Analyzer, Thermo Scientific, USA) was used to determine grain hardness (GH, %) and grain protein content (GP, %). The near-infrared spectroscopy instrument was calibrated based on AACC methods 39 for particle size index (AACC Method 55-30) and protein (AACC Method 46-11A). Grain protein was adjusted to 12.5% moisture content. Whole-meal flour samples were obtained using a UDY Cyclone mill (0.5 mm sieve). Only one gram of whole-meal flour was used to perform the SDS-sedimentation (SDS, ml) test 40.
Phenotypic screening of the Mexican core reference set for yellow rust. Evaluation of the core reference set for yellow rust was performed under field conditions at CIMMYT's experiment station in Toluca, Mexico (May 2014 to October 2014) and at Punjab Agriculture University, India (October 2014 to March 2015). Accessions were planted in a randomized complete block design with two replicates, in plots consisting of two one-meter rows. Approximately 60-70 seeds were sown in each plot. The yellow rust susceptible variety Avocet was planted around the whole experimental block. Inoculation of spreader/border rows was done according to the method explained by Hao et al. 41. Disease assessment was performed when the susceptible check variety Avocet showed 100% yellow rust severity (during the mid-dough stage of plant growth). Percent disease severity was assessed according to the modified Cobb's scale 42.
Cluster and diversity analysis of Mexican landrace accessions. A three-step approach was followed for classifying the Mexican wheat landraces. First, we performed principal component analysis (PCA) using only genetic data from all accessions, including hexaploid, tetraploid and other species. Next, we conducted a genetic diversity analysis of just the hexaploid landraces to define their characteristics with respect to their geographic collection sites. Finally, we used classification to develop a core reference set of hexaploid accessions, combining different types of variables (genotypic and phenotypic) using Hierarchical Multiple Factor Analysis 43,44; Gower's distance was used as the measure of genotypic and phenotypic diversity 28.
Principal component analysis of Mexican landrace accessions.
Principal component analysis was done with the GBS data using the "princomp" function of R-project version 3.1.1 45 .
Cluster and diversity analysis of Mexican hexaploid wheat landrace accessions. Genetic distances were calculated using SAS PROC DISTANCE, and cluster analysis was done with the SAS PROC CLUSTER procedure of SAS v9.4 46. Classification of landrace accessions was done using hierarchical multiple-factor analysis (HMFA) in a step-wise manner; in every step, one group was split into two groups. We identified final working groups that represented the heterogeneity between individuals while remaining as homogeneous as possible given the geographical distribution. A simple matching coefficient transformed into squared Euclidean distance was used to measure the similarity between landraces, with the Euclidean distance given by $D_{EU} = \sqrt{\sum_{u}(X_u - Y_u)^2}$, where $D_{EU}$ is the Euclidean distance and $X_u$ and $Y_u$ are the scores of two different individuals at marker u. The genetic distance between two landraces x and y, denoted d(x, y), was calculated as one minus the ratio between the number of matches (M) and the total number of non-missing pairs (N), that is, $d(x, y) = 1 - M/N$. A complete-linkage algorithm was used for clustering, estimating the distance between groups from the greatest distance between any two individuals in different groups; the distance between groups a and b, denoted d(a, b), was estimated as $d(a, b) = \max_{i,j} d(x_{ai}, x_{bj})$, where $x_{ai}$ represents individual i in group a and $x_{bj}$ represents individual j in group b. Allele frequencies were determined with TASSEL version 5 47. Nei's and Shannon's indices were calculated to estimate the extent of diversity within the entire population, within each group, and within the core reference set. For Nei's index, the following formula was used: $H = \frac{1}{t}\sum_{i=1}^{t}\left(1 - P_i^2 - (1 - P_i)^2\right)$, and Shannon's index was calculated as $I = -\frac{1}{t}\sum_{i=1}^{t}\left(P_i \ln P_i + (1 - P_i)\ln(1 - P_i)\right)$, where $P_i$ is the frequency of the major allele of the i-th marker and t is the number of markers.
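The matching-coefficient distance, the complete-linkage group distance, and the diversity indices described above can be sketched as follows. This is a minimal illustration (the study used SAS and TASSEL), and the index formulas assume the standard biallelic forms:

```python
import math
import numpy as np

def match_distance(x, y):
    """d(x, y) = 1 - M/N over non-missing marker pairs (NaN = missing)."""
    ok = ~(np.isnan(x) | np.isnan(y))
    return 1.0 - np.sum(x[ok] == y[ok]) / ok.sum()

def complete_linkage(group_a, group_b):
    """d(a, b) = max over all cross-group pairs (complete linkage)."""
    return max(match_distance(x, y) for x in group_a for y in group_b)

def nei_index(major_freqs):
    """Mean per-marker gene diversity 1 - P^2 - (1-P)^2."""
    return float(np.mean([1 - p**2 - (1 - p)**2 for p in major_freqs]))

def shannon_index(major_freqs):
    """Mean per-marker Shannon entropy over the two alleles."""
    def h(p):
        return 0.0 if p in (0.0, 1.0) else -(p*math.log(p) + (1-p)*math.log(1-p))
    return float(np.mean([h(p) for p in major_freqs]))
```

Both indices are maximized when the major allele frequency is 0.5, which is why groups dominated by near-fixed markers score low on either measure.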
For estimation of gene flow, the coefficient of gene differentiation was calculated as $G_{ST} = (H_T - H_S)/H_T$ 49, where $H_T$ and $H_S$ are the total and within-subpopulation gene diversities; calculations were performed using Genepop software 50. We also performed correlation analysis between sample size and diversity indices to see the effect of sample size on the diversity statistics, and concluded that in our study sample sizes do not significantly affect the diversity indices (Supplementary Figure 12).
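Nei's G_ST and a gene-flow estimate reduce to two one-line functions. The island-model Nm formula below is a common approximation and an assumption here; the study computed these quantities with Genepop:

```python
def g_st(h_t, h_s):
    """Nei's coefficient of gene differentiation: G_ST = (H_T - H_S) / H_T,
    with H_T the total and H_S the mean within-subpopulation gene diversity."""
    return (h_t - h_s) / h_t

def gene_flow(gst):
    """Island-model approximation Nm = (1 - G_ST) / (4 * G_ST) (assumed form)."""
    return (1.0 - gst) / (4.0 * gst)
```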
Rare allele characterization in different populations. Rare alleles were defined for different purposes with respect to different populations and are presented in three qualitative ways: (1) rare alleles per accession, (2) unique rare alleles and (3) fixed rare alleles in accessions belonging to specific regions.
(1) For comparing allele richness in the core and complete sets, the complete set of Mexican landraces (7,986 individuals) was considered as a population, and rare alleles were determined with respect to this population. The richness of these rare alleles was then observed in the core set population comprising 1,133 individuals (Table 3, Supplementary Figure 5). (2) Secondly, rare alleles were estimated in each of the genetic groups: each group, defined on the basis of the genetic similarity matrix, was considered as a population, and rare alleles were defined with respect to that population. Further, in order to describe rare alleles in a less biased way, we estimated the 'unique rare alleles', a qualitative measure that is not affected by group size (Fig. 4). (3) Finally, to identify alleles fixed in groups (based on the genetic similarity matrix or geography), we first selected the markers with at least 60% of data points showing an allele frequency of less than 0.05. Clusters were formed using latitude, longitude and altitude information with the k-means method, following Euclidean distance and the least squares method. A table with a cross-classification between the clusters based on geographical data and the 15 genetic groups was created (Supplementary Table 6). Allele frequency was analyzed in the environmental clusters to identify alleles fixed in particular locations. Markers with alleles fixed in particular locations (Chihuahua and the Central Valley) were then mapped to the wheat genome using Jim Kent's BLAST-like alignment tool (BLAT) 51. Version 2.26 of the International Wheat Genome Sequencing Consortium assembly was downloaded from the Gramene website 52. All markers were aligned to each of the chromosomes and filtered using the highest match percentage; markers showing identity of 95% and above were retained. Based on this information, genomic regions were determined and are presented in Supplementary Figure 8.
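The geographic clustering step (k-means on latitude, longitude and altitude with Euclidean distance and least squares) can be sketched in a few lines. The coordinates in the usage example and the farthest-first initialization are illustrative assumptions:

```python
import numpy as np

def kmeans_geo(coords, k, n_iter=100):
    """Plain k-means (Euclidean distance, least squares) on standardized
    latitude/longitude/altitude coordinates; farthest-first initialization
    keeps the sketch deterministic."""
    z = (coords - coords.mean(0)) / coords.std(0)   # put the three axes on one scale
    centers = [z[0]]
    while len(centers) < k:
        d = np.min([((z - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(z[int(np.argmax(d))])
    centers = np.array(centers)
    for _ in range(n_iter):
        labels = np.argmin(((z[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        new = np.array([z[labels == j].mean(0) if (labels == j).any() else centers[j]
                        for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels
```

Standardizing the three axes first matters: altitude in meters would otherwise dominate the degree-scaled latitude and longitude in the least-squares objective.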
Simulation of optimum population size for core set selection. The optimum size of the core reference set was determined through Monte Carlo simulation with 1,000 replications. Twenty levels of population size (5-100%, with a step size of 5%) were randomly sampled from the whole population. For each level of sub-population size, the genetic variance was calculated using $f_{ST} = \frac{var(p)}{\bar{p}\,\bar{q}}$, where $f_{ST}$ is the statistic used to test for allele frequency differences among populations; p and q are allele frequencies satisfying p + q = 1; var(p) is the variance of allele frequency p; n is the number of populations, equal to the number of simulations in this study; $\bar{p}$ and $\bar{q}$ are the mean allele frequencies of p and q across simulation runs; and $\sigma_G^2$ is the genetic variance 53. A core reference set containing approximately 15% of the complete set reached the highest genetic variance among the twenty sub-population sizes (Supplementary Figure 13). Therefore, 1,133 landrace accessions (14.18%) out of the 7,986 accessions in the complete population were selected as the core set.
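The Monte Carlo sweep over sampling fractions can be sketched as below. The genotype matrix, the run count, and the use of the mean per-marker $f_{ST} = var(p)/\bar{p}\bar{q}$ as the summarized criterion are illustrative assumptions, not the study's exact computation:

```python
import numpy as np

def fst_across_runs(freqs):
    """Wright's statistic per marker across simulation runs:
    f_ST = var(p) / (p_bar * q_bar), with p the allele frequency in one run."""
    p_bar = freqs.mean(0)
    var_p = freqs.var(0)
    with np.errstate(invalid="ignore", divide="ignore"):
        f = var_p / (p_bar * (1.0 - p_bar))
    return np.nan_to_num(f)  # monomorphic markers contribute 0

def simulate_core_sizes(genotypes, levels, n_runs=200, seed=1):
    """For each sampling fraction, draw n_runs random subsets (without
    replacement) and summarize the mean per-marker f_ST, a stand-in for
    the genetic-variance criterion used to pick the core size."""
    rng = np.random.default_rng(seed)
    n = len(genotypes)
    out = {}
    for frac in levels:
        k = max(2, int(round(frac * n)))
        runs = np.stack([genotypes[rng.choice(n, k, replace=False)].mean(0)
                         for _ in range(n_runs)])
        out[frac] = float(fst_across_runs(runs).mean())
    return out
```

As expected for sampling without replacement, the between-run variance of allele frequencies shrinks as the fraction grows and vanishes at 100%, so the curve over fractions is what the simulation trades off against representativeness.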
Classification of Mexican landraces for core reference set selection. To select the core reference set, Mexican landraces were classified using a data matrix containing genotypic and phenotypic variables. We first used PCA on the marker data to reduce its dimensions to 2,000 principal components (from 20,039 markers) that explained 84% of the total variance. Thereafter, the scores of the 2,000 PCA components and the 23 phenotypic variables were merged to form a data matrix that was then used in an HMFA 43,44. Six principal axes (PA) were selected from the HMFA in such a way that the genotypic and phenotypic contributions to the total variance explained by the six axes were 75% and 25%, respectively. Using this data matrix, we estimated the genotype and phenotype contributions for principal axes 1 to 25: the first axis corresponded to a phenotype:genotype variability ratio of 50:50, whereas the 25th corresponded to 8:92, and six PA together explained 25% phenotype and 75% genotype variability. Similarly, another landrace data set (2,403 Iranian landraces) was analyzed, which also suggested that six PA explain 30% phenotype and 70% genotype variation (CIMMYT, unpublished data). The total number of PC dimensions representing the 20,526 markers was 2,000, whereas only 23 phenotype variables were present; since the ratio of genotypic to phenotypic variables was thus more than a thousand to one, a 25% phenotype contribution was chosen to give weight to phenotype and achieve some equilibrium. The 75:25 (genotype:phenotype) ratio was therefore not an arbitrary criterion but was based on two independent genetic populations (Mexican and Iranian wheat landraces). For the HMFA analysis, the FactoMineR library was used in R software 54.
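The 75:25 weighting of genotypic and phenotypic blocks can be illustrated with a simplified stand-in for the HMFA step, rescaling each standardized block so that it contributes a fixed share of the total variance. The function name and the variance-share scheme are assumptions, not FactoMineR's algorithm:

```python
import numpy as np

def integrated_matrix(marker_scores, phenotypes, geno_weight=0.75):
    """Concatenate (already PCA-reduced) marker scores with phenotypic
    variables, rescaling each standardized block so the two blocks
    contribute geno_weight : (1 - geno_weight) of the total variance."""
    def zscore(m):
        return (m - m.mean(0)) / m.std(0)
    g = zscore(np.asarray(marker_scores, dtype=float))
    p = zscore(np.asarray(phenotypes, dtype=float))
    # after z-scoring every column has unit variance; rescale so each
    # block's summed variance equals its target share
    g = g * np.sqrt(geno_weight / g.shape[1])
    p = p * np.sqrt((1.0 - geno_weight) / p.shape[1])
    return np.hstack([g, p])
```

Without such block rescaling, 2,000 genotypic columns would swamp the 23 phenotypic ones in any subsequent distance or factor analysis, which is the imbalance the 75:25 choice addresses.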
The coordinates for each accession on each PA were then used to group accessions following the mixture-of-normal-distributions methodology, and the optimum number of groups was selected by maximum likelihood 27. A total of 21 groups were defined. A predefined number of accessions was selected from each group; the number of accessions per group was estimated by the D-method, proportional to group diversity 55. Following the stratified random sampling method, 1,000 candidate reference sets were created and their diversity was estimated using the average Gower's distance 28; the subset showing the maximum average Gower's distance was selected as the core reference set. Analyses were performed using different libraries from the open-source software R 45. To visualize whether the core reference set is representative of the complete set, we used a graphical multidimensional scaling (MDS) method, and we also determined the variance of the phenotypic traits in both populations.
Metacognitive skill on students of science education study program: Evaluation from answering biological questions
Mochammad Yasir a,1,*, Aida Fikriyah a,2, Nur Qomaria a,3, Aushia Tanzih Al Haq b,4 a Department of Science Education, University of Trunojoyo Madura, Telang Street PO Box 2 Kamal, Bangkalan City 69162, Indonesia b Department of Life Science, National Central University, Number 300, Zhongda Road, Zhongli District, Taoyuan City 32001, Taiwan 1 yasir@trunojoyo.ac.id *; 2 aida.fikriyah@trunojoyo.ac.id ; 3 nur.qomaria@trunojoyo.ac.id ; 4 aushia.tanzia@nhri.edu.tw * Corresponding author
INTRODUCTION
In the current era, teachers are required not only to deliver learning materials but also to empower various 21st-Century competencies (Docherty, 2018; Serdyukov, 2017; Wilson & Bai, 2010). Teachers must know and understand the various thinking skills that form the foundation of 21st-Century skills, as well as how to empower them (AACTE, 2010; Darling-Hammond, Flook, Cook-Harvey, Barron, & Osher, 2019). Among these thinking skills, metacognitive skills are essential skills that support and relate to the others (Blummer & Kenton, 2014; Chauhan & Singh, 2014; Demirel, Aşkın, & Yağcı, 2015).
The empowerment of metacognitive skills is seen as urgent for several reasons. First, metacognition is closely related to a student's ability to deal with problems while learning (Chauhan & Singh, 2014). Second, metacognition can support students in the problem-solving process (Persky, Medina, & Castleberry, 2018). Third, metacognition is related to cognitive and self-regulatory control abilities in students (Efklides, 2014). Moreover, this competency can maximize personal development, academic writing skills, and mastery of concepts (Sudarmin et al., 2016).
The teacher, as the main component in the learning process, is also expected to have good metacognitive skills. The better the teacher's metacognitive skills, the more optimal the empowerment of these competencies. Therefore, prospective teachers must have good metacognitive skills (Ahmad ). After all, a person cannot empower metacognitive skills well without first mastering these skills themselves (Bahri, Idris, Nurman, & Ristiana, 2019; Jiang, Ma, & Gao, 2016; Suratno et al., 2019).
Ethnoscience is one of the lectures that students in the Science Education study program have to take. This lecture discusses the learning process that links scientific knowledge, local culture, and indigenous science (Sudarmin et al., 2016). Ethnoscience is important to master since it supports the Curriculum 2013 and relates to 21st-Century skills (Sudarmin, 2014). The Curriculum 2013 encourages students to link scientific knowledge and culture (Kemendikbud, 2014). Indonesia has a diverse culture, but it has not been widely used as material for science learning. As a result, Indonesian culture and local wisdom are continually abandoned and forgotten by society, especially students. Therefore, pre-service science teachers need to master this subject so that they can transfer it to their students later.
During the ethnoscience learning process, one of the higher-order thinking skills that has to be taught to pre-service teachers is metacognitive skill. This skill is important to empower since it is used in analyzing and identifying various problems related to biological phenomena and also in solving those authentic problems (Chauhan & Singh, 2014; Persky, Medina, & Castleberry, 2018). Based on observations, students' metacognitive skill still needs to be improved during the learning process, especially in the aspects of planning, monitoring, and evaluating.
Responding to the importance of metacognitive skills, various studies examining metacognitive skills in Indonesia have been conducted. These reports found that certain forms of learning can improve this competency (Siregar, Susilo, & Suwono, 2017; Tamsyani, 2016). The development of modules and learning media has also been carried out by previous researchers to make the empowerment of metacognitive skills more effective (Dewi, Kannapiran, & Wibowo, 2018; Siagian, Saragih, & Sinaga, 2019). However, among the many studies that have been conducted, assessments of the metacognitive skill profile of prospective teacher students are still difficult to find. This difficulty is caused by the type of instrument used: generally, metacognitive skill is evaluated only with questionnaires, whereas metacognitive skill, as part of the cognitive domain, can also be analyzed and evaluated using a test instrument. This kind of research is important because it can provide valuable information for evaluating the quality of teacher education through a test instrument modified from Schraw and Dennison (1994). Therefore, the purpose of this study was to analyze the metacognitive skills of students majoring in science education in responding to biological questions.
METHOD
This research was a case study involving 110 students of the Science Education Study Program at the University of Trunojoyo Madura. The research subjects were students from classes A, B, and C in the 2018 academic year. Students were taught the Ethnoscience lecture for one semester, and their metacognitive skills were then evaluated. Data were collected through a Metacognitive Awareness Inventory (MAI) adapted from Schraw and Dennison (1994). This test instrument consisted of 15 essay question items asking about concepts of human genetics and their relationship with ethnicity and society on Madura, East Java, Indonesia.
The metacognitive test was arranged in several indicators and sub-indicators. The first indicator was declarative knowledge, with three sub-indicators: identifying the problem, analyzing prior knowledge to solve the problem, and examining one's own weaknesses and capabilities (items 1, 2, and 15). The second indicator addressed procedural knowledge, with two sub-indicators: giving alternative solutions to overcome the problem and providing steps or ways to solve the problem (items 6 and 7). The third indicator was conditional knowledge, with sub-indicators on deciding the best answer and giving reasons for choosing that answer (items 8 and 10). The fourth indicator was planning, with two sub-indicators: relating prior knowledge and new information to solve the problem, and arranging a plan to solve problems (items 3, 4, and 5). The fifth indicator was monitoring, with two sub-indicators: evaluating the formula used to solve the problem related to human genetics and analyzing the strategies used to correct the results (items 9, 11, 12, and 13). Finally, the last indicator was evaluation, with one sub-indicator: re-checking the assignments (items 13 and 14).
The obtained results were analyzed using a quantitative descriptive method. Metacognitive skill data obtained from the test scores were converted using Formula 1, based on Corebima (2009), where y1 is the cognitive test score, y2 is the combined cognitive and metacognitive test score, and x is the metacognitive skills score. Findings were also obtained from the metacognitive questionnaire, which used a Guttmann scale (Abdi, 2010) with scores of 1 and 0; the total score was then categorized based on the intervals in Table 1. Finally, the level of metacognitive skill was determined using the criteria in Table 2.
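Since Formula 1 and the intervals of Tables 1 and 2 are not reproduced in this text, the cut-offs below are hypothetical placeholders rather than the study's actual criteria. A minimal sketch of the Guttmann-scale scoring and level categorization described above might look like:

```python
# Hypothetical sketch of the scoring pipeline described in the Method section.
# The category thresholds are PLACEHOLDERS, not the actual intervals from
# Table 1 / Table 2 of the study (which are not reproduced here).

def guttmann_score(responses):
    """Sum Guttmann-scale items, each scored 1 (yes) or 0 (no)."""
    assert all(r in (0, 1) for r in responses), "items must be scored 0 or 1"
    return sum(responses)

def skill_level(percentage, thresholds=((85, "very well developed"),
                                        (70, "well developed"),
                                        (55, "developing"),
                                        (0, "not yet developed"))):
    """Map a percentage score to a qualitative level (placeholder cut-offs)."""
    for cutoff, label in thresholds:
        if percentage >= cutoff:
            return label

# Example: a student answering 11 of 15 items positively
raw = guttmann_score([1] * 11 + [0] * 4)
pct = 100 * raw / 15
print(round(pct, 2), skill_level(pct))  # 73.33 well developed
```

With the placeholder cut-offs, the study's reported averages (e.g., 72.93% overall, 88.30% for debugging strategies) would fall into the "well developed" and "very well developed" bands, which matches the qualitative labels the authors use; the real boundaries, however, must be taken from the original Tables 1 and 2.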
RESULTS AND DISCUSSION
Metacognition is an important competency that needs to be optimally empowered at all levels of education. The metacognitive skills of the science education students involved in this study are presented in Table 3, and the level of each component of their metacognition is presented in Figure 1. Based on Table 3, the metacognitive skills of students in classes A, B, and C have developed well, as shown by the average of 72.93%. Furthermore, each metacognitive skill component reached a score above 60 (Figure 1). The data in Figure 1 indicate that students' metacognitive skill has developed well in all components, as shown by the percentages for declarative knowledge (75.81%), procedural knowledge (71.46%), conditional knowledge (73.80%), planning (64.53%), information management strategies (67.60%), comprehension monitoring (65.78%), and evaluation (68.50%). Meanwhile, debugging strategies reached the highest percentage (88.30%), which means that this skill has developed very well. The good profile of metacognitive skills found in this study is not in line with some previous studies conducted in Indonesia, which have reported that the metacognition profile of students is still unsatisfactory, not only in middle school (Diella & Ardiansyah, 2017; Nurajizah & Windyariani, 2018; Tjalla & Putriyani, 2018) but also in higher education (Ahmad . Therefore, the findings of this study indicate that the learning process, especially in the Science Education study program, is conducted well in enhancing metacognitive skills. This supports the view that metacognitive skills develop well when they are trained continually during the learning process.
The difference between the results of research conducted in middle schools and the present research may be due to the difference in the education level of the research subjects. This statement is based on previous research reporting that education level is an essential factor contributing to student thinking skills (Coşkun, 2018). Education level has an impact on metacognitive skills because these competencies improve when students regularly use their cognition; therefore, the longer students are involved in education, the higher their metacognitive skills (Ahmad . The difference between the findings of the present research and previous research conducted in other higher education programs and institutions may be due to differences in the learning activities experienced by the research subjects. The good metacognitive skills profile of the science education students involved in this research indicates that the course activities held in this study program could empower metacognitive skills. Learning activities are known to be a main factor affecting students' metacognitive level (Aydin, 2011; Zohar & Barzilai, 2013).
However, the results of this study indicate that students' declarative knowledge skill has developed well but is not yet optimal. It is also assumed that students were not able to control their meta-comprehension skill, since they had difficulty monitoring themselves. If this ability is low, students will face difficulties in understanding concepts well (Sudarmin et al., 2016). This condition reveals that several students still found it difficult to face and analyze problems related to ethnoscience, especially concerning the concepts of human genetics and their relationship with ethnicity and society.
Furthermore, based on Figure 1, students' procedural knowledge skill has developed well. However, some of them still need assistance in applying cognitive strategies during the learning process. Procedural knowledge is related to the way of doing something. Several strategies can be implemented to raise metacognitive skills, such as getting students to identify what they already know and do not know, retell their thinking, arrange plans, identify questions, and evaluate themselves (Corebima, 2016; Sudarmin et al., 2016).
Conditional knowledge is the skill of deciding when declarative and procedural knowledge should be applied during the learning process. This study found that students' conditional knowledge has developed well. However, some were still not able to make informed decisions about learning strategies. Students should become accustomed to deciding which learning strategy to use, since this can enhance thinking skills, and the use of various learning strategies makes the learning process easier.
Planning is another metacognitive skill, related to arranging a plan for learning activities. The findings of this study showed that students were not able to prepare themselves before starting the learning process; most of them also did not state learning aims or manage their learning time. This skill is nevertheless very important to apply during the learning process, since it affects learning achievement.
Furthermore, information management strategies relate to the skill of analyzing and identifying ideas and using learning strategies to make information meaningful. This study found that students were accustomed to reading textbooks while learning. Another component of metacognitive skill is comprehension monitoring, which includes the evaluation of learning activities and strategies. Students should become accustomed to evaluating themselves, since this helps them understand which learning strategy best suits their needs and personality.
The last component is debugging strategies, in which the students in this study reached the highest score. It is assumed that students were able to revise wrong understandings and assignments. However, this should always be practiced during the learning process so that all components of metacognitive skill can develop very well.
To sum up, according to the findings obtained in this study, students' metacognitive skill reached 71.93%, meaning that it has developed well. However, the weaknesses in students' metacognitive skills found in this study should be addressed effectively by applying several learning strategies, for instance mind mapping or concept mapping. Metacognitive skill is not an inherited skill; it can be taught continually through active learning. Through various activities such as keeping a reflective journal, talking about thinking, and self-questioning, students are expected to enhance their metacognitive skills and apply them when they work and interact with society. Some references suggest active learning approaches such as mind mapping (Pedone, 2014), self-reflection activities (Colbert et al., 2015), and inquiry learning (Adnan & Bahri, 2018). Moreover, several learning models have also been confirmed to improve students' metacognitive skills, including project-based learning (Sumampouw, Rengkuan, Siswati, & Corebima, 2016) and problem-based learning (Haryani, Masfufah, Wijayati, & Kurniawan, 2018; Panchu, Bahuleyan, Seethalakshmi, & Thomas, 2016).
CONCLUSION
According to the findings of this study, it can be concluded that students' metacognitive skill has developed well, with an average of 71.93%. The students' metacognitive skills in every component also developed well (64.53-75.81%), except for debugging strategies, which developed excellently (88.30%). This study has examined the metacognitive skills profile of science education students, and the analysis shows that their metacognitive skills are good. However, this conclusion is based only on metacognition data collected using a single instrument; therefore, further research involving more than one type of metacognition instrument is needed. In addition, to confirm the effectiveness of lectures in science education in empowering metacognition, research examining the metacognition profile in various study programs needs to be conducted.
ACKNOWLEDGMENT
We would like to thank the Directorate of Research and Community Service of the University of Trunojoyo Madura, East Java, Indonesia, which fully supported this study. We are also immensely grateful to all colleagues from the Department of Science Education, University of Trunojoyo Madura, for their comments and suggestions on this manuscript; any remaining mistakes are our own and should not tarnish the reputations of these persons.
"year": 2020,
"sha1": "7a3b8867b8229f05fe55e17da8fa30a9400889e6",
"oa_license": "CCBYSA",
"oa_url": "https://ejournal.umm.ac.id/index.php/jpbi/article/download/10081/7702",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "d5ea723928e68299d27dce9150c62e2d3013455d",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Psychology"
]
} |
Sleep Disorders and Menopause
Sleep disorders are one of the main symptoms of menopause. The sleep complaints reported by menopausal women include difficulty falling asleep, frequent awakening, and/or early morning awakening. There are many possible causes of sleep disorders in postmenopausal women, including vasomotor symptoms, ovarian hormone changes, restless legs syndrome, periodic leg movement syndrome, and obstructive sleep apnea. In this review, we discuss the relationship between menopause and sleep disorders.
INTRODUCTION
Sleep is composed of rapid eye movement (REM) sleep, characterized by rapid eye movements and decreased muscle tone, and non-REM (NREM) sleep. Normally, sleep cycles between REM and NREM stages at intervals of about 90 to 120 minutes, with about 5 cycles per night. REM sleep accounts for 20% to 25% of total sleep and NREM sleep for 75% to 80% [1]. Sleep is controlled in two ways: by the homeostatic process and by the circadian process. The desire for sleep is an automatically controlled homeostatic process that depends on the amount of prior wakefulness: the longer people stay awake, the greater their sleep pressure. The circadian process is regulated by an endogenous circadian pacemaker; the suprachiasmatic nucleus, located in the hypothalamus, mainly controls the circadian rhythm. Complex interactions among neurotransmitters and interconnected neurons promote or suppress sleep and wakefulness [1]. Insomnia and fatigue are among the most common symptoms of postmenopausal women. Menopause is defined as the period after 1 year has elapsed since the last menstrual period [2]. However, hormonal changes begin to occur 7 to 10 years before menopause, leading to a decrease in estradiol and inhibin and an increase in follicle-stimulating hormone and luteinizing hormone [3,4]. As a result of these hormonal changes, women undergo physical and psychological changes, e.g., mood swings, anxiety, stress, forgetfulness, and sexual dysfunction [5,6]. Many women especially complain of sleep disorders at this time.
According to the Study of Women's Health Across the Nation (SWAN), the prevalence of sleep disorders increases with age. The reported prevalence is variable, ranging from 16% to 42% in premenopausal women, 39% to 47% in perimenopausal women, and 35% to 60% in postmenopausal women [7]. The sleep complaints of menopausal women include difficulty falling asleep, frequent awakening, and/or early morning awakening [8]. The etiology of sleep disorders in menopausal women is not yet clear and seems to differ according to the specific symptoms of the sleep disorder. Potential factors include menopause itself, aging, vasomotor symptoms, depression, anxiety, other medical conditions such as cardiovascular and endocrine disease, medications, and psychosocial factors [9].
Vasomotor symptoms
Vasomotor symptoms are the most common menopausal symptoms, reported by 75% to 85% of postmenopausal women. Hot flashes are defined as a sudden sense of body heat or redness around the face and neck, often accompanied by sweating and tachycardia. Most symptoms last for 1 to 2 years after menopause but rarely persist for more than 10 years. Although many perimenopausal and postmenopausal women experience vasomotor symptoms, the cause is not clear; however, there is evidence that vasomotor symptoms are caused by estrogen withdrawal [10]. Some herbal treatments for menopausal symptoms contain hops (Humulus lupulus L.), whose components have high estrogenic potency. Although the mechanisms by which hops relieve menopausal symptoms are not clearly understood, preparations based on hops have been found to decrease the severity and frequency of hot flashes [11].
In some studies, vasomotor symptoms of perimenopausal and postmenopausal women have been reported as one of the causes of sleep disturbances. A recent study has shown that vasomotor symptoms were related to poor sleep quality [12]. Other studies have supported this finding by suggesting that hormone therapy improves sleep quality [13]. However, some argue that there is no close relationship between vasomotor symptoms and sleep disorders. Young et al. [14] showed that menopause was not a strong predictor of sleep disorders, although perimenopausal and postmenopausal women have lower quality sleep than premenopausal women.
Hormone change (estrogen/progesterone)
Ovarian hormones have been reported to affect sleep. Progesterone has both sedative and anxiolytic properties; it stimulates the NREM-associated gamma-aminobutyric acid receptors by acting on benzodiazepine receptor sites [15]. In addition, progesterone acts as a respiratory stimulant and has been used to treat mild obstructive sleep apnea (OSA) [16]. The effect of estrogen on sleep structure is complex, as estrogen has a wide range of effects that potentially influence sleep. First, it is associated with the metabolism of norepinephrine, serotonin, and acetylcholine, neurotransmitters that affect sleep patterns. Estrogen has been shown to decrease sleep latency, the number of awakenings after sleep onset, and cyclic spontaneous arousals, and to increase total sleep time [17,18]. Second, estrogen has a regulating effect on body temperature: during the night, it plays a role in keeping the core body temperature low [19,20]. In mammals, estrogen regulates the timing of the nightly body temperature nadir; when estrogen decreases, this timing shifts forward and the depth of the temperature drop changes [19]. Third, estrogen has a direct effect on mood by influencing norepinephrine activity and serotonin response and uptake in the brain. Taken together, these effects suggest that estrogen has an antidepressant action [20,21].
Treating menopausal symptoms earlier in the menopausal period with estrogen or estrogen-progesterone therapy has a more beneficial effect in improving menopausal symptoms. Estrogen therapy is very effective in treating vasomotor symptoms, which improves sleep quality, and hormone replacement therapy is one of the main therapies for osteoporosis, mood disorders, and depression [22,23]. Studies have shown that estrogen replacement therapy improves sleep quality, makes falling asleep easier, decreases nighttime wakefulness, and also reduces vasomotor symptoms [13]. Therefore, hormone replacement therapy is recommended for menopausal insomnia to improve the quality of sleep and life. For those starting hormone replacement therapy, low-dose estradiol is more suitable than conjugated estrogen.
Melatonin
Melatonin plays a major role in circadian rhythm, especially in sleep onset and sleep maintenance, by blocking arousal mechanisms; these effects help to keep humans asleep at night. However, the relationship between melatonin and menopause is still unclear. Melatonin levels decline with the aging process but are not always associated with menopause; they decrease with aging before menopause but then increase again over the years. In some studies, melatonin levels in postmenopausal women with insomnia were lower than those in premenopausal women [24]. Recently, more potent melatonin analogs (selective melatonin-1 [MT-1] and melatonin-2 [MT-2] receptor agonists) with extended effects, as well as slow-release melatonin preparations, have been developed [25,26]. The MT-1 and MT-2 melatonin receptor agonist ramelteon [27,28] was found to improve total sleep time and sleep efficiency and to reduce sleep-onset latency in patients with insomnia [29]. The melatonergic antidepressant agomelatine, a strong MT-1 and MT-2 melatonergic agonist and a relatively weak serotonin 5-hydroxytryptamine (2C) receptor antagonist [30,31], has been shown to be effective in treating depression accompanied by insomnia. In short, melatonin compounds can be useful in the treatment of insomnia [32][33][34][35][36][37][38][39][40].
Mood disorders
Mood disorders such as anxiety and depression are associated with sleep disorders in postmenopausal women [41]. Difficulty falling asleep leads to anxiety, irritability, and inadequate sleep, and possibly to depression [42]; conversely, insomnia is one of the main causes of depression. In addition, women with hot flushes are more likely to develop depression, and women with both depression and hot flushes have lower sleep quality than women without depression. Consequently, depression and hot flushes may have additive effects on sleep patterns.
Obstructive sleep apnea
The incidence of OSA is significantly increased in postmenopausal women: in some studies, 47% to 67% of postmenopausal women are reported to suffer from OSA [43,44]. Women tend to gain weight after menopause, which leads to increased neck circumference, body mass index, and waist-to-hip ratio [43][44][45][46]. Thus, the upper airway changes anatomically after menopause and causes problems such as OSA during sleep [15,47,48]; this may explain why the prevalence of OSA in postmenopausal women is higher than in premenopausal women [46]. However, body weight is not the only causative factor; some studies have reported that testosterone aggravates OSA [49]. The primary treatment for OSA is positive airway pressure. The effect of hormone therapy, such as estrogen and progesterone, is controversial: while some studies show improvement in symptoms [50,51], others do not [52].
Restless leg syndrome and periodic leg movement syndrome
Restless legs syndrome (RLS) is a disorder that causes an urge to move the legs, accompanied by an uncomfortable sensation. Although RLS is not associated with menopause or hormone therapy, its prevalence seems to increase with age [53]. The etiology is unclear but is known to be related to iron deficiency anemia, pregnancy, and uremia [46,54]. Periodic limb movement disorder (PLMD) is repetitive cramping or jerking of the legs during sleep, occurring about every 20 to 40 seconds. PLMD is considered to disrupt sleep and cause arousals. The most reliable treatment for both conditions is a dopamine agonist; neither clearly responds to hormone therapy [55].
CONCLUSION
We have reviewed sleep disorders in postmenopausal women, presenting several factors and changes that affect women during the menopause transition and their effects on sleep. Because the etiology of sleep disorders in menopause is multifactorial, they cannot be regarded simply as a part of aging. Although the cause of menopausal sleep disorders is not clear, some studies have reported that hormone therapy improves sleep quality, and it is considered a primary treatment if other causes are excluded. Despite the effectiveness of hormone therapy, sleep disorders may also be associated with limb movement syndromes, depression, anxiety, and other conditions. Therefore, to manage sleep disorders in menopausal women effectively, it is helpful to evaluate their cause. Further investigation by randomized controlled trials is needed to assess the efficacy of these treatments in postmenopausal women.
"year": 2019,
"sha1": "a3d30d1ee5e00e6f9f852809c0cf9c59eeb57b97",
"oa_license": "CCBYNC",
"oa_url": "http://e-jmm.org/Synapse/Data/PDFData/3165JMM/jmm-25-83.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "ea85c2e3198e7706ab8423ecfdb05b358f159542",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Evaluation of Pole-type French bean (Phaseolus vulgaris L.) Genotypes for Agro-Morphological Variability and Yield in the Mid-hills of Nepal
Knowledge of genetic diversity is crucial to assess the variability of genotypes and their potential use in crop improvement programs. The present experiment was conducted at Horticulture Research Station (HRS), Dailekh (1300 masl) for three years during 2016-2018 to study the agro-morphological variability and performance of six genotypes of French bean (Phaseolus vulgaris L.) for pod and seed yield. The genotypes, viz. Bhatte, Chaumase, Dhankute Chirrke, WP Con Bean, White OP, and Trishuli, were tested in a Randomized Complete Block (RCB) design with four replications. Observations were recorded on 14 qualitative and 12 quantitative traits. Among the qualitative traits, the French bean genotypes showed variability with respect to stem pigmentation, leaf color, leaflet shape, stem hairiness, flower color, pod color, pod shape, pod cross-section, pod beak position, pod appearance, seed size, seed shape, and seed color. Analysis of variance for the quantitative traits showed significant differences among all genotypes for all characters studied. Three-year mean results showed that the genotype Chaumase (35.0 t/ha), followed by Trishuli (28.0 t/ha), WP Con Bean (24.6 t/ha), and White OP (22.9 t/ha), recorded the maximum green pod yield. Similarly, the genotypes Chaumase (2.1 t/ha), Trishuli (2.1 t/ha), Dhankute Chirrke (1.44 t/ha), and White OP (1.09 t/ha) were found promising for seed production. The agro-morphological variation observed in growth and pod characters could be utilized in variety improvement programs. Future research should focus on further evaluation of these genotypes under different production systems for yield and seed production and on identifying traits useful for crop improvement.
Introduction
French bean (Phaseolus vulgaris L.), one of the oldest domesticated plant species, is native to Central and South America (Swaider et al., 1992). It is also known as common bean, snap bean, kidney bean, and haricot bean. The common bean is a predominantly self-pollinated diploid annual species (2n=2x=22). The green pods are nutritionally rich, containing on average 1.7% protein, 4.5% carbohydrate, 1.8% fiber, 50 mg calcium, 28 mg magnesium, and 1.7 mg iron per 100 g of pod (Shanmugavelu, 1989). Apart from protein, French bean also contains vitamins and minerals that can help partially alleviate the malnutrition problem. It is cultivated mainly for its tender pods as a vegetable; the dried seeds are used as a pulse, and the foliage is used as fodder for animals (Pandey et al., 2012). In Nepal, it is cultivated across a wide range of agro-climatic conditions and seasons, from 300 to 2,500 masl (Neupane et al., 2008). Both pole- and bush-type French beans are cultivated in the hilly region (500-1600 masl) for green pods from summer to autumn. These beans are grown as a mono-crop in commercialized peri-urban areas using staking for pole beans, or intercropped with maize as a rain-fed crop in the hills. Farmers regard beans as a cash-generating crop in the hills and grow several landraces with varying morphologies (Neupane and Vaidya, 2002). The current research was initiated with the objective of evaluating pole-type French bean genotypes for agro-morphological variability and yield potential.

Materials and Methods

The experimental site, at a longitude of 83°58'27.72" E, is characterized by a subtropical climate at an elevation of 1300 masl. The climatic data of the location, viz. precipitation, relative humidity, and maximum and minimum temperature for the three-year period, are presented in Figures 1a, 1b, and 1c. The experiment was arranged in a Randomized Complete Block (RCB) design with four replications; each replication was represented by a four-row plot.
Seeds were sown at 75 cm row-to-row and 25 cm plant-to-plant spacing. Manure and fertilizer were applied as compost (20 t/ha) and 40:60:50 kg ha-1 N, P, and K, respectively. Gap filling was carried out on the 8th day after sowing, and 32 plants were maintained in each plot. Within each plot, six plants were randomly selected and tagged for recording observations. Pods were harvested at marketable maturity for recording the observations. Scoring of the agro-morphological characters, viz. stem pigmentation, leaf color, leaflet shape, stem hairiness, flower color, pod color, pod shape, pod cross-section, pod beak position, pod pubescence, pod appearance, seed size, seed shape, and seed color, was done according to the procedures given in the IBPGR (International Board for Plant Genetic Resources) descriptors for Phaseolus vulgaris (IBPGR, 1982). The quantitative observations recorded were germination percentage, pod length (cm), pod diameter (mm), individual pod weight (g), seeds pod-1, green pods plant-1, green pod yield plant-1 (kg), green pod yield per hectare (t), dry pods plant-1, dry pod yield plant-1 (g), seed yield (t/ha), and 100-seed weight (g). The qualitative characters/traits were scored by a team of 10-5 experts and consumers. Pod length and pod diameter were measured with a meter scale and a vernier caliper, respectively. Germination percentage was calculated from the number of seeds that germinated relative to the number sown. Pooled mean values of the parameters in each replication were statistically analyzed using R (R Core Team, 2014). Statistical testing was carried out using Duncan's new multiple range test at the P < 0.05 level. Microsoft Excel was used for plotting figures and graphs.
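The germination-percentage computation, assuming the standard definition (germinated seeds over seeds sown, times 100, here applied to the 32 plants maintained per plot), can be sketched as:

```python
# Minimal sketch assuming the standard definition of germination percentage;
# the text does not reproduce the exact formula used by the authors.

def germination_percentage(germinated, sown):
    """Germination %: (number of seeds germinated / number of seeds sown) * 100."""
    if sown <= 0:
        raise ValueError("number of seeds sown must be positive")
    return 100.0 * germinated / sown

# Example: 30 of the 32 plants in a plot emerged
print(germination_percentage(30, 32))  # 93.75
```

Per-plot percentages computed this way would then be averaged over the four replications before the Duncan's multiple range test comparison described above.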
Result and Discussions
3.1 Agro-morphological Attributes

3.1.1 Stem pigmentation

Stem pigmentation is a useful DUS trait for classifying and differentiating genotypes. The data on stem pigmentation of the six genotypes are presented in Table 1. Among the genotypes, Bhatte, Dhankute Chirrke, White OP, and Trishuli had green stems; Chaumase had green stems with red streaks; and WP Con Bean had green stems with purple pigmentation. Among 15 genotypes of Dolichos bean studied for stem pigmentation, four genotypes had a light green stem color, four had purple stems, and seven had dark purple stems (Golani et al., 2015). Fifteen Jack bean genotypes were categorized based on stem color into three groups, viz. light green, purple, and dark purple (Lenkala et al., 2015).
Leaf Color
Leaf colors in beans are categorized as pale green, green, and dark green. Among the six, three genotypes namely Bhatte, Dhankute Chirrke, and WP Con Bean had pale green leaf color. Chaumase and Trishuli produced the dark green whereas White OP had green leaf color (Table 1). Similar findings have been reported by other researchers. Leaf color intensity of hyacinth bean varied from pale green to green to dark green (Islam et al. 2010). Studying the leaf color, only green and purple vein colors among 107 hyacinth bean genotypes were observed and leaf color intensity varied from pale green to green to dark green (Sultana, 2011).
Leaflet shape
Leaflet shape is a qualifying DUS characteristic for distinguishing genotypes. The data on the leaflet shape of the six genotypes are presented in Table 1. The leaflet shape of five genotypes, namely Bhatte, Chaumase, WP Con Bean, White OP, and Trishuli, was round, while Dhankute Chirrke had an ovate leaflet shape. A similar classification of soybean varieties and hyacinth bean genotypes based on leaflet shape has been reported (Islam et al., 2010; Agarwal and Pawar, 1990). Fifteen genotypes of Jack bean were categorized based on leaf density as sparse, intermediate, and dense (Lenkala et al., 2015).
Hairiness on the stem
Data on stem hairiness of the French bean genotypes are presented in Table 1. Of the six, three genotypes, namely Bhatte, WP Con Bean, and White OP, had glabrous stems (without hairs), whereas Chaumase, Dhankute Chirrke, and Trishuli had sparsely hairy stems. Seven French bean genotypes were previously categorized based on seedling pubescence as glabrous or dense (Prashanth, 2003).
Flower Color
Flower color is an important DUS characteristic that offers quick and easy identification for characterizing genotypes. Four genotypes, namely Bhatte, WP Con Bean, White OP, and Trishuli, had white petals, whereas Chaumase had lilac and Dhankute Chirrke had violet-purple petals. Similarly, a study of 284 bean accessions categorized them into three groups, viz. white, plain red to dark lilac, and purple flowers (Okii et al., 2014).
Pod Shape
Pod shape influences consumer preference in the market and also qualifies as a distinguishing DUS trait. The data on the pod shape of the French bean genotypes are presented in Table 1. Among the six genotypes studied, Bhatte and Dhankute Chirrke had straight pods; Chaumase, WP Con Bean, and White OP produced slightly curved pods; and Trishuli produced recurved pods. A study of French bean genotypes likewise found that most genotypes had straight pods and a few had slightly curved pods (Muchui et al., 2008). Among eighty accessions of local and exotic bean germplasm assessed for pod curvature on fully expanded immature pods, 43 accessions were slightly curved, 29 were straight, and 7 were curved (Neupane et al., 2008).
Pod Color
Pod color is an important NBPGR crop descriptor for classifying and distinguishing genotypes. The data on pod color at the immature stage of the French bean genotypes are presented in Table 1. Among the six genotypes, normal green pods were observed in Bhatte, Chaumase, and Trishuli; light green pods were recorded in WP Con Bean and White OP; and green pods with red stripes were obtained in Dhankute Chirrke. A similar classification based on pod color has been reported (Islam et al., 2010; Okii et al., 2014).
Pod Cross Section
Data on the pod cross-section of the French bean genotypes are presented in Table 1. Among the six genotypes studied, Bhatte and Dhankute Chirrke had very flat pods, while Chaumase, WP Con Bean, White OP, and Trishuli had a round-elliptic pod cross-section.
Pod Beak Position
The data on the pod beak position of the French bean genotypes are presented in Table 1. Among the six genotypes studied, Bhatte, Chaumase, WP Con Bean, White OP, and Trishuli had a marginal beak, while Dhankute Chirrke had a non-marginal beak position.
Pod Pubescence
Data on pod hairiness of the French bean genotypes are presented in Table 1. All six genotypes studied had glabrous (hairless) pods. In contrast, among 15 French bean genotypes, a smooth pod surface was reported in eight genotypes and a pubescent pod surface in the remaining seven (Kar et al., 2006).
Seed Size
Classifying genotypes on the basis of seed size is important for designing future breeding strategies that meet the selective market needs of the concerned community. The data on seed size of the six French bean genotypes are presented in Table 1. Bhatte and White OP had large seeds, Trishuli had medium seeds, and Chaumase, Dhankute Chirrke, and WP Con Bean had small seeds. Thirty-two French bean cultivars have been classified on the basis of 100-seed weight, ranging from 18.4 to 50.6 g (Anonymous, 2000). Similarly, in eighteen germplasm accessions of hyacinth bean (Lablab purpureus), seed sizes ranged from 5.7 to 14.3 mm in length and 4.0 to 8.6 mm in width (Maass, 2006).
Seed Shape
Seed shape influences consumer preference in the market; therefore, to meet market demands it is essential to screen and classify the genetic stock accordingly. Among the six genotypes, Bhatte and Dhankute Chirrke had circular to elliptic seeds; Chaumase, WP Con Bean, and White OP had kidney-shaped seeds; and Trishuli had elliptic seeds (Table 1). Eighteen French bean varieties collected from ICAR institutes and SAUs were likewise reported to have circular to elliptic, kidney, and elliptic seed shapes (Singh et al., 2014). Similarly, twenty-two common bean genotypes were classified as round, oval, kidney, and cuboid shaped (Boros, 2014).
Seed coat color
Seed coat color is an identification indicator and a useful trait for establishing the distinctness of a genotype. The French bean genotypes under study produced varied seed coat colors: light brown, black, orange-white with purple, white, and brown. Chaumase produced black seeds; White OP and WP Con Bean had white seed coats, whereas Trishuli alone produced a brown seed coat; Bhatte and Dhankute Chirrke produced light brown and orange-white with purple seed coats, respectively (Table 1). Seed coat color was used to distinguish 80 accessions of bean germplasm, in which different color patterns, viz. pink, purple, ash, cream, yellow, maroon, black, violet, shining purple, and red, were identified among the seed samples (Neupane et al., 2008). Similarly, the diversity of common bean landraces has been classified based on seed color (Bode et al., 2013; Pandey et al., 2011).
Germination Percentage
Pooled data of three years revealed that germination percentage differed significantly among the French bean genotypes (Table 2). The genotypes showing relatively higher germination percentages were Bhatte (93.8%) and Chaumase (93.2%), whereas the lowest values were associated with White OP (84.0%) and WP Con Bean (84.2%). The relatively higher germination percentage of some genotypes may be due to their bold seed character.
Pod Length, Pod Width, and Individual Pod Weight
The pooled analysis of three-year data revealed that pod length, pod width, and individual pod weight differed considerably among the genotypes (Table 2 and Table 3). The significantly highest pod length was observed for the genotype Trishuli (19.3 cm) and the lowest for Dhankute Chirrke (11.3 cm). In general, it can be concluded that the genotypes Trishuli, Chaumase, White OP, and WP Con Bean produced relatively longer pods, whereas Bhatte and Dhankute Chirrke had relatively shorter pods. The highest pod diameter (11.9 mm) was measured for the genotype Bhatte and the lowest for WP Con Bean (9.5 mm) and White OP (9.5 mm), which were statistically identical to each other. Genotypes such as Chaumase, Trishuli, and Dhankute Chirrke had intermediate pod diameters ranging from 9.6 mm to 11.7 mm. The genotypes included in the study showed an average variation in individual pod weight from 8.7 g to 14.7 g. Among the genotypes, Trishuli recorded the highest individual pod weight (14.7 g), followed by Chaumase (13.3 g); the lowest value was recorded for Dhankute Chirrke (8.7 g).
Note: NS, * and ** indicate non-significant, significant at P<0.05, and P<0.01, respectively. Means followed by the same letter(s) in the column are not significantly different at 5% by DMRT.
The variation in pod length, pod width and individual pod weight of French bean genotypes observed in the present study may be due to their inherited traits and to some extent by environmental factors. Similarly, variability in different varieties of French bean was observed for pod length and pod width (Nepane et al., 2008;Pandey et al., 2011). Similarly, variation for pod length and pod width was observed in varieties of hyacinth bean (Islam et al., 2010) and lablab bean (Pengelly and Maass, 2001).
Green Pods Plant -1 , Green Pod Yield Plant -1 , and Green Pod Yield
The pooled analysis of three-year data revealed that green pods plant -1 , green pod yield plant -1 , and green pod yield differed considerably among the genotypes (Table 3 and Figure 2). Green pods plant -1 ranged from 39.9 to 70.5 (Table 3). The maximum number of green pods plant -1 was observed for the genotype Chaumase (70.5) and the minimum for Dhankute Chirrke (39.9). The variation in green pods plant -1 might be due to differences in the number of inflorescences, pods per raceme, and the flower-dropping tendency of the genotypes (Khan, 2003). The highest green pod yield plant -1 was observed for the genotype Chaumase (0.57 kg). Lower green pod yields plant -1 were obtained from Bhatte (0.35 kg), Dhankute Chirrke (0.36 kg), WP Con Bean (0.38 kg), White OP (0.40 kg), and Trishuli (0.41 kg), which were statistically identical. Similarly, the maximum green pod yield was obtained for the genotype Chaumase (35.0 t/ha) and the minimum for Bhatte (20.2 t/ha) and Dhankute Chirrke (20.5 t/ha), which were statistically identical. The higher green pod yield plant -1 and per hectare for Chaumase is attributed to its higher number of green pods plant -1 and individual pod weight. Similar results were reported by Pandey et al. (2012), with the genotype Chaumase (Four Season) obtaining the greatest fresh pod yield (25.75 t/ha) across different sowing times. Likewise, pod yield in bean was influenced by genotype (Neupane et al., 2008), with genotypes sown on the same date producing green pods plant -1 ranging from 5 to 32.
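The per-plant and per-hectare yields reported above are related through plant density. As a rough consistency check (the planting density is not stated in this chunk, so the ~61,400 plants/ha figure below is back-calculated from the reported values and is purely an assumption), the conversion can be sketched as:

```python
def green_pod_yield_t_ha(yield_kg_per_plant, plants_per_ha):
    """Convert per-plant green pod yield (kg) to t/ha."""
    return yield_kg_per_plant * plants_per_ha / 1000.0  # kg -> t

# Chaumase: 0.57 kg/plant; an assumed density of ~61,400 plants/ha
# reproduces the reported ~35 t/ha.
print(round(green_pod_yield_t_ha(0.57, 61_400), 1))  # -> 35.0
```
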
Dry Pods Plant -1 and Dry Pod Yield Plant -1
The pooled analysis of three-year data revealed that dry pods plant -1 and dry pod yield plant -1 differed significantly among the genotypes (Table 4). Chaumase recorded the highest number of dry pods plant -1 (53.9), which was statistically identical with White OP (47.9), whereas the lowest number was observed in Trishuli (31.7), statistically at par with Dhankute Chirrke (317) and Bhatte (35.7). The highest dry pod yield plant -1 was recorded for Chaumase (152.2 g) and the lowest for Bhatte (80.2 g).
Seeds Pod -1 , 100 Seed Weight and Seed Yield
The pooled analysis of three-year data revealed that seeds pod -1 , 100-seed weight, and seed yield differed significantly among the genotypes (Table 5 and Figure 3). Chaumase (8.2) recorded the maximum seeds pod -1 , statistically identical with Trishuli (7.9), followed by White OP (7.1) and WP Con Bean (7.1). The minimum seeds pod -1 was found for Dhankute Chirrke (5.3), statistically identical with Bhatte (5.5). The 100-seed weight was maximum for Dhankute Chirrke (60.0 g), followed by Trishuli (42.3 g), Bhatte (39.2 g), and Chaumase (29.9 g), whereas the least was recorded for WP Con Bean (23.9 g), statistically identical with White OP (24.4 g). The maximum seed yield was recorded in Chaumase (2.1 t/ha), statistically at par with Trishuli (2.10 t/ha), whereas the least was recorded in WP Con Bean (1.09 t/ha). Similarly, pod and dry seed yield in bean was influenced by genotype (Neupane et al., 2008), with genotypes sown on the same date producing seed yields (g/m 2 ) ranging from 5.9 to 306.5.
Figure 3 Performance of different genotypes of French bean for seed yield (t/ha) at HRS, Dailekh during 2016-18
Conclusions
The agro-morphological variation observed among the genotypes could be utilized in the selection of genotypes for varietal improvement programs. Among the qualitative traits, the French bean genotypes showed variability in stem pigmentation, leaf color, leaflet shape, stem hairiness, flower color, pod color, pod shape, pod cross-section, pod beak position, pod appearance, seed size, seed shape, and seed color. Three-year mean results showed that the genotype Chaumase (35.0 t/ha), followed by Trishuli (28.0 t/ha), WP Con Bean (24.6 t/ha), and White OP (22.9 t/ha), recorded the maximum green pod yield. Similarly, the genotypes Chaumase (2.1 t/ha), Trishuli (2.1 t/ha), Dhankute Chirrke (1.44 t/ha), and White OP (1.09 t/ha) were found promising for seed production. Future research should focus on further evaluation of these genotypes under different production systems for green pod and seed production and on identifying traits useful for crop improvement.
Production and Partial Characterization of an Extracellular Phytase Produced by Muscodor sp. under Submerged Fermentation
In most raw materials of plant origin used in animal feed, a portion of the phosphorus is stored as phytic acid or phytate. Phytate is the main storage form of phosphorus in vegetables but is not readily assimilated from food at low concentrations of the enzyme phytase. In addition to making phosphorus unavailable, phytate binds divalent cations such as calcium, copper, magnesium, iron, manganese and zinc, preventing the absorption of these nutrients in the gut of the animal. Phytase promotes the hydrolysis of the phytate molecule, releasing phosphorus and thereby increasing its bioavailability in feed. Phytase is distributed in plant and animal tissues and is synthesized by some species of bacteria and fungi. The addition of this enzyme to the diet of animals is essential to promote greater uptake of phosphorus and also contributes to a decrease in the levels of phosphorus excreted by animals, thus reducing the pollution caused by excess phosphorus in the environment. This work aimed to select a fungus that stands out in the production of phytase among 100 isolates from Brazilian caves belonging to the genera Aspergillus, Penicillium and Cladosporium and 13 endophytic fungi from the aerial part of the coffee plant. For selection, the fungi were cultured in medium containing phytic acid as the sole source of phosphorus. After seven days at 25˚C, we evaluated growth and enzyme production by the presence of a phytic acid degradation halo (Enzymatic Index, EI) surrounding the colonies. Forty-seven species produced phytase, and the fungi Penicillium minioluteum (CF279) and Muscodor sp. (UBSX) showed the largest degradation halos, with EI values of 2.41 and 4.46, respectively. Considering Muscodor sp. the best producer, this isolate was selected for phytase production under submerged fermentation and partial characterization of the enzyme.
Introduction
Phytic acid or phytate (myo-inositol hexakisphosphate) is the major source of phosphorus in animal feed, but animals cannot readily assimilate the element in this form. In addition to making the phosphorus unavailable, phytate binds to divalent cations (Ca2+, Cu2+, Mg2+, Fe2+, Mn2+ and Zn2+), preventing the absorption of these nutrients in the gut of the animals [1] [2].
In animal feed industries, the feed is usually supplemented with inorganic phosphate to meet the phosphorus needs for proper growth and development of animals. However, the anti-nutritional effects of phytate remain unaffected [3]. Excretion of indigestible phytate, resulting in a large amount of phosphorus in manure, leads to redistribution of phosphorus in the soil [4]. It may leach into waterways and cause eutrophication, which generates water quality issues. Hence, elevated phosphorus levels in water and soil also create several environmental problems. To avoid phytate-related issues, there is a need to introduce methods for the degradation of phytate. The physical and chemical methods are expensive and reduce the nutritional value of the feed as well. Therefore, enzymatic degradation of phytate is an important alternative [5].
Phytases (myo-inositol hexaphosphate phosphohydrolase; EC 3.1.3.8 and EC 3.1.3.26) are enzymes belonging to the class of phosphatases that hydrolyze phytic acid to myo-inositol phosphates and inorganic phosphate through a series of myo-inositol phosphate intermediates. This process eliminates the anti-nutritional characteristics of phytic acid. Phytases have potential applications in the food and feed industries. In recent years, phytases have attracted attention from researchers and entrepreneurs in the areas of nutrition, environmental protection and biotechnology. The annual sales of phytase in the USA were estimated to be $150 million, which is one third of the entire enzyme market [5].
Microorganisms are the main sources of phytases with biotechnological potential and high performance, being used mainly in the animal feed industry to eliminate the anti-nutritional properties of phytate [6] [7]. Production by yeasts has been described in Saccharomyces cerevisiae, Candida tropicalis, Candida torulopsis, Debaryomyces castelii, Kluyveromyces fragilis and Schwanniomyces castellii [8]. Phytases can also be produced by bacteria and filamentous fungi; production by filamentous fungi has been described in Penicillium oxalicum PJ3, Schizophyllum commune, Rhizopus oryzae, Aspergillus ficcum and Rhizopus microsporus var. microsporus [9]-[13]. Many studies have focused on the production and characterization of phytases from various microorganisms, but information concerning enzyme characteristics such as regulation, catalytic capacity, specificity and optimization of production still needs to be clarified to reduce costs and to facilitate the use of this enzyme on an industrial scale. Commercially, phytases are produced by a limited number of microorganisms [14], which justifies the importance of searching for new fungal strains as phytase producers. This manuscript describes the prospection of fungal strains producing extracellular phytase, the selection of the best producer and the optimization of some culture parameters to achieve high enzymatic production by the endophytic fungus Muscodor sp. This work is seminal in reporting phytase production by this fungus.
Material and Methods
The experiments were conducted in the Filamentous Fungi Genetics and Bioprospecting Laboratory-BIOGEN from Federal University of Lavras-UFLA, Lavras, Minas Gerais, Brazil and in the Laboratory of Microbiology from Faculty of Philosophy, Sciences and Letters of Ribeirão Preto, University of São Paulo-USP, São Paulo, Brazil.
Microorganisms Evaluated and Selection of Phytase-Producing Fungi
One hundred filamentous fungi isolated from Brazilian Caatinga caves and 12 endophytic fungi isolated from coffee plant (Coffea arabica L.) shoots, belonging to the Culture Collection of BIOGEN, were evaluated for their ability to produce phytases. The cultures were maintained in MilliQ water at 4˚C. The isolates were reactivated on potato dextrose agar (PDA) and incubated at 25˚C for 7 days.
Molecular Identification
The molecular identification of the selected fungi was carried out using sequences of the ITS region. The fungi were grown on PDA, and mycelium was scraped with a sterile toothpick. The extraction of total DNA was performed according to the MoBio UltraClean Microbial® kit. Amplification reactions were performed in a volume of 30 µL containing 15 µL of Qiagen kit, 12 µL of H2O, 10 pmol of primer F, 10 pmol of primer R and 10 ng of DNA.
Primers were ITS1 (5' TCCGTAGGTGAACCTGCGG 3') and ITS4 (5' TCCTCCGCTTATTGATATGC 3'), and amplification conditions were as follows: 95˚C for 2 min; 95˚C for 1 min, 50˚C for 1 min and 72˚C for 1 min, programmed for 35 cycles; and a final step at 72˚C for 7 min. The amplifications were performed in a MULTIGENE thermocycler (Labnet International Inc.). The sequences were analyzed with the aid of the SeqAssem program, and alignment with other sequences available in the GenBank database was performed with the MEGA program.
After incubation, the cultures were filtered under vacuum in a Buchner funnel with Whatman filter paper No. 1 to give a cell-free filtrate. The filtrate was dialyzed against distilled water for 24 h at 4˚C and used for determining the extracellular enzymatic activity.
Phytase Activity Assay
Phytase activity was determined according to Gulati, Chadha and Saini [16]. The reaction medium was composed of 50 μL of enzyme sample incubated with 50 μL of 1% (w/v) phytic acid solution (dodecasodium salt hydrate, C6H6Na12O24P6·H2O, Sigma®) dissolved in 0.2 M sodium acetate buffer, pH 5.0. The reaction was performed at 40˚C and stopped by adding 100 μL of 15% TCA (trichloroacetic acid) and 300 μL of distilled water to each test tube, followed by the addition of 0.9 μL of the chromogenic reagent (0.76 M sulfuric acid, 10% ascorbic acid and 2.5% ammonium molybdate; 3:1:0.5 v/v/v). The tubes were then incubated at 50˚C for 20 min and cooled; the reading was then taken at 820 nm using a spectrophotometer. In each experiment, inactivated enzyme controls were included to estimate the non-enzymatic hydrolysis of the substrate. One unit of phytase activity was defined as the amount of phytase required to release one µmol of inorganic phosphorus (Pi) per minute under the test conditions [13].
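Since one unit is defined as 1 µmol Pi released per minute, the volumetric activity follows directly from the assay readings. The sketch below shows the arithmetic only; the standard-curve slope, net absorbance, incubation time and enzyme volume used in the example are illustrative assumptions, not values taken from the paper:

```python
def phytase_activity_u_per_ml(abs_820, blank_abs, slope_abs_per_umol_pi,
                              time_min, enzyme_vol_ml):
    """Volumetric activity (U/mL): 1 U = 1 umol Pi released per minute."""
    # Convert net absorbance at 820 nm to umol Pi via a phosphate standard curve.
    pi_umol = (abs_820 - blank_abs) / slope_abs_per_umol_pi
    return pi_umol / (time_min * enzyme_vol_ml)

# Hypothetical numbers: net A820 = 0.25 (0.30 sample, 0.05 inactivated control),
# slope 0.5 abs per umol Pi, 30 min reaction, 50 uL (0.05 mL) of enzyme.
print(round(phytase_activity_u_per_ml(0.30, 0.05, 0.5, 30, 0.05), 3))  # -> 0.333
```

Dividing this volumetric activity by the protein concentration (mg/mL, Bradford) would then give the specific activity in U/mg reported throughout the paper.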
Determination of Total Protein
The total protein concentration was determined according to the Bradford method [17] using bovine serum albumin (BSA) as standard. The amount of protein was expressed as mg/mL of sample. Total protein (mg) was calculated by multiplying the protein concentration in mg/mL by the sample volume.
Phytase Stability to Temperature and pH
The thermal stability was determined by incubating the phytase samples in aqueous solution at 40˚C, 50˚C and 60˚C. After time intervals of 2, 5, 10, 15, 20, 30, 45 and 60 min, aliquots were removed and maintained in an ice bath for phytase activity determination.
The pH stability of the phytase was assessed by incubating the phytase samples in 100 mM citrate buffer (pH 3, 4, 5 and 6). The enzyme and buffer were mixed at 1:1 (v/v), and after incubation for 1 h at 25˚C, aliquots were removed and the relative enzyme activity was determined.
All experiments were performed in triplicate, using inactivated enzyme controls to estimate the non-enzymatic hydrolysis of substrate and taking into account the standard deviation for the construction of graphs.
Results and Discussion
Among the one hundred fungi isolated from Caatinga caves and the twelve endophytic fungi qualitatively analyzed, 28% were able to hydrolyze the phytic acid in the medium, producing a clear halo. Among these phytase-positive strains, 64.5% showed an Enzymatic Index (EI) lower than 2.0. Only six strains presented an EI higher than 2, highlighting the endophytic fungus UBSX (EI = 4.41) and the cave fungus CF279 (EI = 2.41) (Table 1). These two strains were identified as Muscodor sp. (Xylariaceae, Ascomycetes) and Penicillium minioluteum, respectively, using a molecular approach based on the ITS region.
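The Enzymatic Index used in this screening is conventionally computed as the ratio of the hydrolysis halo diameter to the colony diameter. The exact formula is not restated in this chunk, so the one-liner below assumes that conventional definition, with illustrative diameters chosen to match the order of magnitude reported for the UBSX isolate:

```python
def enzymatic_index(halo_diameter_mm, colony_diameter_mm):
    """Conventional EI: hydrolysis halo diameter / colony diameter."""
    return halo_diameter_mm / colony_diameter_mm

# A 10 mm colony surrounded by a 44.1 mm clearing halo gives EI = 4.41.
print(round(enzymatic_index(44.1, 10.0), 2))  # -> 4.41
```
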
The enzyme produced by the endophyte Muscodor sp. had the highest enzymatic index, so this isolate was chosen for phytase production under submerged fermentation. Muscodor sp. is a sterile (non-sporulating) endophytic fungus possessing some interesting hyphal characteristics, including coiling, ropyness and right-angle branching. The mycelia of the fungus on most media are whitish and suppressed (Figure 1). The sequences were deposited in GenBank as KJ425599. The genus Muscodor has been reported as a good producer of several other metabolites of interest. Muscodor albus produces a mixture of volatile organic compounds (VOCs) that is lethal to a variety of human and plant pathogens, such as other fungi and bacteria; it is also effective against nematodes and certain insects [18]. Muscodor crispans (B-23) is an endophyte residing within the tissues of the stem of Ananas ananassoides, a wild pineapple of the Bolivian Amazon Basin. This fungus produces a number of esters, alcohols and acids of low molecular weight whose volatiles have antibiotic properties, making it a potentially useful organism in many contexts. Despite its 100% genetic similarity to regions of the rDNA of M. albus, this organism is considered distinct because of the number and type of its unusual phenotypic characteristics [19]. To our knowledge, this is the first time that phytase production has been reported for the genus Muscodor.
Investigation of new fungal strains as phytase producers is very important, providing alternatives for enzymes with properties different from those produced by the genus Aspergillus, mentioned as the main fungal source of phytases, including A. ficcum [20], A. fumigatus [21], A. terreus [19] and A. niger NCIM 563 [22].
The enzymatic production by microorganisms can be influenced by different conditions, including the nutrient sources available in the culture medium. Carbon and nitrogen sources are essential for microbial growth and are determinant in enzyme production. According to Table 2, the use of all additional carbon sources resulted in phytase production by Muscodor sp., especially wheat bran (4.10 U/mg of protein). Wheat bran is a byproduct of wheat grain, and its composition is rich in calories, proteins, vitamins, minerals and other elements important for the development of the microorganism and for enzyme production, making it an excellent substrate for phytase production. The microorganism Mucor racemosus NRRL 1994 produced phytase using wheat bran as substrate/support [23]. Other complex carbon sources, such as crushed soy, as well as saccharides such as sucrose, glucose, fructose and galactose, have also been investigated for phytase production [13]. The use of alternative carbon sources for phytase production can reduce the cost of the process, yielding a cheaper product.
Influence of the Initial pH, Temperature and Agitation on Phytase Production
The enzyme production by the microorganism was affected by different culture conditions (Figure 2). Considering the influence of the initial pH of the culture medium, the peak of phytase production by Muscodor sp. was obtained at pH 5.0 (Figure 2(a)), as also observed for the production of phytase by Aspergillus niger ATCC 9142 [24] and A. niger FS3 [25]. These results are in agreement with Vohra and Satyanarayana [26], who described an initial pH from 5.0 to 6.0 as characteristic for the production of most microbial phytases.
According to Figure 2(b), the agitation level also influenced the production of phytase by Muscodor sp., with maximal production obtained at 125 rpm. At 150 rpm, the fungal biomass was higher than that observed at 100 and 125 rpm, but the enzyme production was reduced, indicating that increased biomass is not directly related to enzyme production. It is known that enzyme production is affected by fungal morphology, as well as by the influence of agitation on this morphology, which can explain the result obtained for Muscodor sp. The production of phytase by Aspergillus niger under submerged fermentation increased with stirring speed from 150 to 300 rpm. Moreover, the fungal morphology was influenced by the stirring, with small pellets and tangles prevailing in the cultures at 150 rpm, while the free filamentous form was obtained at 300 rpm [27].
The optimum temperature for the production of phytase by Muscodor sp. was 30˚C, coinciding with the best temperature for fungal growth as previously determined in our laboratory. At this temperature the enzymatic production was 2-fold higher than that observed at 25˚C, while at 35˚C the enzyme production decreased 20% compared to 30˚C (Figure 2(c)). Several studies have investigated the optimum reaction temperature of phytases, with most reporting optimal activity between 40˚C and 60˚C; for example, the enzymes from the fungi A. fumigatus and A. niger NRRL 3135 showed optimal activity at 37˚C [28] and 55˚C [29], respectively. Moreover, phytases from Aerobacter aerogenes and Candida krusei WZ-001 showed optimum temperatures of 25˚C and 40˚C, respectively [30] [31]. This demonstrates that these enzymes can act at various temperatures and can be applied in industrial processes that require low or high temperatures.
Kinetic of Phytase Production by Muscodor sp.
The production of phytase by Muscodor sp. as a function of the incubation period was evaluated under the previously established parameters: temperature of 30˚C, pH 5.0 and agitation at 125 rpm. The maximum phytase activity (26.51 U/mg) was achieved at 144 h of fermentation (Figure 2(d)). This production period was shorter than that mentioned for the production of phytase by Rhodotorula mucilaginosa JMUY [32]; on the other hand, it was longer than that observed for enzyme production by Rhizopus microsporus var. microsporus [13], by Aspergillus niger CFR 335 and by Aspergillus ficuum SGA [33]. Under the optimized cultivation parameters (30˚C, initial pH 5.0, 125 rpm for 144 h), the specific activity of the phytase produced by the endophytic Muscodor sp. increased 11.32-fold (from 2.34 U/mg to 26.51 U/mg).
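The reported fold improvement follows directly from the two specific activities quoted above:

```python
baseline_u_per_mg = 2.34    # specific activity before optimization
optimized_u_per_mg = 26.51  # after optimization: 30 C, pH 5.0, 125 rpm, 144 h
fold_increase = optimized_u_per_mg / baseline_u_per_mg
print(round(fold_increase, 2))  # ~11.3-fold, matching the gain reported
```
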
Enzyme Stability
The stability of the Muscodor sp. phytase was evaluated after storage in 100 mM citrate buffer (pH 3.0, 4.0, 5.0 and 6.0) for 1 h at 25˚C (Figure 3(b)). Maximal enzyme stability was observed at pH 5.0, while at pH 4.0 and 6.0 the enzyme activity was only 20%-30% of that observed at pH 5.0, and at pH 3.0 the enzyme activity was fully inhibited. Further increases in pH reduced the phytase stability (data not shown). pH stability is an important characteristic that should be considered for phytase application. Efforts have been made to obtain enzymes with a wide pH range of stability, for example, the recombinant phytases from Aspergillus japonicus BCC18313 and Aspergillus niger BCC 18081 [34]. Phytase II of Aspergillus niger was stable from pH 3.5 to 9.0 at room temperature over 12 h [35]. Other studies show that phytase I of Monascus was stable at pH 5.5-6.5, while phytase II had better stability from pH 6.0 to 7.0 [36].
Another aspect of technological importance is thermal stability. The Muscodor sp. phytase was fully stable at 40˚C throughout the period analyzed. However, at 50˚C and 60˚C the enzyme stability was reduced, with half-lives (T1/2) of 10 min and 1.5 min, respectively (Figure 3(a)). Most microbial phytases reported in the literature have shown similar stability at temperatures from 45˚C to 55˚C [37]. According to Casey and Walsh [38], most characterized fungal phytases are stable at temperatures in the range of 30˚C-60˚C.
The enzyme produced by Aspergillus fumigatus was inactivated only at 70˚C [39], and the purified phytase produced by a biofilm of R. microsporus var. microsporus was completely stable at 30˚C and 40˚C for 120 min [13].
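Assuming simple first-order thermal inactivation (a common model for enzyme decay, not stated explicitly in the paper), the reported half-lives translate into inactivation constants and predicted residual activities as follows:

```python
import math

def residual_activity(t_min, half_life_min):
    """Fraction of activity remaining after t_min under first-order decay."""
    k_d = math.log(2) / half_life_min  # inactivation constant (1/min)
    return math.exp(-k_d * t_min)

# At 50 C (T1/2 = 10 min) half the activity survives 10 min of incubation;
# at 60 C (T1/2 = 1.5 min) almost nothing remains after the same period.
print(round(residual_activity(10, 10), 2))   # -> 0.5
print(round(residual_activity(10, 1.5), 3))  # -> 0.01
```
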
Influence of Different Compounds on Phytase Activity
The influence of different compounds on the phytase activity is presented in Table 3. The phytase produced by Muscodor sp. was totally inhibited by EDTA, K+, Mg2+, Zn2+, Ca2+ and urea (data not shown), and partially inhibited by Na+, Hg2+, Cu2+, Al3+ and Fe2+. On the other hand, in the presence of FeCl3, 86% of the phytase activity was maintained. Phytase produced by Aspergillus niger was also inhibited by Cu2+, Zn2+, Hg2+ and Fe2+ [8]. The activity of phytase from Candida krusei WZ-001 was completely inhibited by Zn2+ and strongly inhibited by Mg2+, but increasing the concentration of Fe2+ (5 mM) resulted in recovery of phytase activity [30] [31]. With increasing sodium concentration, the activity of the Muscodor sp. enzyme was drastically inhibited; Ullah, Sethumadhavan and Mullaney [40] similarly reported that 500 mM NaCl had an inhibitory effect on the activity of phytase from Aspergillus niger.
Some phytases can be inhibited by metal ions, but it is difficult to determine whether the inhibition results from metal binding to the enzyme or from the formation of poorly soluble phytic acid-metal complexes. The formation of precipitates upon the addition of Fe2+ and Fe3+ in the enzymatic assay suggests a reduction in the concentration of active substrate through the formation of poorly soluble Fe-phytic acid complexes [41]. Maenz, Engele-Schaan, Newkirk and Classen [42] evaluated the inhibitory potential at neutral pH of various minerals on the activity of microbial phytases and described the following order of inhibition: Zn2+ > Fe2+ > Mn2+ > Fe3+ > Ca2+ > Mg2+. EDTA (ethylenediaminetetraacetic acid) is a chelating compound capable of forming stable complexes with various metal ions, and most enzymes are adversely affected by this compound, reducing their catalytic activity [43].
Conclusion
The endophytic fungus Muscodor sp. was the best phytase producer among all fungal strains analyzed, and the enzyme production was affected by different parameters of the culture conditions. High levels of enzyme production were obtained using wheat bran as carbon source, an interesting low-cost alternative for phytase production. After optimization of the culture conditions, the enzyme production increased 11-fold, and the enzyme in the crude extract presented properties of interest for possible future application. This is the first report of phytase production by an endophytic fungus, an interesting alternative source of phytases.
Figure 1. Mycelium of isolate Muscodor sp. (UBSX) grown on PDA (A) and halo of hydrolysis of phytate by the phytase produced by the endophytic fungus Muscodor sp. (UBSX).
Figure 2. Influence of the initial pH of the culture medium (a), temperature of culture incubation (b), orbital agitation (c) and time of incubation (d) on phytase production by the endophytic fungus Muscodor sp. Symbols: (•) phytase activity; (□) dried biomass.
Figure 3. Thermal stability (A) and pH stability (B) of phytase produced by the endophytic fungus Muscodor sp. For pH stability, partially purified extract was also used, but with an activity of 7.36 U/mg (100% relative activity).
Table 2. Additional carbon sources for phytase production by Muscodor sp.
Table 3. Effect of different compounds on Muscodor sp. phytase activity.
'En route' to precision medicine
Linked Article: https://doi.org/10.1111/bjd.17088.
In this issue of the BJD, McAleer et al. present interesting and important research on biomarkers measured in the stratum corneum and plasma of infants with atopic dermatitis (AD). 1 Although AD is much more common in childhood, most biomarker research until now has focused on the disease in adults. With many new drugs for children with AD in different stages of development, this research is timely.
There are many different uses for biomarkers in AD; 2 among these are the objective determination of disease severity and the prediction of treatment response. Until now, disease severity in patients with AD has mostly been determined using clinician-rated severity scores [e.g. Six Area, Six Sign Atopic Dermatitis, the Eczema Area and Severity Index (EASI) and the Severity Scoring of Atopic Dermatitis index (SCORAD)], each of which has advantages and disadvantages. The search for better clinician-rated disease severity measures in AD has resulted in more than 20 different scores being used in clinical studies, which hampers study comparability. Although the EASI and SCORAD are now the preferred measures, both suffer from high inter- and intraobserver variability. 3 An objective biomarker for disease severity determined in blood or skin could greatly improve the way we measure disease severity in AD.
A recent systematic review showed serum CCL17/TARC levels to be the best objective biomarker for disease severity in adults with AD. 4 Now McAleer et al. have confirmed that plasma CCL17/TARC levels also correlate with disease severity in children with AD. 1 Their study also comprised the investigation of a set of potential biomarkers in the stratum corneum. The user-friendly availability of less-invasive techniques for biomarker retrieval, such as tape stripping or the use of dried blood spots, can greatly expand their use, which will result in increased knowledge of the processes involved in the early development of the disease. These developments may also help us to better explain the heterogeneity of the disease and the underlying pathways that determine the specific path a child will follow in the atopic march.
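Severity-biomarker correlations of this kind are typically reported as rank correlations, which are simple enough to compute directly. A minimal Python sketch of Spearman's rho; the severity scores and biomarker values below are hypothetical, not data from the study:

```python
def spearman_rho(x, y):
    """Spearman rank correlation (no tie handling; illustrative only)."""
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0] * len(values)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d_squared = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d_squared / (n * (n * n - 1))

scorad = [12, 25, 40, 58, 71]        # hypothetical SCORAD scores
tarc = [300, 650, 900, 2400, 5100]   # hypothetical CCL17/TARC levels (pg/mL)
print(spearman_rho(scorad, tarc))    # 1.0: perfectly concordant ranks
```

A real analysis would use tie-corrected implementations such as scipy.stats.spearmanr, but the rank-based idea is the same.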
It is interesting to read about the comparison of blood and skin biomarkers in McAleer et al.'s study, as some biomarkers are known to be highly expressed in the skin but difficult to measure in blood. Cytokines and chemokines produced in the skin can be measured in blood after diffusion from skin into blood, or after expression (e.g. chemokines) on endothelial cells and subsequent shedding into the circulation. The concentrations of these skin-derived inflammatory biomarkers in blood give an indication of the inflammatory activity encompassing the total skin surface.
In contrast, biomarker levels measured in tape strips only reflect the 'local level of inflammation' in the sampled area. Expression of biomarkers in tape strips is therefore less likely to correlate with a disease severity measure that encompasses the total skin area. Biomarkers measured in the stratum corneum may, however, be helpful in the identification of patients with different endophenotypes. Indeed, our research group has recently described different clusters of adults with AD based on serum biomarker expression levels that may represent different endophenotypes. 5 Thus, the use of noninvasive biomarker sampling methods paves the way for large-scale endophenotyping in both children and adults. As patients with different endophenotypes are expected to respond differently to new, highly targeted treatments, this may help us with the identification of the right patient for the right drug on our journey to precision medicine.

Shining light on darker skins

Sunlight is essential for life. The sun's ultraviolet radiation (UVR) has many effects on human health and well-being. Some are beneficial, such as cutaneous vitamin D synthesis, and others harmful, such as skin cancer. Human sight is dependent on visible radiation (light), which is also important for setting circadian rhythms. Infrared radiation may have potential for the biomodulation of fibroblasts for the treatment of cutaneous conditions. 1 We have a good understanding of the cellular and clinical consequences of direct photodamage by sunlight, caused when the cutaneous chromophore 2 (radiation-absorbing molecule) is the target molecule (e.g. DNA), but less so in the case of indirect damage, when a chromophore generates free radicals such as reactive oxygen species (ROS) that can damage other molecules or trigger gene expression with adverse effects.
Most photobiological research has been done on Fitzpatrick skin types (FST) I-IV, and there is a lack of data on FST V and VI. 3 Furthermore, most work has focused on the UVR component of sunlight. Thus, sunscreen photoprotection is directed towards UVR, with increasing emphasis on greater ultraviolet A (UVA) protection. One possible consequence of this is increased exposure to solar visible and infrared radiation. There is increasing evidence that these spectral regions have adverse effects on skin, 4 especially photoageing. 5 Albrecht et al., 6 in this issue, have extended our knowledge of skin types IV and V, the UVR (using 302-375 nm), visible (using 420-695 nm) and near infrared radiation (NIR; using 695-2000 nm) components of sunlight, and ROS production. They assessed free radical formation in vivo. Their main conclusions are given in Figure 2 of their paper. This shows that FST IV-V are more susceptible to visible + NIR-induced ROS than NIR or UVR alone. Figure 3 shows no skin type difference for ROS induced by visible + NIR, and that FST IV-V are
Invasive Fungal Disease in Patients with Newly Diagnosed Acute Myeloid Leukemia
This single-center retrospective study of invasive fungal disease (IFD) enrolled 251 adult patients undergoing induction chemotherapy for newly diagnosed acute myeloid leukemia (AML) from 2014-2019. Patients had primary AML (n = 148, 59%), antecedent myelodysplastic syndrome (n = 76, 30%), or secondary AML (n = 27, 11%). Seventy-five patients (30%) received an allogeneic hematopoietic cell transplant within the first year after induction chemotherapy. Proven/probable IFD occurred in 17 patients (7%). Twelve of the 17 (71%) were mold infections, including aspergillosis (n = 6), fusariosis (n = 3), and mucormycosis (n = 3). Eight breakthrough IFD (B-IFD), seven of which were due to molds, occurred in patients taking antifungal prophylaxis. Patients with proven/probable IFD had a significantly greater number of cumulative neutropenic days than those without an IFD, HR = 1.038 (95% CI 1.018-1.059), p = 0.0001. By cause-specific proportional hazards regression, the risk for IFD increased by 3.8% for each day of neutropenia per 100 days of follow up. Relapsed/refractory AML significantly increased the risk for IFD, HR = 7.562 (2.585-22.123), p = 0.0002, and Kaplan-Meier analysis showed significantly higher mortality at 1 year in patients who developed a proven/probable IFD, p = 0.02. IFD remains an important problem among patients with AML despite the use of antifungal prophylaxis, and development of IFD is associated with increased mortality in these patients.
Introduction
Invasive fungal disease (IFD) is a highly morbid complication in patients with hematologic malignancies, including acute myeloid leukemia (AML) [1]. Prior studies have demonstrated a benefit of mold-active antifungal agents for IFD prophylaxis in patients at high risk [2,3]. With widespread use of prophylaxis, breakthrough IFD (B-IFD) have become an increasing problem and have been reported in up to 18% of patients with AML [4][5][6][7][8][9][10]. Early studies suggested that the risk factors predisposing AML patients to B-IFD were similar to those for IFD in general and included underlying severe myeloid immunosuppression from leukemia, prolonged neutropenia, use of central venous catheters, mucositis from chemotherapy or from graft versus host disease (GVHD) after hematopoietic cell transplantation (HCT), and use of broad-spectrum antibiotics [10,11]. However, heterogeneous populations were included and no standardized definition of B-IFD was available when those studies were performed, calling into question whether the results are generalizable [12].
The recent development of new antifungals and improved formulations of existing agents has prompted changes in antifungal prophylaxis strategies for patients with AML. Additionally, revised definitions for IFD have been developed by the European Organization of Research and Treatment of Cancer and Mycoses Study Group Education and Research Consortium (EORTC/MSGERC) [13], and a consensus definition for B-IFD has been published by the MSGERC and the European Confederation of Medical Mycology (ECMM) [14]. This new definition for B-IFD incorporates the newer antifungal agents and better defines antifungal exposure by including pharmacokinetic parameters of the prophylactic agents.
We sought to determine the effectiveness of newer strategies for the prevention of IFD in patients with AML and to better define the occurrence of B-IFD at our institution.
Patients and Setting
This retrospective cohort study was conducted at the University of Michigan Medical Center, a 1000 bed tertiary care center. Approval for this study was granted by the institutional review board. All adult patients at least 18 years of age who had newly diagnosed AML and who were admitted for induction chemotherapy between June 2014 and January 2019 were screened for eligibility. Patients who expired prior to initiation of induction chemotherapy and those who received umbilical cord blood HCT or more than one allogeneic HCT within 1 year after first induction chemotherapy were excluded. Study patients were followed for one year from the first day of induction chemotherapy unless death occurred before that time.
The electronic medical record was reviewed to collect data on patient demographics and comorbidities, AML status at baseline, at time of IFD diagnosis, and at the last follow up, chemotherapy regimens, HCT, graft versus host disease (GVHD), cumulative duration of neutropenia, cumulative prophylactic antifungal exposure, serum trough concentrations of prophylactic antifungals when available, outcome of IFD and B-IFD at 12 weeks from the date of diagnosis, and overall mortality at 1 year after first induction chemotherapy.
Definitions
AML was defined by WHO 2016 revised guidelines [15]. Induction chemotherapy regimens were determined by the primary hematology team in accordance with National Comprehensive Cancer Network guidelines and recommendations [16]. Cumulative neutropenia was defined as the total duration, in days, of an absolute neutrophil count (ANC) of <500 cells/µL for the year following first induction chemotherapy. Cumulative neutropenic days were normalized per 100 days of follow up to account for the variable follow up times for each patient. Cumulative exposure to antifungals was defined as the total number of days that prophylactic antifungal agents were given to each patient over the follow up period and was normalized per 100 days of follow up.
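The two normalizations defined above reduce to the same per-100-days calculation; a minimal sketch (the day counts below are hypothetical, not study data):

```python
def per_100_days(event_days: int, follow_up_days: int) -> float:
    """Normalize a cumulative day count per 100 days of follow-up,
    so patients with different follow-up durations are comparable."""
    return 100.0 * event_days / follow_up_days

# A patient neutropenic (ANC < 500 cells/uL) for 45 of 300 follow-up days:
print(per_100_days(45, 300))  # 15.0 neutropenic days per 100 days of follow-up
```

The same function applies to cumulative antifungal exposure, with `event_days` counting days on a prophylactic agent instead of neutropenic days.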
Proven and probable IFD, including Pneumocystis pneumonia, were defined by the EORTC/MSGERC 2019 revised consensus criteria [13]. The day of diagnosis of IFD was defined as the date when the diagnosis was first suspected based on clinical, radiological, and microbiological findings. Breakthrough IFD was defined based on the MSGERC/ECMM consensus definitions and included fungal infections that occurred at least 7 days after initiation of any antifungal agent or that occurred less than one day after discontinuing any antifungal agent [14].
Prophylaxis and Isolation Strategies
Antifungal prophylaxis for all AML patients undergoing chemotherapy was voriconazole (with therapeutic drug monitoring and a goal trough level of 1 to 5.5 µg/mL) when ANC fell to ≤1500/µL; this was continued until neutrophils were >500/µL for at least 3 consecutive days. Alternative regimens, including posaconazole, isavuconazole, fluconazole, or micafungin, were used at the discretion of the hematology team if intolerance or drug-drug interactions were present. Pneumocystis prophylaxis was reserved for patients receiving purine analogues; either inhaled pentamidine 300 mg monthly or double-strength trimethoprim-sulfamethoxazole (TMP-SMX) tablet three times a week was given throughout chemotherapy and for six months after the last dose of a purine analogue agent. Acyclovir was given throughout all chemotherapy cycles for antiviral prophylaxis. Bacterial prophylaxis with a fluoroquinolone was given only to patients with relapsed or refractory AML.
After allogeneic HCT, fluconazole was given for antifungal prophylaxis unless the patient met one of the following criteria: prolonged neutropenia (>21 days) after or preceding transplant; corticosteroid treatment for GVHD, engraftment syndrome or idiopathic pneumonia syndrome; calcineurin inhibitor therapy in combination with any other immunosuppressive agent; use of etanercept, alemtuzumab, or anti-thymocyte globulin for conditioning; or a history of IFD prior to transplant. Patients who met one or more of these criteria received voriconazole rather than fluconazole. Alternative regimens included posaconazole and micafungin and were used at the discretion of the transplant physician in settings of drug intolerance or drug-drug interactions. Antifungal prophylaxis continued until day +100 and until the specific risk factor for IFD was no longer present, whichever occurred later. Pneumocystis prophylaxis was a single-strength TMP-SMX tablet daily starting at day +30 and through day +180 or until immunosuppression was stopped, provided cell counts had recovered. Alternative agents in cases of persistent cytopenia included atovaquone or inhaled pentamidine. Acyclovir was given for antiviral prophylaxis for one year. Antibacterial prophylaxis was with a fluoroquinolone from day +1 until either engraftment or development of febrile neutropenia.
A protective environment consisting of a positive pressure room with 12 air exchanges per hour and HEPA filtration was used for allogeneic HCT patients who are in the peritransplant period and the first 100 days post-HCT, those who have an ANC < 1000 cells/mm 3 , and those who have acute GVHD. AML patients undergoing induction therapy with an ANC < 1000/mm 3 also are placed in this type of room.
Statistical Analysis
Univariable analysis accounting for competing risks (deaths) was conducted using cause-specific proportional hazards regression to evaluate the impact of cumulative days of neutropenia per 100 days of follow up, cumulative prophylactic antifungal exposure per 100 days of follow up, AML status and other patient characteristics at time of IFD. Hazard ratios and 95% confidence intervals were calculated. Further univariable analyses were performed using the Wilcoxon rank sums test. Kaplan-Meier survival analysis with a log-rank test was conducted to evaluate the impact of IFD on survival. SAS version 9.4 statistical software (SAS Inc., Cary, NC, USA) was used for all analyses.
Patient Characteristics
Altogether, 264 patients were screened and 251 patients were entered into the study. Excluded were eight patients who died before induction therapy was accomplished, four who received a cord blood HCT, and one who received a second HCT within the year after induction therapy. Of the 251 patients, 113 were women (45%), and the average age was 62 ± 14 years (Table 1). Most patients had primary AML (n = 148, 59%); AML related to antecedent myelodysplastic syndrome (MDS) was present in 76 (30%) patients, and 27 (11%) had secondary AML related to treatment for a prior malignancy. The most common comorbidities were diabetes in 37 patients (15%) and coronary artery disease in 31 (12%) (Table 1).
Invasive Fungal Disease (IFD)
Among the 251 patients, 17 patients (7%) had a proven (n = 4) or probable (n = 13) IFD. Sixteen episodes of possible IFD occurred in 14 patients for whom the mycological criteria of proven or probable IFD were not met. With the exception of 3 patients who had pulmonary consolidation or cavitation, patients with possible IFD had only pulmonary nodules on CT scan. Five patients were too ill to undergo further diagnostic studies and died within 11 days from the diagnosis of possible IFD. Patients with possible IFD were excluded from further analysis.
By cause-specific proportional hazards regression, the risk for IFD increased by 3.8% for each day of neutropenia per 100 days of follow up. The risk for development of IFD was increased significantly in patients who had relapsed/refractory AML, HR = 7.562 (2.585-22.123), p = 0.0002.
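Because the hazard ratio is per day, it compounds multiplicatively, so the effect of several additional neutropenic days can be read off directly. A worked example using the reported estimate (the day counts are illustrative):

```python
HR_PER_DAY = 1.038  # cause-specific HR per neutropenic day per 100 days of follow-up

def cumulative_hr(extra_days: float) -> float:
    """Hazard ratio implied by `extra_days` additional neutropenic days."""
    return HR_PER_DAY ** extra_days

# Ten extra neutropenic days per 100 days of follow-up:
print(round(cumulative_hr(10), 2))  # 1.45, i.e. roughly a 45% higher hazard of IFD
```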
Among the 17 patients with proven or probable IFD, five were undergoing primary induction or consolidation therapy, five had relapsed/refractory leukemia, and only one was in remission ( Table 2). Four patients had received an HCT, and two of these had relapsed leukemia after the HCT.
Nine of the 17 (53%) patients with proven and probable IFD were not receiving antifungal prophylaxis at the time of the IFD diagnosis. Two patients did not receive Pneumocystis prophylaxis despite having an indication for this. Two patients were involved in clinical trials of experimental agents for the treatment of AML and did not have antifungal prophylaxis prescribed because the trial protocol prohibited the use of these agents. One patient had numerous drug-drug interactions and was unable to tolerate any antifungal prophylaxis. Three patients were no longer neutropenic, and another patient was no longer receiving chemotherapy. Infections in these nine patients included invasive aspergillosis (n = 4), invasive candidiasis (n = 2), mucormycosis (n = 1), and Pneumocystis pneumonia (n = 2) ( Table 2). Table 3. Univariable analysis of risk factors for the development of proven/probable invasive fungal disease (IFD) in patients with acute myeloid leukemia 1 .
Eight of the 17 (47%) proven and probable IFD were B-IFD, including aspergillosis (n = 2), fusariosis (n = 3), mucormycosis (n = 2), and Pneumocystis pneumonia (n = 1) (Table 2). Three patients receiving fluconazole prophylaxis developed invasive aspergillosis (n = 1) and fusariosis (n = 2), and two patients who developed mucormycosis were taking voriconazole. Serum trough concentrations of posaconazole (1580 ng/mL) and isavuconazole (3 µg/mL) demonstrated appropriate exposure when measured prior to the occurrence of Fusarium pneumonia and invasive pulmonary aspergillosis, respectively. Exploratory proportional hazards regression analysis of cumulative antifungal exposure and development of IFD showed no significant differences among patients with IFD when compared with those without IFD, HR = 0.999 (0.983-1.015), p = 0.9. Using the Wilcoxon rank sums test, cumulative exposure to any specific antifungal agent was not associated with the development of B-IFD, p = 0.26, but cumulative exposure to non-mold active fungal prophylaxis (fluconazole) was associated with development of B-IFD (p = 0.012).
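The Wilcoxon rank-sum comparison used here boils down to counting pairwise orderings between the two groups. A minimal sketch of the underlying U statistic (no tie correction or p-value machinery; the exposure values are hypothetical, not study data):

```python
def mann_whitney_u(x, y):
    """Mann-Whitney U statistic: pairs where x exceeds y, ties counted half."""
    return sum(1.0 if xi > yi else 0.5 if xi == yi else 0.0
               for xi in x for yi in y)

# Hypothetical cumulative fluconazole exposure (days per 100 days of follow-up):
b_ifd = [60, 72, 80]         # patients with breakthrough IFD
no_b_ifd = [10, 25, 30, 40]  # patients without
print(mann_whitney_u(b_ifd, no_b_ifd))  # 12.0: each B-IFD value exceeds each control
```

In practice a tie-corrected implementation such as scipy.stats.ranksums or scipy.stats.mannwhitneyu would be used to obtain the p-value.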
Outcomes
Overall mortality in our cohort of 251 patients was 37% (n = 92) within the first year after receiving first induction chemotherapy for the treatment of AML. The mortality rate in patients who had relapsed/refractory AML was 70% (n = 77) compared with 12% (n = 15) in those without relapse, p < 0.001. Kaplan-Meier survival analysis showed significantly higher mortality at 1 year from first induction in patients who developed proven or probable IFD (13/17, 76%) when compared with those who did not develop IFD (79/220, 36%) (log-rank test, p = 0.02) (Figure 1). Among the small number of patients who did not have relapsed/refractory AML, development of IFD was significantly associated with a higher mortality rate, p < 0.001. In those patients who had relapsed or refractory AML, no differences were noted among patients who developed IFD and those who did not, p = 0.26. All 13 patients with IFD who died did so within 12 weeks from the date of IFD diagnosis. Seven of nine patients (78%) with non-B-IFD and six of eight (75%) who had B-IFD died.
Discussion
Within a 1-year period from the time of initiation of induction therapy, 12% of patients with AML developed proven, probable or possible IFD. Of 17 patients (7%) who had proven or probable IFD, 8 (3%) were B-IFD in patients receiving antifungal prophylaxis. These rates are comparable with those reported from other centers [7-9,17,18] and slightly lower than we noted at our institution in the years 2010-2013 [10]. Mold infections accounted for 71% of proven and probable IFD, with Aspergillus the predominant organism, similar to prior reports [1,10]. Increasingly, however, non-Aspergillus molds are responsible for infections in patients with acute leukemia [19-21]. These more difficult-to-treat mold infections, including fusariosis and mucormycosis, have been reported more often in patients who were receiving antifungal prophylaxis, as we observed in our cohort. However, we did not isolate unusual and rare molds, such as Scopulariopsis, Lomentospora, Lichtheimia, and Trichosporon, as has been noted in other recent reports [8,19-21]. It is likely that geographic and environmental factors are as important as the specific antifungal agent used for prophylaxis in explaining differences in the molds that predominate in different institutions.
Cumulative duration of profound neutropenia was a significant risk factor for the development of IFD among patients receiving chemotherapy for newly diagnosed AML. We used a proportional hazards regression model and long-term follow up to determine the contribution of neutropenia to the risk for IFD. We found a 3.8% increased risk of developing IFD with each additional day of neutropenia per 100 days of follow up. Other studies have utilized the D-index, based on the area over the neutrophil curve, which is defined by the duration and severity of neutropenia, as a predictor of risk for invasive mold infections in leukemia patients [22][23][24]. Our calculation differs in that it includes the cumulative duration of neutropenia over the first year, rather than the depth and persistence of neutropenia during a single neutropenic episode prior to the diagnosis of IFD. Our study confirms findings from prior studies that neutropenia is a key risk factor for IFD and breakthrough IFD in patients with AML [5,6,10,11,25].
Developing an IFD had a significant impact on 1-year-survival, and this was most notable among patients whose leukemia was controlled. Aggressive chemotherapy allowing control of the leukemia likely contributed to the net state of immunosuppression and increased the risk for development of an IFD. For those patients who had relapsed/refractory disease, it appeared that both IFD and uncontrolled AML contributed to their poor outcomes.
GVHD is a known risk factor for the development of IFD. However, in our cohort, GVHD was not associated with an increased risk for IFD. This finding could be explained by the small number of patients who underwent HCT and developed GVHD (14%) and by the use of antimould prophylaxis in patients with GVHD.
Typically, antifungal prophylaxis for AML patients undergoing chemotherapy is initiated on the first day of neutropenia and continued until neutrophils are >500/µL. In our study, cumulative days of neutropenia and chemotherapy failure significantly increased the risk of IFD. These findings suggest that factors contributing to the net state of immunosuppression, rather than only the absolute neutrophil count, play a central role in the development of IFD. Novel approaches to antifungal prophylaxis perhaps should be devised to include alternative endpoints given that recovery of neutropenia alone is not wholly adequate to define decreased risk for IFD.
Current evidence supports the use of mold-active prophylaxis for patients at high risk for IFD, such as those receiving induction chemotherapy for AML. Results from a large open-label trial support the use of posaconazole to prevent IFD, and this drug is licensed for this indication [2]. There are no large studies of voriconazole prophylaxis in AML populations and the drug is not licensed for this indication, but the effectiveness of this agent can be inferred from studies of the pre-engraftment neutropenic phase in HCT patients [26]. Furthermore, decisions about specific agents for antifungal prophylaxis in this patient population must take into account factors such as the local epidemiology, especially the local incidence of Mucorales and other resistant molds, drug interactions with chemotherapy agents, and costs.
In our study, 14 patients had 16 possible IFD and were excluded from further analysis; this constitutes almost 50% of all episodes of IFD, and 87.5% occurred despite antifungal prophylaxis. Similarly, in a prior study, almost 70% of episodes were designated as possible and were excluded from further analysis [10]. Mortality among patients with possible IFD was similar to that observed in patients with proven/probable IFD (73% and 76%, respectively). The lack of mycological evidence to support the diagnosis of IFD in this patient population may be secondary to the low yield of cultures, especially in the setting of widespread use of prophylaxis, limitations of non-culture methods, and an inability to use invasive methods in these patients to obtain tissue for culture and histopathology [27][28][29]. Possible IFD pose a dilemma for both clinicians and researchers. In clinical practice, patients with possible IFD typically receive empiric antifungal therapy, but treatment endpoints are unclear. Excluding this group of patients from analysis in clinical trials results in a decreased number of evaluable patients. Further understanding the impact and outcomes of possible IFD could lead to an improvement in the management of these patients.
The strengths of this study are related to the relatively large sample size, the homogenous patient population, and the use of recently updated definitions of IFD and B-IFD.
Our study has several limitations, including its retrospective and single-center design. We have excluded patients with episodes of possible IFD, which accounted for almost half of all episodes of IFD within a year of AML diagnosis, and excluding those episodes might have resulted in an underrepresentation of the real impact of IFD in this patient population. Finally, while the overall rate of IFD was similar to that reported in other studies, the actual number of patients with B-IFD was low, precluding further analysis or meaningful conclusions regarding risk factors for specific IFD in this population.
In conclusion, IFD remains an important problem among patients with AML despite the use of antifungal prophylaxis. Increasing cumulative days of neutropenia, as well as relapsed/refractory AML, correlate with increased risk of developing IFD, and development of IFD significantly increases mortality among patients with AML.
A case of novel DYT6 dystonia variant with serious complications after deep brain stimulation therapy: a case report
Background DYT6 dystonia belongs to a group of isolated, genetically determined, generalized dystonias associated with mutations in the THAP1 gene. Case presentation We present the case of a young patient with DYT6 dystonia associated with a newly discovered c.14G>A (p.Cys5Tyr) mutation in the THAP1 gene. We describe the clinical phenotype of this new mutation and the effect of pallidal deep brain stimulation (DBS), which was accompanied by two rare postimplantation complications: an early intracerebral hemorrhage and delayed epileptic seizures. Among the published case reports of patients with DYT6 dystonia, these complications have not been described so far. Conclusions DBS in DYT6 dystonia is a challenge requiring thorough consideration of possible therapeutic benefits against the potential risks associated with surgery. Genetic heterogeneity of the disease may also play an important role in predicting the development of the clinical phenotype as well as the effect of treatment, including DBS. Therefore, it is beneficial to analyze the genetic and clinical relationships of DYT6 dystonia.
Background
DYT6 dystonia is associated with mutations in the THAP1 gene. This gene encodes the DNA-binding transcription factor THAP1, which acts as a nuclear proapoptotic protein, but its exact role is still unknown [1]. DYT6 dystonia is characterized by autosomal dominant inheritance with incomplete penetrance. It manifests in childhood, adolescence, or early adulthood (4-20 years) and usually begins in the craniocervical area in approximately half of the patients, with subsequent spread to the limbs. In the rest of the patients, the opposite propagation pattern is observed, with the symptoms starting in the hands [2]. Treatment includes anticholinergic therapy and local denervation with botulinum neurotoxin (BoNT). In cases of drug-resistant dystonia with severe functional impairment, DBS of the globus pallidus internus (GPi) can be considered [3].
Case presentation
We report the case of a 15-year-old boy with a normal perinatal history and psychomotor development. No neurological disease was observed in the family.
The patient's first difficulties began at the age of 9 years and manifested as graphospasm of the right upper limb. The patient therefore gradually started to write with his left upper limb, but here too, at the age of 11 years, he developed graphospasm, which made it impossible for him to write. The patient took the tests orally, and when written tests were required, he took them on a computer keyboard. At the age of 11 years, the dystonia extended to the distal parts of the lower limbs, where it was only of mild degree and did not limit the patient in normal activities (such as walking). At the age of 15, cervical dystonia (CD) with rotation of the head to the left began to develop. At the age of 18, oromandibular dystonia (OMD) appeared, with a dominant impairment of the mimic muscles, mainly the orbicularis oris (manifested as lip puffiness) and the tongue (characterized by tongue protrusion). Mild jaw opening was also present, without spasmodic dysphonia or dysphagia.
The patient had an initial score of 55 points on the Burke-Fahn-Marsden Dystonia Rating Scale (BFMDS). The patient's greatest handicap was speech impairment resulting from OMD, together with facial manifestations resulting from dysfunction of the orbicularis oris muscle. The second most severe handicap was CD. The limb dystonia was subjectively assessed by the patient as not severe and not limiting. Objectively, however, it was severe (inability to write or play a musical instrument), although the ability to work with a mobile phone and PC was preserved. The patient had permanent resting dystonic postures of the hands.
Apart from the dystonic manifestations described above, no other central or peripheral nervous system manifestations were present. The patient's cognitive status and brain MRI were normal. We carried out genetic testing by means of whole-exome sequencing performed on the HiSeq 4000 platform (Illumina, CA, USA). A novel heterozygous missense variant c.14G>A (p.Cys5Tyr) was detected in the THAP1 gene (Fig. 1). Our finding was first reported by Zech and colleagues among the results of a multicenter whole-exome sequencing study focused on finding monogenic causes of dystonia [4].
The presence of this mutation was also determined in the patient's sister (with focal dystonia of the upper limbs arising at the age of 11 years) and their mother (without dystonia).
The patient was treated with biperiden (6 mg per day) for one year, however, without any observable effect on the dystonia. Higher doses were accompanied by adverse effects, especially dry mouth, which impaired the patient's articulation. The patient received repeated applications of abobotulinum toxin A for CD into the right sternocleidomastoid and the left splenius capitis muscle at a total dose of 500-1,000 IU every three to four months for a period of three years. Its effect, however, gradually weakened, and the treatment was stopped. OMD was treated with abobotulinum toxin A to the orbicularis oris (40 IU), depressor anguli oris (10 IU), genioglossus (40 IU) and platysma (40 IU), with an average application frequency of every three months. This treatment has a mild effect, and the patient is still being treated with it.
The patient was indicated for DBS and was subsequently implanted with electrodes into the bilateral GPi (electrodes 3389, stimulator Activa PC, Medtronic). At the time of the surgery, the patient was 20 years old. The operation itself was without complications. The CT scans of the brain performed immediately after surgery were within normal limits (Fig. 2). On the fourth postoperative day there was a sudden onset of mild expressive aphasia with no paresis of the limbs. The CT scan showed intracerebral hemorrhage along the left electrode, extending from the cortical area at the insertion site (Fig. 3). The patient had no vascular malformation demonstrated on preoperative MRI of the brain and cerebral vessels. As a result of this complication, DBS was not initiated until two months after implantation, when the speech disorder had completely resolved and regression of the hematoma was verified on a control MRI examination of the brain. After six months of DBS treatment, we observed a 30% improvement in BFMDS. The patient had an improvement of dystonia in the upper and lower limbs and a slight improvement in CD, but OMD remained unaffected (speech disorder, protrusion of the tongue, facial expressions), which handicapped the patient the most. The patient therefore assessed the surgical outcome as unsatisfactory. Clinical evaluation was performed with the following DBS stimulation parameters: monopolar setup, frequency 130 Hz, pulse duration 180 µs, stimulation at the distal contacts of both electrodes (type 3389, Medtronic, MN), stimulation intensity 2.9 V on both sides. Subsequently, a bipolar setup was tested with different stimulation electrode contacts, high- and low-frequency stimulation (40, 130, 180 Hz), varying pulse durations (60, 90, 120 and 180 µs) and stimulation intensities (1.2-3.5 V); none of these settings achieved the desired effect on OMD, and only a minimal effect on CD was noted. OMD partially responded to abobotulinum toxin A injections.
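The programming session described above amounts to a sweep over a small parameter grid. As a purely illustrative enumeration (the frequency and pulse-duration values are those reported in the case; the individual amplitude sample points within the tested 1.2-3.5 V range are hypothetical, since only the range is reported), the candidate settings can be listed as:

```python
from itertools import product

# Frequencies and pulse durations reported in the case; amplitude sample points
# are hypothetical illustrations within the tested 1.2-3.5 V range.
# Pulse durations are given in microseconds, the conventional unit for DBS pulse width.
frequencies_hz = [40, 130, 180]
pulse_durations_us = [60, 90, 120, 180]
amplitudes_v = [1.2, 2.0, 2.9, 3.5]

# Every frequency x pulse-duration x amplitude combination per contact configuration
grid = list(product(frequencies_hz, pulse_durations_us, amplitudes_v))
print(len(grid))  # 48 candidate settings
```

Even this coarse sampling yields dozens of combinations per contact configuration, which illustrates why exhaustive DBS programming sessions are time-consuming.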
Due to the insufficient effectiveness of DBS, we considered the possibility of posthemorrhagic structural changes of the tissue in the vicinity of the stimulation contacts of the left electrode, and we performed a control MRI of the brain one year after implantation. The examination showed a posthemorrhagic cortico-subcortical pseudocyst located away from the electrode contacts (Figs. 4 and 5).
Two years after DBS implantation, the patient experienced his first generalized epileptic tonic-clonic seizure with an unknown onset. The control MRI scans of the brain were unchanged. An EEG showed episodes of bifrontal rhythmic delta waves with left-sided amplitude accentuation and abortive spike-wave complexes (Fig. 6). One year later, the patient experienced a second generalized tonic-clonic seizure, and treatment with levetiracetam was subsequently started at a dose of 1000 mg daily.
The patient is now 23 years old (three years after DBS implantation) and his BFMDS score is 55 points (the same as preoperatively). There is severe CD of approximately the same extent as before the implantation, severe OMD that has slightly progressed compared to the preimplantation condition, and very mild dystonia of the distal regions of the upper and lower limbs, which has improved with DBS. The patient is treated with levetiracetam and with abobotulinum toxin A injections into the oromandibular muscles, with a partial effect. Speech impairment is still a major determinant of the patient's quality of life. Cognitive functions are normal.
Genotype-phenotype correlation
In our case report, we describe a case of DYT6 dystonia with a detected heterozygous missense variant c.14G>A (p.Cys5Tyr) in the THAP1 gene. The stated variant is located in an evolutionarily highly conserved region. Experiments indicate that substitution of the cysteine at this position with alanine leads to a loss of the ability of THAP1 to bind to DNA [5]. The cysteine at position five of the polypeptide chain participates in the structure of a zinc-finger-type motif [6]. It is thus highly likely that the detected substitution of cysteine with tyrosine disrupts the structure of this binding motif. According to prediction programs, it is most likely a pathogenic mutation that causes DYT6 in our proband.
The heterogeneous genetic background of DYT6, with numerous likely pathogenic variants detected in THAP1, might be responsible for a highly variable clinical phenotype as well as variable therapy outcomes, including those of DBS. This is in direct contrast to DYT1 dystonia, which is usually caused by a single common GAG deletion, resulting in a relatively uniform clinical phenotype and a predictable response to DBS therapy [7].
DBS treatment
Postoperatively, there was a significant improvement in dystonia of the limbs but no improvement in the craniocervical area. Inconsistent results of DBS treatment have been observed in DYT6 dystonia, mostly showing slight improvement [8,9], although there are studies describing significant improvement [10]. The average rate of improvement in motor function after DBS ranged from 15% to 55% [11]. In most cases, dystonia in the limbs, neck and torso was alleviated, while OMD remained unaffected. The reasons for the poorer and more variable DBS response in DYT6 dystonia are still not fully understood but may in part relate to prominent bulbar involvement, as this body region is usually less responsive to DBS. We cannot exclude the influence of DBS-related complications, but we did not find convincing fibroproductive changes in the vicinity of the stimulating electrode contacts on MRI, and the impedance values measured when adjusting the stimulation parameters were within normal ranges. Another possibility is the genetic heterogeneity of the THAP1 gene, in which many different pathogenic mutations have been described, whereas DYT1 dystonia is usually caused by a single common GAG deletion [7]. Despite the aforementioned limitations of DBS therapy with respect to its effect on OMD, we proceeded with this therapy because of the expected good effect on limb and cervical dystonia. Subjectively, the patient did not consider these difficulties to be the most significant, but objectively, a severe degree of progressive dystonia was present. Moreover, the patient was in favor of a surgical treatment, which he perceived as a chance to alleviate symptoms refractory to previous therapeutic approaches.
Intracerebral hemorrhage
On the fourth postoperative day, the patient experienced bleeding along the left electrode. Such a complication has arisen only once in our experience (incidence 1.2% per lead). According to the literature, the incidence of intracerebral hemorrhage is 0.6 to 3.5% per lead. Hemorrhages are mostly small and asymptomatic, do not require surgery, and appear more often during a short interval after implantation (3.7%) than during the surgery itself (1.1%) [12]. Various factors such as age, sex, cognitive status, and target area (GPi, subthalamic nucleus, ventral intermediate nucleus of the thalamus) have been analyzed as causes of hemorrhage, but only the number of trajectories for microelectrode recording proved to be statistically significant [13]. We performed microelectrode recording along five trajectories in our patient. A possible explanation for delayed hemorrhage may be damage to small blood vessels or bleeding into venous ischemia with damage to cortical and dural veins. Implantation may result in venous outflow obstruction with subsequent venous hypertension and congestion, leading to hemorrhage and cerebral edema [12].
Epilepsy
Two years after implantation, the patient experienced the first generalized epileptic tonic-clonic seizure with unknown onset, which recurred one year later. We hypothesize that it was a focal epileptic seizure, originating in the left frontal area near the posthemorrhagic residuum, that generalized into a bilateral tonic-clonic seizure. In a recent study of 814 DBS electrode implantations (645 patients), Atchley et al. [14] reported the incidence of DBS implantation-related seizures as 2.8% per lead (in 3.4% of patients). Of all cases with a postimplantation-related seizure, epilepsy developed postoperatively in 17.4% of patients; the risk of DBS-associated epilepsy was 0.5% per DBS electrode placement and 0.63% per patient. Of the implantation-related seizures, 39.1% had associated postoperative radiographic abnormalities. Multivariate analyses suggested that age at surgery conferred a modestly increased risk of postoperative seizures. Sex, primary diagnosis, electrode location and sidedness, and the number of trajectories were not significantly associated with seizures after DBS surgery. Postoperative seizures occurred less than 24 hours after placement in 63.6% of cases [14]. The onset of an epileptic seizure 2 years after implantation has not been described in the literature. We also did not encounter the occurrence of postimplantation hemorrhage and epilepsy in the previously published case reports of patients with DYT6. However, we must note that the studies analyzing complications of DBS worked with the diagnosis of primary dystonia and thus did not distinguish between different types of dystonia. DYT6 dystonia is a relatively common primary generalized dystonia with genetic heterogeneity, clinical heterogeneity and a variable effect of DBS. This information, along with the possible postoperative complications, must be discussed with the patient prior to the implantation itself.
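The figures quoted from Atchley et al. [14] are mutually consistent, which can be checked with simple arithmetic (our own back-of-the-envelope check on the published percentages, not a calculation from the cited study itself):

```python
leads = 814
patients = 645

# 2.8% implantation-related seizure incidence per lead
seizures = leads * 0.028                      # ~22.8 seizure cases

# 17.4% of seizure cases went on to develop epilepsy
epilepsy_cases = seizures * 0.174             # ~4 patients

risk_per_lead = epilepsy_cases / leads        # ~0.49%, matching the reported 0.5%
risk_per_patient = epilepsy_cases / patients  # ~0.61%, close to the reported 0.63%
print(round(risk_per_lead * 100, 2), round(risk_per_patient * 100, 2))
```

The small discrepancies arise from rounding in the published percentages.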
Impact of Radiation Dose and Volume V57 Gy of the Brain on Recurrence and Survival of Patients with Glioblastoma Multiforme
Abstract Background The aim of the study was to analyze the impact of the irradiated brain volume V57 Gy (the volume receiving 57 Gy or more) on time to progression and survival of patients with glioblastoma. Patients and methods A dosimetric analysis of treatment plan data was performed on 70 patients with glioblastoma treated with postoperative radiochemotherapy with temozolomide, followed by adjuvant temozolomide. Patients were treated with 2 different methods of defining treatment volumes and prescribing the radiation dose. The first group was treated with one treatment volume receiving 60 Gy in 2 Gy daily fractions (31 patients), and the second group was treated with a "cone-down" technique consisting of two phases: a first phase of 46 Gy in 2 Gy fractions followed by a "cone-down" boost of 14 Gy in 2 Gy fractions (39 patients). V57 Gy and the brain volume/V57 Gy ratio were quantified. The average value of each parameter was taken as a threshold, and patients were split into 2 groups per parameter (values smaller/larger than the threshold). Results The mean value of V57 Gy was 593.39 cm3 (range 166.94 to 968.60 cm3), the mean brain volume was 1332.86 cm3 (range 1047.00 to 1671.90 cm3) and the mean brain-to-V57Gy ratio was 2.46 (range 1.42 to 7.67). There was no significant difference between the two groups for either V57 Gy or the ratio between brain volume and V57 Gy. Conclusions The irradiated volume receiving a dose of 57 Gy or more (V57 Gy) and the ratio between whole brain volume and V57 Gy had no impact on time to progression and survival of patients with glioblastoma.
Introduction
Glioblastoma is the most common and most aggressive brain tumor, with an incidence of 2-3 per 100,000 population according to GLOBOCAN. 1 Glioblastoma accounts for 12-15% of all intracranial tumors and approximately 50-60% of all astrocytic tumors. 2,3 Diagnosis, treatment and follow-up of patients with glioblastoma require a multidisciplinary approach, and the best results are achieved in specialized centers, which can offer all treatment modalities when needed and which are more experienced thanks to a larger volume of cases. 4 Mutual understanding and collaboration within a team of professionals is of paramount importance for obtaining the best medical care and the best clinical results (tumor control and survival). During the last two decades, major advances have been made in enhancing the precision of radiation treatment and the shaping of the radiation dose, to improve dose distribution in the target and to decrease the radiation dose to organs at risk. Three-dimensional conformal radiotherapy and its derivatives, Intensity Modulated Radiation Therapy (IMRT) and Volumetric Arc Therapy (VMAT), are now the standard of treatment for patients with glioblastoma. [5][6][7] Standard postoperative treatment of patients with glioblastoma consists of postoperative radiotherapy with temozolomide followed by adjuvant temozolomide. [8][9][10][11] Radiotherapy is the cornerstone of the multimodality approach and is considered the treatment with the highest benefit of all three treatment modalities. Despite the major advances in the personalization and precision of radiotherapy treatment, the median survival of patients with glioblastoma is still between 12 and 16 months from diagnosis. 4 In general, there are two major approaches to the definition of the gross tumor volume (GTV) in patients with glioblastoma.
In studies conducted by the EORTC (European Organisation for Research and Treatment of Cancer), only one contoured gross tumor volume is used, defined as the enhancing visible tumor on MR images prior to surgery, expanded to the clinical target volume (CTV) and planning target volume (PTV) according to the ESTRO-ACROP guidelines. 7 In contrast, in studies conducted by the RTOG (Radiation Therapy Oncology Group), volumes are defined according to a "cone-down" approach, which means that two volumes are defined on preoperative and/or postoperative MR: an initial (larger) volume and a second, "cone-down" or boost, volume (smaller). With the "cone-down" approach, in some clinical situations it is possible to decrease the radiation dose to the brain, which could have an impact on the survival of patients with glioblastoma. 12,13 The volume of the tumor, measured as the initial or preoperative tumor size and residual disease, is generally considered a prognostic factor for survival and recurrence in patients with glioblastoma. 14,15 There are various approaches to quantifying what really constitutes visible tumor and, consequently, what volume should be irradiated in order to minimize tumor recurrence, but a consensus between different research groups is still under debate. The definition of tumor volume depends on the imaging modality used, the resolution of the imaging modality, the processing algorithms and various other variables. [15][16][17][18] After definition of the gross tumor volume, there is still debate about the most appropriate clinical target volume (CTV). There are different approaches, which are evolving together with advances in imaging. The CTV as a concept in glioblastoma is difficult to define, different research groups use various definitions, and none of these definitions is absolutely true or false.
[20][21][22][23][24][25][26][27][28] There are more or less well-defined criteria for the definition of the CTV and PTV for patients with glioblastoma, especially those treated in a clinical trial setting, such as the recent AVAGlio and CENTRIC trials. 28,29 It is also a well-known fact that in daily clinical practice clinicians adapt volume delineation to their clinical setting and capabilities, delineating according to RTOG, EORTC or institutional standards. 30 In our study, patients were treated with two different approaches to the delineation of treatment volumes, a one-phase EORTC-like approach and an RTOG-like "cone-down" approach, and randomization was used for the assignment of patients to the groups as part of the standard treatment protocol developed in the institution.
Patients and methods
This study was approved by the ethical committee of Medical Faculty at University "Ss. Cyril and Methodius" in Skopje and University Clinic of Radiotherapy and Oncology in Skopje (Number: 03-2455/2) and was carried out according to the Declaration of Helsinki.
A total of 70 patients with glioblastoma multiforme were included in this study. Patient accrual was performed in the period from January 2013 to December 2015. All patients had previously been surgically treated with maximal safe resection of the primary tumor, and a definitive histological diagnosis of glioblastoma multiforme had been established according to the latest World Health Organization classification. 3 After referral for radiotherapy treatment, patients were scheduled for computed tomography (CT) simulation in the treatment position. The CT scan encompassed the cranial region according to the institutional protocol, with a slice thickness of 2 mm. For immobilization purposes, thermoplastic masks and head rests were used during simulation and treatment.
After CT simulation, image fusion with the preoperative and/or postoperative MR scan was performed using an automatic non-deformable MR-CT fusion algorithm, with manual correction only if necessary, at the discretion of the radiation oncologist. MR-CT fusion was performed using transversal MR images with contrast-enhanced T1 and T2/FLAIR sequences. Only patients who completed the full treatment were included in the analysis: 70 of the 78 enrolled patients. Eight patients did not finish the treatment and were excluded from the analysis.
Patients were randomly assigned to one of the groups on the basis of their referral to the department. Patients with an odd hospital number were assigned to the first group, and patients with an even hospital number were assigned to the second group. In the first group (a total of 31 patients), delineation was based on T2/FLAIR and contrast-enhanced T1 sequences, and only one GTV was contoured. After delineation of the GTV, the CTV was created with a 2 cm expansion, taking into account anatomic boundaries and omitting, if possible, organs at risk. The CTV was expanded to the PTV with an additional margin of 5 mm, which is the institutional standard. In the second group (a total of 39 patients), the "cone-down" approach was used: delineation of the target volume was performed in two phases, and two GTVs were delineated. The first, or initial, volume (GTV46) was delineated on MR images using T2/FLAIR abnormalities. Expansion of the GTV to the CTV was performed with a margin of 2 cm, taking into account anatomic boundaries and avoiding organs at risk, as in the first group. Further expansion of the CTV to the PTV used a margin of 5 mm. The cone-down, or boost, volume was delineated on the contrast-enhanced T1 MR image set; its CTV was expanded by 2 cm, and the PTV by a further 0.5 cm, as for the initial volume.
The dose prescription for patients in the first group was 60 Gy in 30 daily fractions of 2 Gy; in the second group, the prescribed dose for the initial volume was 46 Gy in 23 fractions, with an additional 14 Gy in 7 fractions for the "cone-down" volume. Both the initial and "cone-down" volumes were treated with 2 Gy daily fractions. The treatment schedule for both groups was 5 fractions per week, delivered on consecutive days. Treatment planning was performed using the Varian Eclipse planning system, version 10.0.45.0, and the most appropriate treatment plan was selected to achieve a dose distribution in the target(s) and organs at risk fulfilling the QUANTEC criteria. [31][32][33] Together with radiotherapy, all patients were treated according to the "Stupp protocol" and received concurrent chemotherapy with temozolomide, followed by adjuvant temozolomide. 35 After treatment, patients underwent follow-up consisting of a physical examination every month (corresponding with the adjuvant chemotherapy cycles), MR every 3-4 months, and other clinical examinations as necessary. The follow-up strategy was in line with the ESMO clinical recommendations, modified according to the specific clinical situation. 9 Two volumetric parameters were selected as relevant to quantify the exposure of the brain as an organ at risk in our series of patients. The first parameter was the volume receiving 57 Gy or more, in cm3 ("V57Gy"), and the second was the ratio between the brain volume and "V57Gy", expressed as a decimal coefficient.
"V57Gy" was calculated using the TPS software's built-in algorithms for converting an isodose level into a structure. This function is standard in the majority of modern treatment planning systems available on the market and is available as a standard option in our institutional TPS.
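Conceptually, the isodose-to-structure conversion reduces to thresholding the dose grid and scaling the count of qualifying voxels by the voxel volume. The following is a minimal illustrative sketch of that idea (not the TPS's actual algorithm, which additionally constructs a contoured surface structure), with hypothetical toy values:

```python
def v_dose(voxel_doses_gy, voxel_volume_cm3, threshold_gy=57.0):
    """Volume (cm^3) of all voxels receiving at least the threshold dose."""
    return sum(1 for d in voxel_doses_gy if d >= threshold_gy) * voxel_volume_cm3

# Toy example: five voxels of 2 cm^3 each; three receive >= 57 Gy
print(v_dose([55.0, 57.0, 58.0, 60.0, 40.0], voxel_volume_cm3=2.0))  # 6.0
```

In a real plan the dose grid contains millions of voxels of sub-cubic-millimeter volume, but the thresholding principle is the same.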
Patients were separated into 2 groups for each of the two parameters, with threshold values set at 600 cm3 for "V57Gy" and at 2.4 (expressed as a decimal number) for the ratio between the brain volume and "V57Gy".
Based on the first parameter, patients were split into 2 groups. The first group, with "V57Gy" up to 600 cm3, consisted of 38 patients, and the second group, with "V57Gy" of more than 600 cm3, consisted of 32 patients.
According to the second parameter, the ratio between the brain volume and "V57Gy", patients were also split into 2 groups: patients with a ratio of 2.4 or less (40 patients) and patients with a ratio of more than 2.4 (30 patients).
Results
The median follow-up of all 70 patients was 12 months (range 4 to 33 months). The median time to progression (recurrence) was 12 months and the median survival was 24 months, calculated using the Kaplan-Meier method. 35 Survival analysis using the Kaplan-Meier method was performed on the two parameters for both time to progression (recurrence) and overall survival. Survival comparisons were calculated using the Mantel-Cox and Gehan-Breslow-Wilcoxon (log-rank) tests. 36,37 For the volumetric parameter "V57Gy", patients with "V57Gy" of less than 600 cm3 (38 patients) and more than 600 cm3 (32 patients) were compared. The median time to progression was 11.43 months for the group with "V57Gy" ≤ 600 cm3 and 13.29 months for the group with "V57Gy" > 600 cm3; overall survival was 14.64 and 13.29 months, respectively (Figures 1 and 2). There was no significant difference in either time to progression (p = 0.2065) or overall survival (p = 0.9970).
For the second parameter, the ratio between the whole brain volume and the "V57Gy" volume, time to progression and overall survival were compared for patients with a numerical value ≤ 2.4 (40 patients) and a numerical value > 2.4 (30 patients). The median time to progression was 11.43 months for the group with a value ≤ 2.4 and 12.18 months for the group with a value > 2.4. Overall survival was 11.68 and 14.64 months, respectively. There was no significant difference in time to progression (p = 0.2881) or overall survival (p = 0.8572) between these two groups (Figures 3 and 4).
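The medians above are read off Kaplan-Meier survival curves. As a self-contained illustration of how such a median is obtained (a didactic reimplementation on hypothetical toy data, not the study's actual analysis software or patient data):

```python
def km_curve(times, events):
    """Kaplan-Meier survival estimate: list of (time, S(t)) at each event time.
    events: 1 = progression/death observed, 0 = censored."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    s, curve, i = 1.0, [], 0
    while i < len(data):
        t = data[i][0]
        tied = [e for tt, e in data if tt == t]
        d = sum(tied)                  # observed events at time t
        if d:
            s *= 1.0 - d / n_at_risk
            curve.append((t, s))
        n_at_risk -= len(tied)         # events and censorings both leave the risk set
        i += len(tied)
    return curve

def km_median(times, events):
    """Smallest time at which the survival estimate drops to 0.5 or below."""
    return next((t for t, s in km_curve(times, events) if s <= 0.5), None)

# Hypothetical toy cohort (months); the third subject is censored at 20 months
print(km_median([4, 9, 12, 12, 20, 33], [1, 1, 1, 1, 0, 1]))  # 12
```

In practice this estimate, and the accompanying log-rank comparisons, would be computed with dedicated statistical software, as was done in the study.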
Discussion
Based on these data, we concluded that these volumetric parameters did not have any impact on time to progression and overall survival in patients with glioblastoma treated with postoperative radiochemotherapy. In malignant tumors in general, the size of the tumor is considered an independent prognostic factor, described as the T stage according to the UICC classification 38 , but due to the specific characteristics of brain tumors, the TNM classification is not applicable for prognostic purposes; instead, the WHO classification is used, which does not correspond to tumor size. According to the EORTC and NCIC nomogram for predicting outcome in patients with newly diagnosed glioblastoma, several factors predict survival. The following parameters are suggested as potential prognostic factors, which should be reported in all clinical studies: MGMT promoter methylation status, age, performance status, extent of resection, and Mini Mental State Examination (MMSE). 39 The volumetric parameters calculated in our study had no impact on local control and overall survival. In our study, the threshold value was estimated as the average value from our series of patients. In future studies, we plan to include more patients in the evaluation of volumetric parameters and to create stricter constraints with a higher gradient. We had assumed that there would be a difference in both recurrence and survival between patients with smaller irradiated volumes and those with very large irradiated volumes, which remains to be tested in future studies.
Radiation treatment of CNS tumors has evolved over the past two decades with the introduction of more precise imaging and treatment devices in radiation oncology, followed by the development of more precise treatment techniques. Although modern treatment devices are able to deliver a higher dose to a specified tumor volume, with the possibility of conforming the beams in order to protect critical organs, there are no positive studies proving that escalation of the radiation dose beyond 60 Gy with standard fractionation has an impact on disease control. 40 There are some exceptions regarding dose and fractionation for patients with poor performance status. Recent studies showed that shortening the duration of radiotherapy treatment by increasing the daily fraction size (40 Gy in 15 fractions or 25 Gy in 5 fractions) gives equivalent results regarding survival and quality of life. 42 In our study, we showed that decreasing the treated volume with the cone-down approach did not have any impact on marginal recurrence in glioblastoma patients treated with radiotherapy and concurrent and adjuvant temozolomide. These results are in line with recently published studies showing that reducing the treated volume, with careful delineation of the visible tumor on MR, does not have any impact on marginal recurrence. [42][43][44][45] Finally, careful selection of imaging modalities, registration, and selection of the most suitable treatment plan are of paramount importance for obtaining the best results and the best local control during radiation treatment of patients with glioblastoma.
"Monoclonal-Type" Plastic Antibodies for COVID-19 Treatment: What Is the Idea?
In late December 2019, an outbreak due to a novel coronavirus, initially called 2019-nCoV, was reported in Wuhan, China [...].
photonic crystals and molecularly imprinted polymers [9]. The resulting sensor exhibited optical properties that change upon detection of low concentrations of the target compound in urine. Protein sensors based on electroactive MIPs were also fabricated by Zhao et al. employing bovine serum albumin and trypsin as model templates and a linear electro-polymerizable molecularly imprinted polymer as a macromonomer [10].
Some recent studies report the development of MIP-based sensors for the selective detection of viruses such as Japanese Encephalitis Virus (JEV) and Hepatitis A Virus (HAV) through the Resonance Light Scattering (RLS) technique. In the first work [11], a magnetic surface molecularly imprinted-resonance light scattering sensor was prepared using silica-coated Fe3O4 microspheres as imprinting substrates and aminopropyl-triethoxysilane (APTES) as the functional monomer for fixing JEV through a polymerization process of tetraethyl orthosilicate (TEOS). In the second one [12], molecular imprinting resonance light scattering nanoprobes able to selectively bind HAV were fabricated using pH-responsive metal-organic frameworks.
Most of the research studies on MIPs for biomacromolecules, such as proteins and viruses, are focused on the preparation of sensors and probes for the detection of these targets, while only a few works are devoted to the therapeutic use of these polymeric materials.
One example is given by Xu et al. [13], who presented molecularly imprinted polymer nanoparticles able to bind the highly conserved and specific peptide motif SWSNKS (3S), an epitope of the envelope glycoprotein 41 (gp41) of human immunodeficiency virus type 1 (HIV-1). The imprinted nanoparticles were produced by solid-phase synthesis and could find a potential application as artificial antibodies for immunoprotection against HIV.
At this time, Parisi et al. at the Department of Pharmacy, Health and Nutritional Sciences of the University of Calabria, are developing "monoclonal-type" plastic antibodies based on MIPs able to selectively bind a portion of SARS-CoV-2 spike protein to block its function and, thus, the infection process ( Figure 1) [14].
At this time, Parisi et al. at the Department of Pharmacy, Health and Nutritional Sciences of the University of Calabria, are developing "monoclonal-type" plastic antibodies based on MIPs able to selectively bind a portion of the SARS-CoV-2 spike protein to block its function and, thus, the infection process (Figure 1) [14]. The coronavirus spike protein is a surface protein that mediates host recognition and attachment. It consists of two functional subunits: the S1 subunit, which contains a receptor-binding domain (RBD) responsible for host cell receptor recognition and binding, and the S2 subunit, which is involved in the fusion of the viral and host membranes [15]. The spike protein, thus, represents the common and primary target for the development of antibodies, vaccines and therapeutic agents. Therefore, polymeric imprinted nanoparticles could potentially be used as drug-free therapeutics in the treatment of SARS-CoV-2 infection. Plastic antibodies targeting vulnerable sites on viral surface proteins, indeed, could disable receptor interactions and protect an uninfected host that is exposed to the virus. In vivo applications demand MIPs in the form of nanoparticles, and there is evidence that nanoMIPs are not toxic in cell culture or when tested in mice [16].
Moreover, when loaded with antiviral agents, these nanoparticles could act as a powerful multimodal system, combining their ability to block the virus spike protein with the targeted delivery of the loaded drug. In addition, the same nanoparticles can be further engineered to become an immunoprotective vaccine or an MIP-based sensor for diagnostic purposes.
Based on these considerations, molecular imprinting represents a very promising technology for the preparation of polymeric materials with highly selective recognition abilities for a target molecule. On the other hand, the imprinting of biomacromolecules, including peptides, proteins, and whole viruses or parts of them, presents several challenges due to the size, solubility, fragile structure and limited stability of these templates. Moreover, the availability of viruses and viral components is also a key issue. Last but not least, the sensitivity and selectivity of these polymeric matrices require further improvement to be comparable to those of natural antibodies.
The research work of Parisi et al. aims to overcome these limits to obtain MIP nanoparticles able to selectively recognize and bind the spike protein of the novel coronavirus and counteract the infection process.
Fishes of Alto Jacuí sub-basin: a poorly studied sub-basin of northwestern Rio Grande do Sul, Brazil
There is scarce information about the ichthyofauna of streams in the state of Rio Grande do Sul, particularly in the Alto Jacuí sub-basin, which belongs to the Laguna dos Patos system. To provide information about stream species, the purpose of the present study was to inventory the ichthyofauna of the streams of the Alto Jacuí sub-basin, located in the northwestern part of the state of Rio Grande do Sul. Samples were taken bimonthly from June 2012 to June 2013 using the electrofishing technique in 10 streams. A total of 13,247 specimens were collected, belonging to 42 species, 10 families and six orders. We report the occurrence of five species that have not yet been formally described.
INTRODUCTION
Fish are considered the most diverse group of vertebrates (Lowe-McConnell 1999), with an estimated richness of 32,900 species (Froese and Pauly 2014). By December 2013, Pelayo-Villamil et al. (2014) had found 14,782 described species of fish that occur only in freshwater. Although complementary information is lacking, current estimates for the ichthyofauna of the Neotropical region are about 6,000 to 8,000 species, totaling 13% of the vertebrate biodiversity in aquatic ecosystems worldwide, with Brazilian continental waters holding 21% of global diversity (Reis et al. 2003; Agostinho et al. 2005).
There is still a lack of knowledge of fish richness, mainly in South America, Africa and Asia, which is due to a lack of sampling and databasing (Pelayo-Villamil et al. 2014). Brazil has the largest river networks in the world (Galves et al. 2009); however, many Brazilian basins and sub-basins have not yet been sampled (Agostinho et al. 2005), or there is little information about their fish fauna, especially for medium-sized and small water bodies such as streams (Castro 1999). According to Langeani et al. (2007), streams are the environments with the highest number of new species still to be discovered. However, the small size of streams and headwater environments makes these places more susceptible to anthropogenic action, and they may experience significant changes in their population structure, leading to the disappearance of the most sensitive species (Galves et al. 2009). This situation makes it difficult to understand ecological, biological and biogeographical processes (Barletta et al. 2010).
Little information is available about sampling and studies of the ichthyofauna in streams of the Jacuí River basin, and there are no studies of the upper region of the basin, called the Alto Jacuí sub-basin. Malabarba (1989) presented a list of freshwater fish of the Laguna dos Patos system and cited species found in the Jacuí River and its tributaries. Alves and Fontoura (2009) identified the distribution pattern of migratory fish of the Jacuí River basin, but the data were obtained through interviews, collections, literature and technical studies (EIA-RIMA, Estudo e Relatório de Impacto Ambiental) developed in the study region. Additionally, there are some taxonomic reviews and descriptions of new species distributed in this drainage (Ottoni and Cheffe 2009; Menezes and Ribeiro 2010; Carvalho and Reis 2011).

We emphasize that to understand the ecological mechanisms in these little-explored environments we must use many tools, including ichthyofaunal studies. Streams are highly heterogeneous environments (Winemiller et al. 2008), which allows for the establishment of numerous species of fish. Furthermore, more studies of streams in southern Brazil are necessary because some basins, such as the Alto Jacuí sub-basin, remain little explored. Therefore, the aim of this study is to inventory and provide more information about the distribution and species richness of the ichthyofauna in the Alto Jacuí sub-basin, located in northwestern Rio Grande do Sul.

Study site

The Alto Jacuí sub-basin belongs to the large Laguna dos Patos system and is located in the state of Rio Grande do Sul, in the northwestern Middle Plateau and Central Depression region. The Alto Jacuí has its headwaters in the municipality of Passo Fundo and occupies an area of 16,062 km², with its rivers flowing into the Lago Guaíba (COAJU 2009). The basin's vegetation consists of Seasonal Deciduous Forest and some areas of Subtropical Ombrophilous Forest. The economy is based on agriculture (soybeans, corn, wheat and rice) and livestock. The basin is drained by the Jacuí, Jacuí-Mirim, Jacuizinho, Caixões, Ivaí and Soturno rivers (SEMA 2010). The Jacuí River is the main tributary of the basin and is responsible for 85% of the waters forming the Lago Guaíba (FEPAM 2011).

Thus, this study was conducted in 10 streams (Figures 1 and 2-11) in northwestern Rio Grande do Sul, which correspond to the Alto Jacuí sub-basin (Table 1). All streams flow into the Jacuí River, which is one of the main tributaries of the Laguna dos Patos system.

Data collection

Fish samples were collected under authorization number 34940 from register number 3196382 of the Instituto Chico Mendes de Conservação da Biodiversidade (ICMBio). This study was approved by the Ethics Committee on Animal Use of the Universidade Federal do Rio Grande do Sul (permit number 24434) and was conducted in accordance with protocols in their ethical and methodological aspects for the use of fish.

The fish were collected in June, August, October and December 2012, and in February, April and June 2013. Each sampling event lasted four days. For the sampling, we used electrofishing with three stages of 30 min each, in stretches of 50 m per sampled stream. After sampling, fish were euthanized with 10% eugenol (Vidal et al. 2008; Lucena et al. 2013a), fixed in 10% formalin and transferred to 70% alcohol for conservation. The taxonomic identification was carried out in the laboratory using Rodriguez and Reis (2008), Bertaco and Lucena (2010), Ferrer and Malabarba (2013), Lucena et al. (2013b), Lucena and Soares (2016) and additional literature cited herein. Classification and nomenclature follow Reis et al. (2003), with additional changes made by Thomaz et al. (2015) for Characidae. The nomenclature for Cichlidae followed the new classification of bony fishes proposed by Betancur et al. (2013) that includes this family in the order Cichliformes. The voucher specimens were deposited in the fish collection of the Departamento de Zoologia at the Universidade Federal do Rio Grande do Sul (UFRGS; Table 2).
Five species were identified only to genus level and correspond to undescribed species: Australoheros sp. (Rícan and Kullander 2008), Bryconamericus sp. b (Silva 1998), Heptapterus sp. (Bockmann 1998), Ituglanis sp. (J. Ferrer, personal communication), and Bryconamericus sp. a, which also seems to be a new species but could not be described because it may just be a variation (in color and body shape) of Bryconamericus iheringii. According to Bonato and Ferrer (2013), the individuals of Phalloceros spiloura Lucinda, 2008 collected in the Alto Jacuí sub-basin during the present study represent the first record of this species for the Laguna dos Patos system.
The highest species richness was found in RP, RT, RC and RQ, with 28, 27, 25 and 21 species, respectively. RM and RSC showed the lowest species richness, with only 15 and 14 sampled species.
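As a minimal illustration of how per-stream richness figures like these are derived from raw capture records, the sketch below (with invented stream codes and species names, not the study's actual data) counts distinct species per stream:

```python
# Hypothetical capture records as (stream_code, species) pairs; in the study
# these would come from the full Table 2 dataset.
from collections import defaultdict

records = [
    ("RP", "Astyanax xiru"), ("RP", "Phalloceros spiloura"),
    ("RT", "Astyanax xiru"), ("RT", "Heptapterus sp."),
    ("RM", "Astyanax xiru"),
]

def richness_per_stream(records):
    """Return {stream_code: number of distinct species sampled}."""
    species_by_stream = defaultdict(set)
    for stream, species in records:
        species_by_stream[stream].add(species)
    return {stream: len(spp) for stream, spp in species_by_stream.items()}

print(richness_per_stream(records))  # {'RP': 2, 'RT': 2, 'RM': 1}
```

Using a set per stream means repeated captures of the same species do not inflate the richness count.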
DISCUSSION
According to Pelayo-Villamil et al. (2014), an average of 240.2 species of fishes were described per year in the last ten years worldwide. The five new species uncovered by this inventory (Australoheros sp., Heptapterus sp., Bryconamericus sp. a and b, and Ituglanis sp.) support the importance of this type of study. In addition, inventories are important in extending the distributional range of some species, such as Phalloceros spiloura, which was previously known only from the coastal drainages of the states of Rio Grande do Sul and Santa Catarina and the Iguaçu and Uruguay river basins and, as part of this study, was found in the Alto Jacuí sub-basin, representing a new record for the Laguna dos Patos system (Bonato and Ferrer 2013). Malabarba (1989) registered 25 of the 42 species sampled in this study for the Laguna dos Patos system. The most recent literature indicates a total of 160 species for the Laguna dos Patos system (Malabarba et al. 2009), including 35 species that were new and not yet described in 2009. Of these 35 species listed by Malabarba et al. (2009), we sampled five that were described in recent years (Oligosarcus jacuiensis Menezes & Ribeiro, 2010; Hisonotus brunneus Carvalho & Reis, 2011; Astyanax procerus Lucena, Castro & Bertaco, 2013; Astyanax xiru Lucena, Castro & Bertaco, 2013; Trichomycterus poikilos Ferrer & Malabarba, 2013), indicating that a representative amount of the ichthyofauna of the upper Jacuí River was described in recent years. There are no comparable studies for the Alto Jacuí sub-basin; we can only make comparisons with other basins belonging to the Laguna dos Patos system. For stream environments, Bozzeti and Schulz (2004) found 57 species in the Gravataí and Sinos sub-basins, Hirschmann (2009) found 55 species in the Forqueta sub-basin (Taquari-Antas basin), and Becker et al. (2013) found 119 species for the Taquari-Antas basin; with respect to the last study, the high number of captured species is likely due to their larger sampling of 519 sites.
The number of species found in this study is lower compared to those cited by other studies, mainly because it was conducted in streams of a headwater region and many of the streams (of lower species richness) are first-order. The high occurrence of the orders Characiformes, Cichliformes and Siluriformes is also well documented for the Laguna dos Patos system and for the Neotropical region (Castro 1999; Garcia et al. 2003; Buckup et al. 2007; Lévêque et al. 2008; Costa and Schulz 2010). Headwater streams do not have an exclusive fish fauna but rather species that form populations residing in streams and that also occur in larger water bodies with different characteristics (Castro 1999). The fish fauna of streams is based on small species and, according to Castro (1999), this seems to be the only general pattern with real diagnostic value for stream environments. In this study, the streams with lower species richness are the first-order streams, which have habitats of lower complexity, such as the RD, RV and AA streams (see Table 1 for stream codes). This situation is expected in accordance with the River Continuum Theory (Vannote et al. 1980). Thus, the larger streams, with greater width between banks, areas with and without shading, and more heterogeneous environments, showed the highest species richness (Ferreira and Casatti 2006; Súarez and Petrere-Junior 2005), as occurred in the RT, RP and RC streams.
Despite the fact that we did not evaluate the degree of anthropic influence in the sampled streams, all streams sampled here showed some kind of human interference. Most streams are very close to agricultural areas with the presence of dairy cattle or pig livestock. In stream RP there was a considerable amount of waste coming from homes, and sometimes we found dead animals within the stream. Probably the residents of the region slaughter animals for their own consumption and discard the remains in the river. However, this stream had considerable marginal vegetation and a heterogeneous environment with alternating pools and areas of rapids, which led to the high species richness observed.
Due to a lack of data for streams of the studied sub-basin, it is difficult to say whether the number of species found is representative of the streams belonging to the Laguna dos Patos system. The checklist showed 42 species, representing 26% of the species mentioned for the Laguna dos Patos system. This study is an important record for the region of the Alto Jacuí sub-basin due to the lack of extensive collecting effort in the region. The expansion of the sampled streams in the Jacuí River basin may increase the records of species and information about endemic species.

… the species figures; the Orlandi and Bonato families for help and support in field work; and Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES, Proc. 1104786, to KOB) for financial support. We also thank Adam J. Taylor for reviewing the English of this manuscript.
Figure 1. Sampling streams in the Alto Jacuí sub-basin. For stream codes see Table 1.
Table 1. Geographic coordinates, elevation and localization of the sampled streams and their respective codes in the Alto Jacuí sub-basin.
Table 2. List of fish species collected at each sampled stream in the Alto Jacuí sub-basin. See Table 1 for stream names. Asterisk indicates species endemic to the Laguna dos Patos system.
Quantification of domestic cat hepadnavirus DNA in various body fluid specimens of cats: the potential viral shedding routes
Domestic cat hepadnavirus (DCH) belongs to the Hepadnaviridae family together with human hepatitis B virus (HBV) that remains to be a major health problem worldwide. The transmission of HBV infectious virion has been one of the essential factors that contribute to high number of HBV infection in humans. It has been long known that various body fluid specimens of human with chronic HBV infection contain HBV DNA and demonstrated to be infectious. In contrast to this knowledge, the detection of DCH in various body fluid specimens of cats, has not been reported. This study explored the detection of DCH DNA in various body fluid specimens of cats by quantitative polymerase chain reaction (qPCR) and investigated whether the detection of DCH DNA from broader routes was correlated with any genomic diversity by phylogenetic analysis. A total of 1,209 body fluid specimens were included, and DCH DNA was detected not only in 4.70% (25/532) of blood samples; but also in 12.5% (1/8), 1.14% (1/88), 2.54% (10/394), and 1.65% (3/182) of auricular swab (AS), nasal swab (NS), oral swab (OS), and rectal swab (RS) specimens, respectively. Furthermore, the level of DCH DNA detected in the blood was significantly correlated with DCH DNA detection in OS (P = 0.02) and RS (P = 0.04) specimens. Genomic analysis revealed that there was no notable genomic diversity within the complete genome sequences obtained in this study. In conclusion, this study highlighted the presence of DCH DNA in various body fluid specimens of cats, and the potential role of these specimens in DCH horizontal transmission within the cat population warrants further studies.
Introduction
Human hepatitis B virus (HBV), a relaxed circular DNA virus, belongs to the Hepadnaviridae family and remains a major health problem worldwide, as it is responsible for more than 800,000 deaths each year (1, 2). Despite the availability of an HBV vaccine, it is estimated that 360 million people suffer from chronic HBV infection, with increased risk of life-threatening diseases such as cirrhosis and hepatocellular carcinoma (HCC) (3). In 2018, domestic cat hepadnavirus (DCH), an HBV relative, was detected in cats, and since then, several studies have reported the global detection of DCH in the blood and liver tissue of cats, with prevalence ranging from 0.78 to 18.5% (4-10). A concern regarding the possible role of DCH in the development of chronic liver disease in cats has been raised and investigated in several studies; strikingly, an early study revealed that DCH was detected and localized only in the liver tissue of cats with chronic hepatitis and HCC, with no detection in liver biopsies of healthy cats or cats with cholangitis and biliary carcinoma (5). Moreover, recent studies agreed that DCH was strongly associated with increased liver enzyme activities suggestive of hepatitis (7, 11), even after ruling out other viral infections that commonly cause hepatopathy in cats (9). The possible role of DCH in the development of hepatic disturbances, chronic hepatitis and HCC in cats, therefore, cannot be neglected.
The transmission of infectious virions, specifically from chronically infected individuals who are asymptomatic, serves as a challenge that contributes to the high HBV infection rate in humans. HBV is mainly transmitted horizontally by blood-related infection and sexual contact, and vertically from the infected mother to her newborns (12). These days, however, there is growing evidence of HBV DNA detection from other sources, such as the saliva, sweat, tears, feces, and cerumen of infected individuals, with saliva and tears having been demonstrated to be infectious in human or animal models (12-15). In contrast to the growing knowledge about HBV transmission routes in humans, the detection of DCH in various body fluid specimens of cats has rarely been reported. Capozza et al. investigated the detection of DCH from broader routes, including oral, conjunctival, preputial, and rectal swabs from a cat, but no positive result was obtained, despite the prolonged DCH viremia status of the cat, which persisted for 11 months (16). With the increasing trend of multi-cat settings, where close contact and fighting are inevitable, it is crucial to assess the potential transmission of DCH from various body fluid specimens other than blood. Therefore, the aim of this study was to explore the presence of DCH in the blood and other body fluid specimens, including auricular swab (AS), nasal swab (NS), oral swab (OS), rectal swab (RS), and urine, obtained from cats from various provinces in Thailand, and to determine whether the presence of DCH DNA in various body fluids is correlated with any genotypic difference using full-length genomic characterization.
Sample collection
To investigate the presence of DCH in various clinical samples, we collected several body fluid specimens, including blood, AS, NS, OS, RS, and urine, from animal hospitals and cat shelters in Thailand. Clinical samples were collected from cats that underwent examination for a general health checkup or were appointed to receive chemotherapy or continuous treatment for chronic diseases. All specimens were collected with the prior owner's consent. The details of specimens collected from each cat are presented in Supplementary material S1.
Blood samples were collected in EDTA blood collection tubes, whereas AS, NS, OS, and RS specimens were collected using sterile cotton swabs and immersed in 0.6 mL of sterile 1× PBS in 1.5-mL Eppendorf tubes. Urine was collected noninvasively by the free-capture method. All specimens were subsequently stored at −80°C.

In addition, longitudinal detection was performed for five cats that showed a positive result in the initial DCH screening, namely PK-71, PK-74, PK-83, PK-91, and CU-38. The longitudinal study was done with convenience-based sample collection by practitioners, and the duration of sample collection ranged from 13 to 111 days, depending on each cat's follow-up examination.
Viral nucleic acid extraction and molecular screening for DCH
Approximately 200 µL of each specimen was subjected to viral nucleic acid extraction using the IndiSpin® Pathogen Kit (QIAGEN GmbH, Germany) according to the manufacturer's protocol. The quality and quantity of the extracted nucleic acid were measured using a spectrophotometer (Nabi-UV/Vis Nano Spectrophotometer, Korea) at a 260/280 absorbance ratio. The extracted nucleic acid was then stored at −80°C until further use for viral molecular screening. DCH screening was performed by a qPCR assay targeting the conserved overlapping region of the P- and S-ORFs as previously described (11). Briefly, the PCR master mix was prepared using the KAPA SYBR® FAST qPCR Master Mix kit (Kapa Biosystems, Sigma-Aldrich, South Africa) with the addition of a set of primers: DCH-qF (5′-CGTCATCATGGGCTTTAGGAA-3′) and DCH-qR (5′-TCCATATAAGCAAACACCATACAAT-3′). For the amplification of DCH DNA, the thermocycling conditions were set in the QIAGEN real-time PCR cycler Rotor-Gene Q (Qiagen, Germany) with Taq polymerase activation at 95°C for 3 min, followed by 35 cycles of denaturation at 95°C for 10 s and annealing at 60°C for 20 s. Amplification was then followed by increasing the thermal cycler temperature from 70 to 95°C to acquire the melting curve for each amplicon. Samples that showed an amplification curve above the threshold with a single melting peak ranging from 80 to 85°C, in accordance with the standard plasmids, were considered positive. The viral DNA copy number was obtained by comparing the acquired fluorescence signal from the samples with a standard plasmid containing the L-gene fragment of DCH (TOPO™ TA Cloning™ Kit with One Shot™ TOP10 Chemically Competent E. coli; Invitrogen, USA), designed using the available DCH sequences in GenBank (https://www.ncbi.nlm.nih.gov/genbank/) as previously described (6).
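The copy-number step can be sketched as a generic standard-curve conversion. This is not the Rotor-Gene software's actual calculation: the slope, intercept, and volume factors below are hypothetical values chosen only to illustrate how a Ct value maps to log copies/mL against a plasmid dilution series.

```python
import math

# Hypothetical standard-curve parameters from a dilution series of the
# standard plasmid (assumptions, not the study's fitted values).
SLOPE = -3.32       # Ct change per 10-fold dilution (~100% PCR efficiency)
INTERCEPT = 38.0    # Ct expected for 1 copy per reaction

def ct_to_copies(ct):
    """Convert a sample Ct value to estimated copies per reaction."""
    return 10 ** ((ct - INTERCEPT) / SLOPE)

def copies_to_log_copies_per_ml(copies, reaction_volume_ml=0.005,
                                elution_scale=1.0):
    """Express the result as log10 copies/mL of the original specimen.
    The template volume and elution scale factor are illustrative."""
    return math.log10(copies * elution_scale / reaction_volume_ml)

ct = 21.4
copies = ct_to_copies(ct)
print(round(copies_to_log_copies_per_ml(copies), 1))  # ~7.3 log copies/mL
```

In practice the instrument software fits the slope and intercept from the standard dilutions on each run; the conversion itself is just this inverse of the linear Ct-versus-log-copies relationship.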
Statistical analysis
All statistical analyses in this study were conducted using SAS statistical software version 9.4 (SAS Inst., Cary, NC, USA).
Initially, the MEANS procedure was performed to analyze the descriptive statistical data, including the mean, standard deviation (SD), minimum, and maximum DCH DNA level presented in each group of specimens. For cats with multiple DCH detections during the longitudinal sample collection and testing, only the single time point at which the level of DCH DNA in the blood was highest was included for statistical analysis. Multiple analyses of variance were carried out using a general linear model procedure to compare the level of DCH DNA among groups of specimens.
In addition, 14 DCH viremic cats were further classified based on the DCH DNA level present in the blood, according to the correlation between HBV DNA level in the blood and HBV DNA positivity in other specimens as previously described (15). Only 14 of the 25 DCH viremic cats could be included in this analysis due to the lack of other body fluid specimens available from the remaining 11 cases. In this study, we divided the tested cats into three different groups: (1) a low viral load group, with a DCH DNA level < 5 log copies (LC)/mL; (2) a high viral load group, with a DCH DNA level between 5 and 7 LC/mL; and (3) a very high viral load group, with a DCH DNA level > 7 LC/mL. The association between the DCH DNA level in the blood and detection in other body fluid specimens was subsequently analyzed using Fisher's exact test, and relative risk (RR) was estimated using the FREQ procedure. A P-value < 0.05 was considered statistically significant for all tests.
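The analyses above were run in SAS; as an illustrative stand-in, the sketch below implements a two-sided Fisher's exact test and the relative-risk calculation for a hypothetical 2×2 table of blood viral load group versus oral swab positivity. The counts are invented so that the relative risk works out to 3.5, purely as a shape example.

```python
from math import comb

def fisher_exact_two_sided(table):
    """Two-sided Fisher's exact test on a 2x2 table [[a, b], [c, d]]:
    sum the hypergeometric probabilities of all tables as or less
    likely than the observed one (fixed margins)."""
    (a, b), (c, d) = table
    row1, row2, col1 = a + b, c + d, a + c
    n = row1 + row2

    def p_table(x):  # probability that cell (0, 0) equals x
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = p_table(a)
    lo, hi = max(0, col1 - row2), min(row1, col1)
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs + 1e-12)

def relative_risk(exposed, unexposed):
    """RR = risk of positivity in exposed group / risk in unexposed group."""
    return (exposed[0] / sum(exposed)) / (unexposed[0] / sum(unexposed))

# Hypothetical counts: [OS positive, OS negative] per viral load group.
very_high = [7, 0]
high = [2, 5]
p = fisher_exact_two_sided([very_high, high])
print(round(p, 3), relative_risk(very_high, high))
```

The tolerance in the `<=` comparison guards against floating-point ties when summing the two tails, which is the standard convention for the two-sided test.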
Complete genome sequencing and phylogenetic analysis
Genomic characterization and analysis were conducted to compare the complete genome sequences obtained from blood with those obtained from other body fluid specimens. We also compared the complete genome sequences obtained from cats for which DCH DNA was detected in only a single specimen with those obtained from cats for which DCH DNA was detected in multiple specimens. A total of nine complete genome sequences were retrieved from seven blood samples, one OS sample, and one RS sample. These cases were selected based on the highest viral loads present in the specimens and represented cases for which DCH DNA was detected by only one route and those with DCH DNA detected by multiple routes. Conventional PCR with the addition of three different sets of primers was employed to retrieve the DCH complete genome as previously described (4, 6). Retrieved sequences were then aligned and compared with the reference DCH sequences obtained from GenBank using the freely available Molecular Evolutionary Genetic Analysis (MEGA) 7.0 software (http://www.megasoftware.net/). All sequences were then subjected to the construction of a maximum likelihood phylogenetic tree using bootstrap analysis with 1,000 replications. The HKY+G+I model was implemented as the best model based on the lowest Bayesian information criterion number from the best-fit model algorithm in the MEGA 7.0 software. A set of primers for DCH complete genome amplification and the previously described DCH sequences used for constructing the phylogenetic tree are presented in Supplementary Tables S2, S3.
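The sequence comparison itself was done in MEGA 7.0; as a toy sketch of the underlying pairwise nucleotide identity computation (the metric behind statements like "96-100% identity"), the function below scores two pre-aligned sequences, skipping gap positions. The sequences are invented.

```python
def pairwise_identity(seq_a, seq_b):
    """Percent identity over aligned positions where neither sequence
    has a gap ('-'). Assumes the sequences are already aligned and
    therefore of equal length."""
    assert len(seq_a) == len(seq_b), "sequences must be aligned"
    compared = matches = 0
    for x, y in zip(seq_a.upper(), seq_b.upper()):
        if x == '-' or y == '-':
            continue  # ignore gapped columns
        compared += 1
        matches += (x == y)
    return 100.0 * matches / compared

print(pairwise_identity("ATGCATGCAT", "ATGCATGAAT"))  # 90.0
```

Alignment tools (and MEGA itself) offer several conventions for handling gaps and ambiguous bases; this sketch uses the simplest one, pairwise gap exclusion.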
DCH DNA quantification
The level of DCH DNA detected in the blood ranged from 3.6 to 9.9 LC/mL, with a mean (±SD) of 7.4 ± 1.7 LC/mL. In the OS and RS specimens, DCH DNA was detected with copy numbers ranging from 4.2 to 9.9 LC/mL and 6.2 to 6.9 LC/mL, respectively, whereas the means (±SD) were 6.7 ± 1.9 LC/mL and 6.5 ± 0.3 LC/mL. As for the AS and NS specimens, a positive result was found in only one sample of each specimen type, with DNA levels of 6.8 and 9.2 LC/mL, respectively (Figure 1). The multiple analyses of variance employed in this study revealed no significant difference in DCH DNA level between the blood, OS, RS, NS, and AS specimens.
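As a minimal Python equivalent of this descriptive summary (the actual analysis used the SAS MEANS procedure, and the values below are invented, not the study's data):

```python
# Hypothetical per-cat DCH DNA levels in log10 copies/mL.
from statistics import mean, stdev

blood_lc_per_ml = [3.6, 6.8, 7.2, 8.1, 9.9]

summary = {
    "mean": round(mean(blood_lc_per_ml), 1),
    "sd": round(stdev(blood_lc_per_ml), 1),   # sample SD (n - 1 denominator)
    "min": min(blood_lc_per_ml),
    "max": max(blood_lc_per_ml),
}
print(summary)
```

Note that the summary is computed on log-transformed copy numbers, matching how viral loads are reported throughout the paper.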
Within the 25 DCH viremic cats, various body fluid specimens in addition to blood were available for only 14 cats, which were grouped evenly into the high viral load (7/14; 50%) and very high viral load (7/14; 50%) groups. Strikingly, a level of DCH DNA in the blood higher than 7 LC/mL (i.e., the very high viral load group) was significantly correlated with DCH detection in the OS and RS specimens (P = 0.02; RR = 3.5 and P = 0.04; RR = 4, respectively), whereas there was no significant correlation between the level of DCH DNA in the blood and the presence of DCH DNA in the AS specimens (Table 2). The NS specimen was excluded from the statistical analysis due to the lack of sample availability.
Longitudinal detection of DCH
A total of 4/5 cats included in the longitudinal DCH detection study (PK-71, PK-74, PK-83, and PK-91) showed viremia throughout the collection period (13-111 days), whereas one cat (CU-38) showed DCH viremia only at the first sampling (day 0) and was negative at the next sampling (day 90). The DCH DNA level detected in the blood of all five cats during the longitudinal study ranged from 4.2 to 9.9 LC/mL. Broader detection of DCH in specimens other than blood was observed in 3/5 cats (PK-71, PK-83, and PK-91), whereas no broader detection was observed in cats PK-74 and CU-38.
Furthermore, cat PK-71 showed DCH positivity in the OS on day 0 with a viral copy number of 4.8 LC/mL and was negative at the next sampling (day 23). On day 57, however, recurrent DCH positivity in the OS was observed, with a DCH DNA level of 5.5 LC/mL, and on day 111 the OS specimen tested negative. Other than the OS specimen, DCH DNA was also detected in the RS on day 57, with a viral copy number of 6.2 LC/mL. DCH detection was negative in the AS at three sample collection times (day 23, day 57, and day 111), and there were no NS or urine specimens available from cat PK-71.
Cat PK-83 was screened twice, and only blood and OS specimens were available at the first screening (day 0). Other than blood, DCH positivity was observed in the OS specimens throughout the sampling collection period. The viral copy numbers detected in the OS specimens were 6.6 and 6.4 LC/mL on day 0 and day 13, respectively. DCH DNA was also detected in the RS specimen on day 13 at a level of 6.5 LC/mL, and no detection was found in the AS and urine specimens. An NS specimen was not available during the longitudinal sample collection of cat PK-83. Cat PK-91 was screened three times, on day 0, day 7, and day 55. DCH DNA was not detected in body fluid specimens other than blood on day 0 and day 7. In contrast, all available specimens, including blood, AS, OS, and RS, showed DCH positivity on day 55. The viral copy numbers detected in the AS, OS, and RS specimens on day 55 were 6.8, 6.4, and 6.5 LC/mL, respectively. No NS specimen was available from cat PK-91 throughout the sample collection period. The overall results of longitudinal DCH detection in this study are presented in Table 3. The sequences retrieved in this study showed a high percentage of nucleotide identity (96-100%) among themselves, and no distinctive sequence was observed (data not shown).
Complete genome and phylogenetic analysis
The constructed phylogenetic tree revealed that the nine sequences from this study were grouped into two different lineages (Figure 2). Six sequences (OQ362107, OQ362108, OQ362109, OQ362111, OQ362113, and OQ362112) shared the same lineage with previous DCH strains from Thailand (Accession Nos. MT506042.1, MT506043.1, MT506044.1, and MT506045.1). In contrast, the three other sequences (OQ362106, OQ362110, and OQ362114) were grouped into a different lineage together with some DCH strains from Thailand (Accession Nos. MT506047.1 and MT506041.1) and DCH strains from Malaysia, Australia, Japan, Italy, and Hong Kong (Accession Nos. MK902920.1, MH307930.1, LC668427.1, OK574326.1, and OP643851.1-OP643862.1, respectively). In addition, the phylogram demonstrated that the complete nucleotide sequences obtained from cases in which DCH DNA was detected only in the blood (Case Nos. KB-18 and PK-74; Accession Nos. OQ362110 and OQ362111) were not distinct from those of cases that expressed DCH DNA in the blood and other specimens.

FIGURE 2: Phylogenetic tree constructed from the complete nucleotide sequences of DCH retrieved in this study. The maximum likelihood phylogenetic tree was constructed with the addition of available DCH sequences in GenBank and demonstrated that the nine sequences obtained in this study (labeled) were grouped into two different lineages.
Discussion
Hepadnaviruses have long been described as associated with chronic hepatic diseases in some species, including humans and woodchucks (17, 18). In 2018, a novel hepadnavirus was first discovered in cats and tentatively named DCH (4). To date, several studies have been conducted to investigate the pathogenic potential of DCH in feline species, and an association between DCH infection and liver disturbances in cats has been suggested (5, 6, 9, 11). The potential role of DCH in domestic cats, therefore, cannot be neglected.
In humans, horizontal transmission plays a contributing role in the increasing number of HBV infections, and concern about HBV transmissibility from various body fluid specimens other than blood has been raised in the last decade (12, 19). This concern is especially highlighted in settings where close contact with chronically infected patients who show no apparent clinical signs is inevitable, such as in children's daycare centers or primary schools (12, 13). Correspondingly, the detection of HBV DNA in various body fluid specimens other than blood, including cerumen, sweat, nasopharyngeal swabs, urine, saliva, and tears, has been reported in numerous studies (3, 12, 14, 15, 20, 21). In contrast to the broad evidence demonstrating the detection of HBV DNA in various body fluid specimens from infected patients, to date there has been no report on the detection of DCH DNA in body fluid specimens other than blood in cats, although such an investigation was attempted in an earlier study (16). In this study, therefore, we investigated the detection of DCH DNA in blood and other body fluid specimens of cats. Notably, our results showed that DCH DNA could be detected in various body fluid specimens, including the blood, AS, NS, OS, and RS, whereas no positive detection was observed in urine. The low number of available urine samples might explain the absence of DCH positivity in urine specimens in this study; however, the detection of other hepadnaviruses in urine specimens has been documented in several studies (22, 23).
The prevalence of DCH DNA detected in clinical samples in this study ranged from 1.14 to 4.70%, with the lowest prevalence observed in the NS specimens and the highest prevalence obtained from blood. For HBV, broader detection of viral DNA in various body fluid specimens has been evidenced in several studies, with saliva and semen first experimentally demonstrated to be infectious (20). Later, an in vivo experiment showed that the tears of chronically infected patients are also considered to be highly infectious (12). Given the risk of horizontal transmission of HBV through routes other than blood, together with the increasing trend of multi-cat settings where fighting and close contact between cats are inevitable, the detection of DCH DNA in various specimens in this study is intriguing. To what extent the detectable DCH DNA from various routes plays a role in infectivity and horizontal transmission in cat populations, however, requires further experimental studies.
The infectious potential of hepadnaviruses is also directly correlated with the viral load expressed in the specimens (12). In this study, the mean DCH viral load detected in the blood, AS, NS, OS, and RS was >6 LC/mL, whereas for HBV it has been described that body fluids containing HBV DNA > 5 LC/mL have the potential to be transmission vehicles, especially in highly endemic areas (12, 14, 15). The infectious risk of body fluids such as saliva and nasopharyngeal swab specimens containing high titers of HBV DNA has also been highlighted in multi-individual settings, such as childcare vicinities where horizontal transmission among children has been a concern (24). Although the finding that DCH DNA was detected at high titers in various body fluid specimens is intriguing, the clinical implications are yet to be explained, because the DCH prevalence in this study is also lower compared with that observed in previous studies in Thailand (6).
In this study, it must be noted that although the sample collection procedures were performed with extra caution to prevent any trauma, the presence of blood from previously existing trauma and/or microtrauma could not be ruled out. However, during longitudinal sample testing, the OS specimens collected from cat PK-71 revealed recurrent DCH DNA positivity (on the fourth testing) together with an increased DNA level in the blood, and were then negative again on the next testing with a decreased DNA level in the blood (Table 3). Therefore, the presence of DCH DNA in body fluid specimens other than blood in this case might indicate hematogenous spreading of DCH DNA. This result agrees with evidence in humans showing that HBV DNA can be independently present in cerumen specimens without any blood contamination, as the authors ruled out the presence of blood in the collected specimens using a specific Meyer test (21).
It is also worth noting that of the 14 cats in which various specimens in addition to blood were available, the detection of DCH DNA in the OS, RS, NS, and AS was only observed in the group with more than 7 LC/mL in the blood, whereas no detection was observed in the group where DCH DNA in the blood ranged between 5 and 7 LC/mL or less. A previous study investigating the presence of DCH DNA in oral, conjunctival, preputial, and rectal swabs also revealed negative results in all specimens collected from cats with DCH DNA levels in the blood ranging from 5.2 to 6.3 LC/mL (16). Correspondingly, there were positive correlations between the level of DCH DNA in the blood and detection in the OS and RS specimens. Statistically, this study revealed that DCH-viremic cats with DCH DNA levels in the blood higher than 7 LC/mL are 3.5 and 4 times more likely to express DCH DNA in OS and RS specimens, respectively. The results of this study provide supporting evidence that mirrors the characteristics of HBV, where a correlation between the presence of high levels of HBV DNA in the serum and the presence of HBV DNA in other body fluids such as cerumen, saliva, nasopharyngeal swabs, tears, and sweat has been documented in chronically infected humans (12, 21, 24-26). The increasing level of HBV DNA in the blood of chronically infected patients is commonly seen during the HBV reactivation period due to various underlying causes that promote immunosuppressive conditions, such as prior human immunodeficiency virus infection, chronic diseases, therapeutic intervention and/or chemotherapy, and inadequate host immune response (27). In this study, a direct correlation between the clinical presentations and immunosuppression status of the cats and DCH DNA levels in the serum could not be drawn due to the limited history at the time of sample collection, and this warrants further investigation.
In contrast to the OS and RS, there was no positive correlation between the viral load in the blood and the detection of DCH DNA in the AS specimens. This result might represent the real number of detections in these specimens, which was very low, and/or could be due to the limited number of samples. As for the association between DCH viral load in the blood and the presence of DCH DNA in NS specimens, no correlation could be determined due to the lack of sample availability. Because this is the only study describing DCH DNA detection in various body fluid specimens other than blood in cats, broader epidemiological studies are needed to further elucidate the correlation between the presence of high viral titers in the blood and the expression of DCH DNA in other body fluids in cats.
In addition, this study revealed that four out of five cats included in the longitudinal study remained positive for DCH DNA detection in the blood over the whole timeline of sample collection, ranging from 1 to 4 months. Longitudinal detection of DCH DNA in the blood was also documented in a previous study by Capozza et al., who detected DCH DNA in the blood of a DCH-positive cat for as long as 11 months (16). HBV DNA detection in serum has clinical importance in humans, including the diagnosis of reactivation in chronic infection, as well as serving as an independent risk predictor of HCC development when coupled with observation of alanine aminotransferase and aspartate aminotransferase (28, 29).
Finally, the phylogenetic analysis in this study revealed that all complete genome sequences obtained from blood, OS, and RS specimens showed a high percentage of identity among themselves. Moreover, there was no significant genomic distance between the complete genome sequences obtained from cases where DCH DNA was present only in the blood and those from cases with DCH detection in blood and other routes. This result suggests that genomic diversity is unlikely to play a role in DCH DNA positivity in other body fluid specimens. The result of this study supports a previous finding in which HBV DNA was detected in saliva and nasopharyngeal swabs of children with chronic infection, but no HBV genotypic difference was present in these patients (24). Although different HBV genotypes promote different disease progression, they are unlikely to affect the expression of viral DNA in body fluid specimens (30).
In conclusion, this study revealed the detection of DCH DNA in various body fluid specimens of cats, including the blood, AS, NS, OS, and RS. The risk of cats expressing DCH DNA in the OS and RS was significantly higher when the DCH DNA level in the blood was >7 LC/mL. The relatively high level of DCH DNA in various specimens in addition to the blood might represent true expression of the viral DNA or the presence of blood due to trauma during or prior to sample collection. However, in accordance with the direct correlation between the DNA level in the specimens and transmissibility potential, the infectious risk from various specimens, if any, cannot be neglected. The detection of DCH DNA in broader routes was not associated with any genomic diversity. To what extent the expression of DCH DNA in various body fluid specimens of cats plays a role in horizontal transmission in cat populations warrants further investigation.
FIGURE 1: DCH DNA levels detected in various specimens. The levels of DCH DNA detected in blood, oral swab, rectal swab, nasal swab, and auricular swab specimens showed no significant difference. In this figure, the means and SDs are indicated by horizontal and vertical bars, respectively.
TABLE 1: DCH detection in various body fluid specimens, with details of sample availability.

In detail, the clinical samples consisted of blood specimens collected from 532 cats, OS from 394 cats, RS from 182 cats, NS from 88 cats, AS from 8 cats, and urine from 5 cats. A total of 30/921 (3.26%) cats showed a positive result for DCH DNA detection by qPCR in at least one body fluid specimen. Within these 30 DCH-positive cats, 20 cats showed a positive result only in the blood (in 11/20 cats, only blood specimens were available; in the other 9/20 cats, various body fluid specimens in addition to the blood were available but showed negative results). In addition to these 20 cases that were positive only in the blood, four cats revealed positive detection only in the OS (blood specimens were not available in these cases), and six cats showed a positive result in more than one body fluid specimen. Of the six cats with DCH detection in more than one specimen, two cats (Case Nos. PK-95 and PK-98) revealed a positive result in the blood and OS; two other cats (Case Nos. PK-71 and PK-83) showed a positive result in the blood, OS, and RS; one cat (Case No. PK-91) revealed a positive result in the blood, OS, RS, and AS; and one cat (Case No. S-191) showed a positive result in the OS and NS (a blood specimen was not available). At the clinical specimen level, the qPCR for DCH detection employed in this study revealed a positive result in 40/1,209 (3.31%) of the collected samples. Specifically, DCH DNA was detected in 25/532 (4.70%), 10/394 (2.54%), 3/182 (1.65%), 1/88 (1.14%), and 1/8 (12.5%) of the blood, OS, RS, NS, and AS specimens, respectively. No DCH-positive result was found in the urine specimens. The overall result of DCH detection in cats and the details of sample availability for every DCH-positive cat are presented in Table 1.
a: Cases where specimens in addition to the blood were available. b: Cases where the DCH DNA level in the blood was >7 log copies/mL. c: Total number of positive cases in each specimen.
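The specimen-level prevalences quoted in the table text follow directly from the reported positive/total counts; a quick check reproduces each percentage:

```python
# positives and totals per specimen type, as reported in the text
counts = {
    "blood": (25, 532),
    "OS": (10, 394),
    "RS": (3, 182),
    "NS": (1, 88),
    "AS": (1, 8),
}

# prevalence = positives / total, expressed as a percentage
for specimen, (pos, total) in counts.items():
    print(f"{specimen}: {100 * pos / total:.2f}%")
```

The output matches the reported values (blood 4.70%, OS 2.54%, RS 1.65%, NS 1.14%, AS 12.50%).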
TABLE 2: Associations between the DCH DNA level in blood and detection in other specimens.
TABLE 3: Longitudinal detection of DCH DNA from five different cases.
New simulation and analysis fiber Bragg grating: narrow bandwidth without side lobes
The purpose of this paper is to simulate and analyze the spectral characteristics of the fiber Bragg grating (FBG) to obtain a narrow bandwidth and minimized side lobes in the reflectivity. Fiber Bragg gratings have brought about a major revolution in telecommunication systems. Fiber Bragg gratings are needed when optical fiber amplifiers and filters are used. They can be used as band-reject or band-pass filters for optical devices. The model equations of the cascaded uniform fiber Bragg grating and of different cascaded apodization functions, such as the Hamming apodized fiber Bragg grating, Barthan apodized fiber Bragg grating, Nuttal apodized fiber Bragg grating, Sinc apodized fiber Bragg grating, and the proposed apodized fiber Bragg grating, are numerically handled and processed via specially cast software to achieve maximum reflectivity and a narrow bandwidth without side lobes.
Introduction
FBGs are typically used as selective wavelength reflectors. Fiber Bragg gratings are spectral filters based on the Bragg reflection principle. They usually reflect light within a narrow wavelength band and transmit all other wavelengths. When light propagates through periodic alternating regions of higher and lower refractive index, it is partially reflected at each interface between those regions [1]. The coupling strength, and hence the reflection and transmission spectra, are affected by the tilt angle, the fiber geometry, and the refractive index (RI) of the surrounding medium [2]. There are a number of parameters to which FBG spectra have been shown to be sensitive, such as changes in refractive index, bending of the fiber, grating period, excitation conditions, temperature, and the length of the fiber grating [3]. The fiber Bragg grating (FBG) is an optical device in which the refractive index changes periodically along the direction of propagation in the core of the fiber. The basic property of FBGs is that they reflect light in a narrow band centered around the Bragg wavelength. There are different FBG structures, such as uniform, chirped, apodized, tilted, and long-period gratings. When light propagates through an FBG, total reflection occurs in a narrow band of wavelengths around the Bragg wavelength, and the other wavelengths are unaffected by the Bragg grating except for some side lobes present in the reflection spectrum. These side lobes can be suppressed using the apodization technique. The reflection spectrum depends on the grating length and the strength of the refractive index modulation. The reflected wavelength also depends on temperature and strain [4]. In order to achieve high-efficiency long-range fiber connections, WDM is introduced. Dispersion is a key factor limiting the design of long-distance optical links. Several techniques have achieved effective dispersion compensation (DC). The widely used technologies are dispersion compensating fiber (DCF) and chirped fiber Bragg gratings (CFBG).
Although DCF is a bulky unit, CFBG is superior in many respects [5]. Due to their excellent multiplexing capabilities, fiber Bragg grating (FBG) sensors are particularly attractive for applications where a large number of sensors is desirable, such as industrial process control, fire detection systems, and temperature monitoring of power transformers; since an FBG sensor occupies a very narrow bandwidth, a distributed sensor array can easily be created by writing several FBG sensors on a single fiber at different locations [6]. The optical performance of all-optical switching based on a Yb3+-doped fiber Bragg grating (FBG) has been investigated for the cases of self-phase modulation (SPM) and cross-phase modulation (XPM). For the SPM case, the optical bistability of the FBG under different parameters was investigated. It was shown that the width of the hysteresis loop and the threshold switching power are strongly dependent on the fiber grating length, fiber grating detuning, and coupling constant [7]. The refractive index of the core is permanently changed. Germanium-doped silica fiber is used in the manufacture of FBGs because it is photosensitive, which means that the refractive index of the core changes through exposure to light. The amount of change depends on the intensity and duration of exposure. It also depends on the photosensitivity of the optical fiber, so the level of germanium doping should be high for high reflectivity [8]. Nazmi A Mohammed, Ayman W Elashmawy and Moustafa H Aly chose the raised-cosine profile DFB-F with grating length (L = 30 mm) and modulation depth (n_o = 10^-5); this provides a full width at half maximum (FWHM) of 0.044 nm [10]. Before 2005, most of the literature used CFBG as a narrow-band unit covering a maximum full width at half maximum (FWHM) of 2 nm of the C-band [5]. Yobani Mejía, using an optical setup that includes a square array of 3×3 holes, used nine meridional rays to measure the effective focal length of a lens; he also observed the selected meridional rays as a spot pattern on a diffuse screen. First, he generated a regular square spot pattern (reference pattern) without a lens to test, and then he generated two spot patterns in two different axial positions when the lens being tested refracted the rays. By selecting two sets of four rays of each spot pattern, he was able to measure the difference of the longitudinal (primary) spherical aberration in two positions [11].
Y Chen et al designed eight narrow-band FBGs, each with a maximum FWHM of 0.45 nm [12]. Zhi-Gang Zang and Yu-Jun Zhang proposed a new optical bistable device (OBD), which is constructed by connecting two symmetrical fiber Bragg gratings with a ytterbium-doped fiber to form a nonlinear Fabry-Perot cavity. The principle of this new OBD is described using the transfer-matrix method, and the two groups of transmitted and reflected optical bistability loops under different parameters are investigated systematically [13].
In this paper, the model equations of the cascaded uniform fiber Bragg grating and of different cascaded apodization functions are numerically handled and processed via specially cast software to achieve maximum reflectivity and a narrow bandwidth without side lobes. For better performance, proper values of the grating length and refractive index modulation must be chosen to achieve maximum reflectivity and narrow bandwidth; minimization of the side lobes is achieved by increasing the number of cascaded FBG units.
Basic model and analysis
In the present section, the basic model, governing equations, and analysis of the fiber Bragg grating are investigated to obtain maximum reflectivity and minimum bandwidth. We discuss two models of fiber Bragg grating: 1. the uniform fiber Bragg grating and 2. the apodized fiber Bragg grating.

2.1. Uniform fiber Bragg grating

2.1.1. Fiber Bragg grating structure

The basic structure of the uniform fiber Bragg grating is illustrated in figure 1 [9, 10]. As shown in figure 1, the refractive index of the core is modulated with a period Λ. When light is transmitted through a fiber containing a segment of FBG, part of the light is reflected. The reflected light has a wavelength equal to the Bragg wavelength, so it is reflected back to the input while the other wavelengths are transmitted. The term uniform means that the grating period Λ and the refractive index modulation δn are constant over the length of the grating. A grating is a device that periodically modifies the phase or the intensity of a wave reflected on, or transmitted through, it [5]. The equation relating the grating spatial periodicity and the Bragg resonance wavelength is λ_B = 2 n_eff Λ, where n_eff is the effective mode index and Λ is the grating period [11].
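As a quick worked example of the Bragg condition λ_B = 2 n_eff Λ: given a target Bragg wavelength, the required grating period follows by rearrangement (the n_eff and λ_B values below are typical single-mode-fiber assumptions, not values stated in the paper).

```python
# Bragg condition: lambda_B = 2 * n_eff * Lambda  =>  Lambda = lambda_B / (2 * n_eff)
n_eff = 1.447        # assumed effective index of a standard single-mode fiber
lambda_b = 1550e-9   # assumed C-band Bragg wavelength (m)

period = lambda_b / (2 * n_eff)
print(f"grating period = {period * 1e9:.1f} nm")  # ~535.6 nm
```

This illustrates why FBG periods are sub-micron: the period is roughly a third of the free-space Bragg wavelength.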
Theory and principle of operation
The study of the spectral characteristics of the uniform fiber Bragg grating is carried out by solving the coupled-mode equations. Coupled-mode theory is an important tool for understanding the design of fiber Bragg grating filters [9]. An FBG can be considered a weak waveguide structure, so coupled-mode theory can be used to analyze light propagation in such a weak waveguide structure. The coupled-mode equations that describe the propagation of light in an FBG can be obtained using coupled-mode theory. Coupled-mode theory was first introduced in the early 1950s for microwave devices and later applied to optical devices in the early 1970s [11].
The reflective bandwidth Δλ of a uniform FBG is defined as the wavelength separation between the first zeros of the reflectivity on either side of the peak reflection wavelength. It can be calculated from the general approximate expression for the bandwidth of the grating [12]:

Δλ = λ_B · s · sqrt[(Δn_ac / (2 n_eff))^2 + (1/N)^2]

where λ_B is the Bragg (center) wavelength, s is a parameter indicating the strength of the grating (~1 for strong gratings and ~0.5 for weak gratings), N is the number of grating planes, Δn_ac is the change in the refractive index, and n_eff is the effective refractive index. The forward-propagating light is reflected at the Bragg wavelength [13]:

λ_B = 2 n Λ

where λ_B is the Bragg wavelength (the wavelength of the reflection peak amplitude), n is the effective refractive index of the optical mode propagating along the fiber, and Λ is the period of the FBG structure. For a uniform Bragg grating formed within the core of an optical fiber with an average refractive index n_0, the refractive index profile can be expressed as [6, 8]:

n(z) = n_0 + Δn · cos(2πz / Λ)

where Δn is the amplitude of the induced refractive index perturbation, Λ is the nominal grating period, and z is the distance along the fiber's longitudinal axis. Using coupled-mode theory, the reflectivity of a grating with constant modulation amplitude and period is given by the following expression [6, 12, 14]:

R(L, λ) = Ω^2 sinh^2(sL) / (Δβ^2 sinh^2(sL) + s^2 cosh^2(sL))

where Ω is the coupling coefficient, Δβ is the wave-vector detuning, and s^2 = Ω^2 − Δβ^2. In the case where the grating is uniformly written through the core, the coupling coefficient can be approximated as Ω = (π Δn / λ) · M_power, where M_power ≈ 1 − 1/V^2 is the fraction of the mode power contained in the core and V = (2πa/λ) · sqrt(n_CO^2 − n_CL^2) is the normalized frequency of the fiber; a is the core radius, and n_CO and n_CL are the core and cladding indices, respectively. At the center wavelength of the Bragg grating the wave-vector detuning is Δβ = 0, therefore the expression for the reflectivity becomes:

R(L, λ) = tanh^2(ΩL)

The reflectivity increases as the induced index of refraction change gets larger. Similarly, as the length of the grating increases, so does the resultant reflectivity.
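At the Bragg wavelength the reflectivity reduces to tanh^2(ΩL), so the peak reflectivity is a one-line calculation. The sketch below uses the paper's simulation parameters δn = 0.0003 and L = 5 mm; the Bragg wavelength of 1550 nm and the mode-overlap factor η ≈ 1 are assumptions, not values the paper states.

```python
import math

# Peak reflectivity of a uniform FBG at the Bragg wavelength: R = tanh^2(Omega * L)
dn = 3e-4      # index modulation amplitude (matches the paper's simulations)
L = 5e-3       # grating length in meters (5 mm, as in the paper)
lam = 1550e-9  # Bragg wavelength in meters (assumed C-band value)
eta = 1.0      # fraction of mode power in the core (assumed ~1)

omega = math.pi * dn * eta / lam  # coupling coefficient (1/m)
R = math.tanh(omega * L) ** 2
print(f"R = {R:.3f}")  # approximately 0.99, consistent with the ~99% simulated below
```

Lowering δn or shortening L reduces ΩL and hence the peak reflectivity, which is the trade-off the simulations explore.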
Apodized fiber Bragg grating
In the present section, we cast the basic model and the governing equation for the apodized fiber Bragg grating. Apodized FBGs offer a significant improvement in side-lobe suppression, at the expense of a reduced peak reflectivity. Apodized gratings have a refractive index modulation envelope (Δn_ac) that varies along the fiber, with a constant grating period and a constant DC refractive index. The refractive index profile of an apodized grating can be expressed as [6]:

n(z) = n_c0 + Δn_0 · f(z) · cos(2πz / Λ)

where n_c0 is the core refractive index, Δn_0 is the maximum index variation, and f(z) is the apodization function, defined over 0 ≤ z ≤ L. The apodization profiles considered here (Hamming, Barthan, Nuttal, and Sinc) are given in [6, 13, 15].
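The cited references define the exact profiles used in this paper; as an illustrative sketch only, the standard Hamming window form (an assumption here, not necessarily the paper's exact Hamming profile) shows what an apodization envelope f(z) looks like: small at the grating ends and maximal at the center, which is what suppresses the side lobes.

```python
import math

def hamming_envelope(z: float, L: float) -> float:
    """Standard Hamming window used as an apodization envelope over 0 <= z <= L.
    Assumed form for illustration; the paper's profiles are defined in its refs."""
    return 0.54 - 0.46 * math.cos(2 * math.pi * z / L)

L = 5e-3  # grating length (m), matching the paper's simulations
print(round(hamming_envelope(0, L), 2))      # 0.08 at the grating edge
print(round(hamming_envelope(L / 2, L), 2))  # 1.0 at the grating center
```

Tapering the index modulation toward the grating ends removes the abrupt "hard edges" of a uniform grating, whose Fourier side lobes appear in the reflection spectrum.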
The proposed system
The proposed system aims to achieve maximum reflectivity and a narrow bandwidth without side lobes by using cascaded FBGs. It consists of four cascaded FBG units. The reflectivity of each unit is derived relative to that of the first unit, since the reflected signal of each unit is the input signal for the next unit. This section presents a proposed model for n cascaded stages of FBGs. Analysis of this model is done by coupled-mode theory [16], using 2×2 transfer (T) matrices, where the FBG is divided into sections. Each section is shown in figure 2, where T is the length of each section and Λ is the spacing between the reflecting planes of each section, and:

a_0 is the incident optical signal;
b_0 is the reflected optical signal;
a_m is the output optical signal;
b_m is the reflected optical signal at the output of the grating;
m is the number of sections.

The overall transfer matrix can be expressed as the product of the individual section matrices. Figure 3 shows the connection between two FBGs, where the output of the first one is connected to the input of the second. In this case, the input optical signal for the second stage of the grating is obtained from (7).
From (23) we can show that the reflectivity of three cascaded FBGs with the same parameters equals the cube of the reflectivity of the first one. For n cascaded FBG stages with the same parameters, each with reflectivity R, the overall reflectivity equals R^n.
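The claim that cascading narrows the reflection band can be checked numerically: raise the standard coupled-mode reflectivity spectrum of a uniform grating to the power n and compare the full widths at half maximum. This is a sketch, not the paper's software; the parameter values mirror the paper's δn = 0.0003 and L = 5 mm, with an assumed 1550 nm Bragg wavelength, and the scan step size is an arbitrary choice.

```python
import cmath
import math

def uniform_fbg_reflectivity(delta: float, omega: float, L: float) -> float:
    """Coupled-mode reflectivity of a uniform FBG vs. wave-vector detuning delta (1/m):
    R = Omega^2 sinh^2(sL) / (delta^2 sinh^2(sL) + s^2 cosh^2(sL)), s^2 = Omega^2 - delta^2."""
    s = cmath.sqrt(omega**2 - delta**2)
    num = omega**2 * cmath.sinh(s * L) ** 2
    den = delta**2 * cmath.sinh(s * L) ** 2 + s**2 * cmath.cosh(s * L) ** 2
    if den == 0:
        # degenerate band-edge point (s = 0): take the s -> 0 limit
        return (omega * L) ** 2 / (1 + (omega * L) ** 2)
    return abs(num / den)

def fwhm_detuning(omega: float, L: float, n_cascaded: int) -> float:
    """FWHM (in detuning units) of R^n for n identical cascaded gratings.
    Scans outward from delta = 0 until R^n falls below half its peak."""
    peak = uniform_fbg_reflectivity(0.0, omega, L) ** n_cascaded
    d, step = 0.0, omega / 2000
    while uniform_fbg_reflectivity(d, omega, L) ** n_cascaded >= peak / 2:
        d += step
    return 2 * d  # the spectrum is symmetric in delta

omega = math.pi * 3e-4 / 1550e-9  # coupling coefficient for dn = 3e-4, 1550 nm (assumed)
L = 5e-3                          # grating length (5 mm, as in the paper)
w1 = fwhm_detuning(omega, L, 1)
w4 = fwhm_detuning(omega, L, 4)
print(w4 < w1)  # cascading four identical units narrows the reflection band
```

This reproduces the qualitative trend reported in the simulations below: each added stage multiplies the spectrum by another factor of R, which sharpens the passband and pulls the side lobes down.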
Simulation results and discussion
In this section, we present the simulation results for the cascaded uniform and cascaded apodized fiber Bragg gratings, aiming at a narrow bandwidth without side lobes and maximum reflectivity.
Cascaded uniform fiber bragg grating
We simulate the spectral characteristics of the cascaded uniform fiber Bragg grating as in figure 4. In this simulation, the modulation index is δn = 0.0003 and the grating length is L = 5 mm. From figure 4 we note that as the number of cascaded fiber Bragg gratings is increased, the bandwidth decreases and the side lobes also decrease, but the reflectivity decreases.
From the simulation results, for one fiber Bragg grating unit the reflectivity is R = 99% and the bandwidth is 0.22 nm, but the side lobes are high; for two cascaded units, R = 98% and the bandwidth is 0.17 nm, with side lobes reduced relative to one unit; for three cascaded units, R = 97% and the bandwidth is 0.168 nm, with further reduced side lobes; and for four cascaded units, R = 96%, the bandwidth is 0.16 nm, and there are approximately no side lobes. We therefore conclude that a reflectivity of R = 96%, a bandwidth of 0.16 nm, and minimum side lobes are achieved at the fourth cascaded fiber Bragg grating unit; side-lobe minimization is achieved by increasing the number of cascaded FBG units.
Hamming apodized cascaded fiber Bragg grating
We simulate the spectral characteristics of the cascaded Hamming apodized fiber Bragg grating as in figure 5. In this simulation, the modulation index is δn = 0.0003 and the grating length is L = 5 mm. From figure 5 we note that as the number of cascaded fiber Bragg gratings is increased, the bandwidth decreases and the side lobes also decrease, but the reflectivity decreases.

From the simulation results, for one fiber Bragg grating unit the reflectivity is R = 98% and the bandwidth is 0.026 nm, but the side lobes are high; for two cascaded units, R = 96% and the bandwidth is 0.02 nm, with side lobes reduced relative to one unit; for three cascaded units, R = 93% and the bandwidth is 0.018 nm, with further reduced side lobes; and for four cascaded units, R = 92%, the bandwidth is 0.017 nm, and there are approximately no side lobes. We therefore conclude that a reflectivity of R = 92%, a bandwidth of 0.017 nm, and minimum side lobes are achieved at the fourth cascaded fiber Bragg grating unit.
Barthan apodized cascaded fiber Bragg grating
We simulate the spectral characteristics of the cascaded Barthan apodized fiber Bragg grating as shown in figure 6.
In this simulation the index modulation is δn = 0.0003 and the grating length is L = 5 mm. Figure 3 shows that as the number of cascaded fiber Bragg grating units increases, the bandwidth decreases and the side lobes are suppressed, but the reflectivity also decreases. The simulation gives, for a single unit, R = 100% and a bandwidth of 0.083 nm, but with strong side lobes. For two cascaded units, R = 99% and the bandwidth is 0.069 nm, with side lobes reduced relative to a single unit. For three cascaded units, R = 99% and the bandwidth is 0.066 nm, with the side lobes further reduced. For four cascaded units, R = 99% and the bandwidth is 0.064 nm, with almost no side lobes. We therefore conclude that R = 99%, a bandwidth of 0.064 nm, and minimum side lobes are achieved at the fourth cascaded unit.
Nuttall apodized cascaded fiber Bragg grating
We simulate the spectral characteristics of the cascaded Nuttall apodized fiber Bragg grating as shown in figure 7. In this simulation the index modulation is δn = 0.0003 and the grating length is L = 5 mm. Figure 4 shows that as the number of cascaded fiber Bragg grating units increases, the bandwidth decreases and the side lobes are suppressed, but the reflectivity also decreases. The simulation gives, for a single unit, R = 99% and a bandwidth of 0.08 nm, but with strong side lobes. For two cascaded units, R = 99% and the bandwidth is 0.063 nm, with side lobes reduced relative to a single unit. For three cascaded units, R = 98% and the bandwidth is 0.061 nm, with the side lobes further reduced. For four cascaded units, R = 98% and the bandwidth is 0.059 nm, with almost no side lobes. We therefore conclude that R = 98%, a bandwidth of 0.059 nm, and minimum side lobes are achieved at the fourth cascaded unit.
Sinc apodized cascaded fiber Bragg grating
We simulate the spectral characteristics of the cascaded Sinc apodized fiber Bragg grating as shown in figure 8. In this simulation we choose an index modulation of δn = 0.0003 and a grating length of L = 5 mm. Figure 5 shows that as the number of cascaded fiber Bragg grating units increases, the bandwidth decreases and the side lobes are suppressed, but the reflectivity also decreases.
The simulation gives, for a single unit, R = 99% and a bandwidth of 0.013 nm, but with strong side lobes. For two cascaded units, R = 97% and the bandwidth is 0.01 nm, with side lobes reduced relative to a single unit. For three cascaded units, R = 96% and the bandwidth is 0.0098 nm, with the side lobes further reduced. For four cascaded units, R = 95% and the bandwidth is 0.0096 nm, with almost no side lobes. We therefore conclude that R = 95%, a bandwidth of 0.0096 nm, and minimum side lobes are achieved at the fourth cascaded unit.
Proposed apodized cascaded fiber Bragg grating
We simulate the spectral characteristics of the cascaded Proposed apodized fiber Bragg grating as shown in figure 9.
In this simulation the index modulation is δn = 0.0003 and the grating length is L = 5 mm. Figure 6 shows that as the number of cascaded fiber Bragg grating units increases, the bandwidth decreases and the side lobes are suppressed, but the reflectivity also decreases. The simulation gives, for a single unit, R = 96% and a bandwidth of 0.013 nm, but with strong side lobes. For two cascaded units, R = 93% and the bandwidth is 0.0092 nm, with side lobes reduced relative to a single unit. For three cascaded units, R = 90% and the bandwidth is 0.0084 nm, with the side lobes further reduced. For four cascaded units, R = 87% and the bandwidth is 0.0081 nm, with almost no side lobes. We therefore conclude that R = 87%, a bandwidth of 0.0081 nm, and minimum side lobes are achieved at the fourth cascaded unit. Table 1 shows how the reflectivity R and the bandwidth vary with the number of stages for the different cascaded apodized fiber Bragg gratings.
Comparisons between reflectivity and bandwidth for different cascaded units of apodized fiber Bragg grating
Table 1 shows that as the number of cascaded fiber Bragg grating units increases, the reflectivity decreases slightly while the bandwidth narrows and the side lobes reach a minimum. The side lobes can be suppressed entirely by cascading more than four FBG units.
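The per-stage values reported in the preceding sections can be collected programmatically to reproduce the comparison; every number below is transcribed from the text:

```python
# Reported results per apodization, stages 1-4 (R in %, BW in nm).
results = {
    "Uniform":  {"R": [99, 98, 97, 96],  "BW": [0.22, 0.17, 0.168, 0.16]},
    "Hamming":  {"R": [98, 96, 93, 92],  "BW": [0.026, 0.02, 0.018, 0.017]},
    "Barthan":  {"R": [100, 99, 99, 99], "BW": [0.083, 0.069, 0.066, 0.064]},
    "Nuttall":  {"R": [99, 99, 98, 98],  "BW": [0.08, 0.063, 0.061, 0.059]},
    "Sinc":     {"R": [99, 97, 96, 95],  "BW": [0.013, 0.01, 0.0098, 0.0096]},
    "Proposed": {"R": [96, 93, 90, 87],  "BW": [0.013, 0.0092, 0.0084, 0.0081]},
}

# Pick the best fourth-stage performer on each criterion.
best_R = max(results, key=lambda k: results[k]["R"][-1])
best_BW = min(results, key=lambda k: results[k]["BW"][-1])
print(best_R, best_BW)  # Barthan: highest fourth-stage R; Proposed: narrowest BW
```

This matches the conclusions that follow: Barthan gives the highest reflectivity (99%) and the Proposed function the narrowest bandwidth (0.0081 nm) at the fourth unit.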
Conclusions
In this work, the model equations of the cascaded uniform fiber Bragg grating and of the different cascaded apodization functions were solved numerically with purpose-built software, with the goal of achieving maximum reflectivity, narrow bandwidth, and minimum side lobes. For the best performance, the grating length and the refractive-index modulation must be chosen properly to obtain maximum reflectivity and narrow bandwidth; the side lobes are minimized by cascading FBG units. From this study we conclude the following. For the uniform FBG, one unit gives R = 99% and BW = 0.22 nm with visible side lobes, while four units give R = 96% and BW = 0.16 nm with minimum side lobes.
The Hamming, Barthan, and Nuttall FBGs achieve high reflectivity and narrow bandwidth with minimum side lobes at the fourth cascaded unit.
The Sinc and Proposed FBGs achieve a narrow bandwidth without side lobes.
The highest reflectivity is achieved with the Barthan apodization function, with R = 99% at the fourth unit.
The narrowest bandwidth is achieved with the Proposed apodization function, with BW = 0.0081 nm at the fourth unit and no side lobes.
Side lobes can be eliminated in all apodized fiber Bragg gratings by cascading more than four FBG units.
Isolation and Identification of Antagonistic Bacterium against Pathogens of Bacterial Tuber Rot of Amorphophallus muelleri (Arfani et al.)
Rhizosphere bacteria have the ability to protect host plants from infection by pathogenic microorganisms. This study aimed to identify rhizosphere bacteria capable of inhibiting the growth of bacterial isolates that cause tuber rot of Amorphophallus muelleri. Rhizosphere bacteria were isolated on Nutrient Agar medium by the pour plate method. Isolates were subjected to an antagonistic assay against several bacterial isolates from rotten tubers of A. muelleri using the dual culture method. The potential isolate was identified based on its 16S rDNA sequence. Isolate R7 showed the strongest inhibition of the growth of bacterial isolates from rotten tuber, with an inhibition zone diameter of 19.66 mm. The 16S rDNA sequence of isolate R7 was 99.7% similar to that of Delftia tsuruhatensis PCL1755. The isolate has potential to be developed as a phytopathogen control agent.
INTRODUCTION
Tubers of Amorphophallus muelleri Blume contain a high concentration of glucomannan. Glucomannan is a starch that can be used as a thickening agent in foods such as noodles [1]. Glucomannan in A. muelleri tubers has a high economic value and is expected to contribute to the Indonesian economy. Therefore, it is necessary to increase the production of A. muelleri tubers. Nowadays, the problem is pathogenic microorganisms that attack the tubers, either underground or post-harvest. Common pathogenic microorganisms that attack the tubers are Erwinia carotovora and Pectobacterium carotovorum on Amorphophallus konjac tubers [2,3] and Dickeya dadantii on Amorphophallus rivieri [4,5].
To control tuber rot, farmers use chemicals such as pesticides. Continuous application of synthetic pesticides causes negative impacts on the environment [6]. Pesticide residues in the soil as well as on plant parts (fruits, leaves, and tubers) [7,8] are directly or indirectly toxic to humans [9,10].
Therefore, biological agents are required as antagonists to control the growth of bacterial rot pathogens. One alternative that can be used as an ecologically safe and effective antagonistic agent against pathogens is the rhizosphere bacteria [11,12,13]. Rhizosphere bacteria are present in the soil around plant roots. They provide many benefits to plants, such as nitrogen fixation, phosphate and potassium solubilization, and the production of phytohormones and antibiotics [14,15,16].
(Correspondence: Rodiyati Azrianingsih, rodiyati@ub.ac.id, Dept. of Biology, Faculty of Mathematics and Natural Science, University of Brawijaya, Veteran Malang, Malang 65145.)
Some rhizosphere bacteria that act as antagonistic agents are Bacillus, Pseudomonas, Pantoea, and Lactobacillus. One antagonistic bacterium that can inhibit the growth of E. carotovora (one of the causes of bacterial tuber rot) is Bacillus subtilis. It produces antibiotic compounds such as bacitracin, bacillin, bacillomycin B, difficidin, oxydifficidin, lecithinase, and subtilisin. These compounds cause the cells to shrink, so that the bacterial cells of E. carotovora lose water and undergo plasmolysis [17,18]. In addition, Bacillus amyloliquefaciens is also able to inhibit the growth of Erwinia bacteria, which cause post-harvest tuber rot [19]. This research aims to analyze the potency of isolated rhizosphere bacteria to inhibit A. muelleri tuber rot bacteria and to identify the potential rhizosphere bacteria based on the 16S rDNA sequence.
MATERIAL AND METHOD
Isolation of Rhizosphere Bacteria
Soil samples were obtained from the rhizosphere of A. muelleri in Rejosari Village, Bantur, East Java Province, Indonesia. The soil was taken at a depth of 5-10 cm of topsoil and kept in plastic bags in an isotherm box [20]. At each sampling point, abiotic factors including the ambient and soil temperature and the light intensity were measured directly in the field, while soil moisture, pH, and organic matter were measured in the Laboratory of Microbiology and the Laboratory of Ecology, University of Brawijaya. The data on abiotic factors were analyzed using ANOVA and Tukey's test at the five percent significance level.
Antagonistic Bacterium against Pathogens of Bacterial Tuber Rot of A. muelleri (Arfani et al)
Twenty-five grams of soil sample were serially diluted from 10^-1 to 10^-7 in sterile physiological saline solution. From each dilution, 0.1 mL of suspension was transferred onto Nutrient Agar (NA) medium in Petri dishes according to the pour plate method and incubated at room temperature for 72 hours. Bacterial colonies were enumerated and purified by the spread plate method. Pure cultures of rhizosphere bacteria on NA medium were stored at 4°C [21]. The diversity of the bacterial communities was determined based on the Simpson diversity index according to equation 1 [22-25].
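The text cites "equation 1" for the Simpson diversity index without reproducing it; the commonly used form, D = 1 − Σ pᵢ², is assumed in this illustrative sketch:

```python
def simpson_diversity(counts):
    """Simpson's diversity index, D = 1 - sum(p_i^2), where p_i is the
    relative abundance of morphotype i. The exact form of the paper's
    'equation 1' is not reproduced in the text; this is the common form."""
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts)

# Hypothetical colony counts for an evenly mixed community of 8 morphotypes:
print(round(simpson_diversity([12] * 8), 3))  # 0.875, within the reported 0.84-0.87 range
```

Values near 1 indicate high diversity with no dominant species, which is how the 0.84–0.87 range is interpreted in the Results.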
Isolation of Pathogenic Bacteria from Rotten Tuber of A. muelleri
Bacterial pathogens of A. muelleri tubers were isolated according to Ashmawy et al. [26] with modifications. The rotten tuber was cut into pieces of 1.0 × 0.5 × 0.2 cm. The pieces were surface-sterilized by soaking in 1.0% NaOCl solution for two minutes and rinsed twice with sterile ddH2O. Sterilized rotten tuber pieces weighing 25 g were blended with 225 mL of sterile physiological saline solution and serially diluted from 10^-1 to 10^-7. Sample suspensions of 0.1 mL were inoculated onto NA medium by the pour plate method and incubated at room temperature for 48 hours. Bacterial colonies were purified by the spread plate method and pure cultures were stored at 4°C.
Antagonist Assay of Rhizosphere Bacteria Against A. muelleri Tuber Rot Bacteria
The antagonistic assay between bacterial isolates was done using the dual culture method [27]. A 100 µL suspension of an isolated tuber rot bacterium at a density of 10^6 cells·mL^-1 was spread on NA medium and immediately incubated at 4°C for 4 hours. The NA plates were then perforated to make 6 mm wells, which were inoculated with 60 μL of antagonistic rhizosphere bacteria at a density of 10^7 cells·mL^-1. The cultures were incubated at room temperature for 72 hours. Growth inhibition of the tuber rot bacteria was indicated by a clear zone around the well. The diameters of the inhibition zones were measured and the data were analyzed using ANOVA and Tukey's test at the five percent significance level.
Identification of Potential Rhizosphere Bacteria
The rhizosphere bacterium with the highest potency to inhibit tuber rot bacteria was identified based on phenotypic and phylogenetic characters. Phenotypes were characterized based on Bergey's Manual of Systematic Bacteriology [28,29], covering colony and cell morphology and biochemical and physiological characters. Phylogenetically, the isolate was identified based on 16S rDNA sequence similarity. Genomic DNA of the selected isolate was extracted using the heat treatment method [30]. The 16S rDNA sequence was amplified using universal primers. The 50 μL PCR reaction consisted of 25 μL PCR master mix, 19 μL ddH2O, 2 μL of each primer, and 2 μL of DNA template. The components were homogenized, and 16S rDNA was amplified using a PCR program of 35 cycles: predenaturation at 94°C for 5 min; then, per cycle, denaturation at 94°C for 30 s, annealing at 55°C for 30 s, and extension at 72°C for 90 s; followed by post-extension at 72°C for 5 minutes.
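For clarity, the cycling profile just described can be laid out as data and its total hold time computed; the step names and timings are taken from the text, while ramp times between temperatures are ignored:

```python
# The cycling profile described in the text, as (step, temp_C, seconds).
predenaturation = ("predenaturation", 94, 300)
cycle = [("denaturation", 94, 30), ("annealing", 55, 30), ("extension", 72, 90)]
post_extension = ("post-extension", 72, 300)
n_cycles = 35

# Total programmed hold time: initial + 35 repeated cycles + final extension.
total_s = predenaturation[2] + n_cycles * sum(s for _, _, s in cycle) + post_extension[2]
print(total_s / 60)  # 97.5 minutes of hold time, excluding ramps
```

This kind of tabulation makes it easy to see that the run is dominated by the 35 × 150 s cycling block.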
The 16S rDNA amplicon was verified by electrophoresis on a 1.5% agarose gel. The amplicon was purified and sequenced at First Base, Malaysia, using an ABI 3130 Automatic Sequencer Analyzer. The 16S rDNA sequence was edited using the Sequence Scanner V.1 program, and the sequences were assembled using the BioEdit V.7.2.5 program. The 16S rDNA sequence of the isolated bacterium was aligned together with 16S rDNA reference sequences obtained from the NCBI database. The phylogenetic tree was constructed by Neighbor-Joining with 1000 bootstrap replicates using the MEGA 6.00 program [30-32].
RESULT AND DISCUSSION
Density and Diversity of A. muelleri Rhizosphere Bacteria
Isolates of A. muelleri rhizosphere bacteria were obtained from three locations. Based on Simpson's diversity index, community diversity of the rhizosphere bacteria was in the range of 0.84-0.87 (Fig. 1), indicating that the community was highly diverse and that there were no dominant species [25]. However, the density of rhizosphere bacteria was relatively low, in the range of 3.54-3.56 log10 CFU·g^-1 (30.0-59.0 × 10^2 CFU·g^-1 soil) (Fig. 2). The low density might be caused by the low organic matter and low moisture of the soil, since those parameters are limiting factors for the growth of several rhizosphere bacteria. Soil bacteria require a minimum of 2% soil organic matter and 60% soil moisture to support optimal growth [33,34], whereas in this experiment the soil organic matter and moisture were less than 0.2% and 32%, respectively. Environmental parameters for the three sampling locations are presented in Table 1. Soil parameters, especially plant and soil type and farm practice, affect the diversity and density of soil microorganisms and plant growth [34]. Low nutrient and water availability in the soil may inhibit the metabolism and growth of microorganisms. Soil organic matter plays an important role in soil structure and texture, microaggregate stability, soil moisture and pH, nutrient availability, and microorganism density and diversity [25,35]. In all locations, soil moisture was low due to the low content of organic matter. Organic matter increases soil moisture and decreases soil pH, and increasing soil organic matter raises the content of organic carbon, which bacteria utilize as a carbon and energy source [36]. Soils of the A. muelleri plantation were acidic, with pH values of 3.78-4.13. In general, bacteria grow optimally in the pH range 5-7 [37]. The acidity of the soil may be caused by contamination with metals derived from the use of chemicals (fungicides and pesticides), pollution, organic fertilizers, and household waste disposal [38]. Soil organic matter (%) at the three locations (Table 1 excerpt): 0.16 ± 0.02, 0.17 ± 0.03, 0.16 ± 0.02.
Antagonistic Potency of Rhizosphere Bacteria
Nine isolates of rhizosphere bacteria, at a density of 10^7 CFU·mL^-1, were tested for their inhibition of three tuber rot bacteria of A. muelleri (PT4, PL9, and PR11). The rhizosphere bacterial isolates had different inhibition potencies, and only four isolates inhibited the tuber rot bacteria (Fig. 3). Isolate R3 was not able to inhibit isolate PT4 but inhibited isolates PL9 and PR11, with inhibition zone diameters of 12 and 10 mm, respectively. Isolate R5 inhibited PT4, PL9, and PR11 with inhibition zone diameters of 2.07, 5.93, and 6.96 mm, respectively. Isolate R7 inhibited all three tuber rot isolates, PT4, PL9, and PR11, with inhibition zone diameters of 19.66, 11.24, and 12.42 mm, respectively. Isolate R9 inhibited only isolates PT4 and PL9, with inhibition zone diameters of 2.00 and 6.21 mm, respectively.
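The reported zone diameters can be tabulated to make the ranking explicit; values are transcribed from the paragraph above, with 0 marking no inhibition:

```python
# Inhibition-zone diameters (mm) reported in the text; 0 = no inhibition.
zones = {
    "R3": {"PT4": 0.0,   "PL9": 12.0,  "PR11": 10.0},
    "R5": {"PT4": 2.07,  "PL9": 5.93,  "PR11": 6.96},
    "R7": {"PT4": 19.66, "PL9": 11.24, "PR11": 12.42},
    "R9": {"PT4": 2.00,  "PL9": 6.21,  "PR11": 0.0},
}

# Keep only isolates that inhibit all three pathogens (broad-spectrum antagonists),
# then rank them by mean zone diameter.
broad = {k: v for k, v in zones.items() if all(d > 0 for d in v.values())}
best = max(broad, key=lambda k: sum(broad[k].values()) / 3)
print(best)  # R7, consistent with the paper's conclusion
```

Only R5 and R7 inhibit all three pathogens, and R7's mean zone (~14.4 mm) is clearly the largest, which is why it was carried forward for identification.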
Isolate R7 had the highest inhibition potency among the A. muelleri rhizosphere bacteria. The isolate was able to inhibit all three tuber rot isolates and was categorized as highly potent, with inhibition zones greater than 10 mm [39]. In a previous experiment [40], the rhizosphere bacterium Bacillus circulans was able to inhibit Escherichia coli, Bacillus subtilis, and Serratia marcescens with inhibition zone diameters of 11, 12, and 6 mm, respectively. Antagonistic bacteria have mechanisms to inhibit the growth of pathogens. Inhibition is performed by producing antimicrobial compounds, such as enzymes capable of attacking the main cell components of pathogens [41,42]. Antimicrobial compounds produced by antagonistic bacteria damage the cell membrane and shrink the cell; the activity of the pathogen is then disturbed, causing it to die. Other antibiotic compounds inhibit protein synthesis; synthesis is blocked on exposure to such compounds, causing death of the pathogen cells [43,44]. The inhibition mechanism of tuber rot bacteria by isolate R7 was antibiosis, the ability of antagonistic isolates to produce secondary metabolites such as antibiotics, siderophores, and several enzymes, including chitinase, protease, and cellulase, that inhibit the growth of the target cells [45,46]. Antibiosis activity was indicated by clear zones, which proved that the rhizosphere bacteria could produce antibiotics that inhibited the growth of the tuber rot bacteria [47,48]. In addition, the inhibition potency of antagonistic bacteria may also arise from antagonism such as competition for root colonization and nutrients [49-52]. Antagonism is the ability of antagonistic microorganisms to produce antibiotics that can kill pathogenic microorganisms [53]. Competition for space and nutrients limits the nutrition and space available for the growth of pathogens [37,54,55,56].
Identification of A. muelleri Rhizosphere Bacterium as Antagonist of Tuber Rot Bacteria
Isolate R7 (Fig. 4) showed 99.7% 16S rDNA sequence similarity with Delftia tsuruhatensis PCL1755. The phenotypic data of isolate R7 were used as additional support for the phylogenetic identification (Table 2). The strain has been isolated from various soils, such as those of eggplant, tomato, pepper, and avocado, and it showed widespread inhibition of the growth of Fusarium oxysporum f. sp. radicis-lycopersici [57,58].
D. tsuruhatensis is one of the rhizosphere bacteria that act as a biocontrol agent or plant growth promoting rhizobacterium (PGPR) [59]. Some strains of this species have the ability to degrade inorganic pollutants [60]. The natural habitats of these bacteria include soil, activated sludge, and contaminated environments. D. tsuruhatensis was first isolated from activated sludge, where it degrades terephthalate, a plastic-derived environmental pollutant [57,58,61]. The strain D. tsuruhatensis HR4 has the ability to control diseases of rice caused by Xanthomonas oryzae, Rhizoctonia solani, and Pyricularia oryzae [60,62,63]. Although these bacteria have demonstrated ability as antagonistic agents against pathogens, the synthesis and identity of their antimicrobial compounds remain unclear, owing to the scarcity of research on them.
Figure 1. The bacterial diversity at the rhizosphere of the A. muelleri plantation. The same notation indicates that the diversity index does not differ significantly among the sampling locations (p > 0.05).
Figure 2. The bacterial density at the rhizosphere of the A. muelleri plantation. The same notation indicates that the cell density does not differ significantly among the sampling locations (p > 0.05).
Figure 3. The potency of A. muelleri rhizosphere bacteria to inhibit tuber rot bacteria (PT4, PL9, and PR11). The same notation indicates no significant difference among the sampling locations (p > 0.05).
The strain D. tsuruhatensis MTQ3 has the ability to inhibit the growth of Ralstonia solanacearum and Phytophthora nicotianae [58].
Figure 4. Phylogenetic tree of the rhizosphere bacterium and reference isolates based on 16S rDNA sequences, constructed with the Neighbor-Joining algorithm.
Table 1. Environmental parameters at the sampling locations.
Physical Activity and Self-Efficacy on Exercise among Elderly Retired Philippine Army: Basis for Policy-making on Physical Activity Program
This study aimed to investigate physical activity and the level of self-efficacy in relation to exercise among retired Philippine Army personnel based in the province of Zamboanga del Norte, Philippines, during the months of May and June 2023. A quantitative descriptive-correlational design was utilized. Data were gathered from 19 retired Philippine Army personnel during the Armed Forces Veteran Association-Zamboanga del Norte's monthly meeting. Frequency count, weighted mean, and Pearson correlation were used to analyze the collected data. The study found no significant relationship between physical activity and age, gender, or length of service. However, a positive correlation was noted between physical activity and the variables age, gender, and length of service, suggesting that if physical activity changed, the other variables would change in the same direction. Moreover, results showed that age and gender had a significant relationship, as did age and self-efficacy, with both relationships showing a positive correlation. Length of service and self-efficacy did not show a significant relationship, owing to their negative correlation. Furthermore, results showed no significant relationship between physical activity and the level of self-efficacy among Armed Forces veterans in Zamboanga del Norte, although the positive correlation indicates that when self-efficacy changes, physical activity changes in the same direction.
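The correlational analysis above rests on the Pearson product-moment coefficient. A minimal sketch of that computation follows; the paired scores are hypothetical, since the study's raw data are not reproduced here:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation, the statistic used in the study."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

# Hypothetical paired scores (age vs. self-efficacy), for illustration only:
r = pearson_r([62, 65, 70, 74, 80], [3.1, 3.4, 3.3, 3.8, 4.0])
print(round(r, 2))  # a positive r: both variables move in the same direction
```

A positive r, as reported between age and self-efficacy, means higher values on one variable tend to accompany higher values on the other; significance would additionally require a test against the sample size (n = 19 here).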
I. INTRODUCTION
According to the Philippine Statistics Authority (PSA, 2020), people aged 60 and up are considered vulnerable, as are those who suffer from conditions known as comorbidities, such as a) immunocompromised diseases, like cancer and HIV/AIDS; b) diabetes; c) chronic cardiovascular diseases; and d) chronic respiratory conditions, including Coronavirus disease 2019. Based on the 2018 record from PSA's Philippine National Health Accounts, the overall population aged 60 and above spent a total of PhP 171.5 billion, of which PhP 44.4 billion was spent by the demographic segment with comorbidities. Total health expenses of PhP 101.2 billion by the vulnerable group aged 60 and above were paid out-of-pocket by households, accounting for roughly 60% of total health spending. Approximately 15.7 percent of current health expenses went to the treatment and management of illnesses associated with severe and serious COVID-19 patients, with expenditures on cardiovascular diseases the highest. This could have been prevented if the government had established physical activity programs to keep the elderly population active, which would help prevent health problems, especially those categorized as modifiable. Given these facts of economic burden among the elderly, and knowing that the most expensive medical conditions usually occur near the end of a person's life, the researchers were motivated to conduct the study.
Int. Ru. Dev. Env. He. Re. 2023, Vol-7, Issue-3; Online Available at: https://www.aipublications.com/ijreh/
The World Health Organization (WHO, 2022) defines "healthy ageing as developing and maintaining the functional ability that enables well-being in older age". Individual intrinsic capacity (i.e., physical and mental capacities), the environment in which he or she lives (considered in the broadest sense and including physical, social, and policy contexts), and their interactions define functional ability.
Physical activity is "any bodily movement produced by skeletal muscles that requires energy expenditure - including activities undertaken while working, playing, carrying out household chores, travelling, and engaging in recreational pursuits". Adults of all ages should exercise for at least 150 minutes at a moderate intensity level or 75 minutes at a vigorous level per week. Activity should be done for at least 10 minutes at a time to be beneficial to cardiovascular health (World Health Organization [WHO], 2018).
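As an illustration of the guideline just quoted, a small helper can check a week's activity; the 2× equivalence between vigorous and moderate minutes is a common convention implied by the 150-versus-75-minute thresholds, not a direct quotation from WHO:

```python
def meets_who_guideline(moderate_min, vigorous_min):
    """Check the WHO adult guideline: >= 150 min moderate or >= 75 min vigorous
    activity per week. When mixing intensities, vigorous minutes are counted
    double (a common equivalence consistent with the two thresholds)."""
    return moderate_min + 2 * vigorous_min >= 150

print(meets_who_guideline(90, 30))   # True: 90 + 60 equivalent minutes
print(meets_who_guideline(60, 20))   # False: only 100 equivalent minutes
```

A check like this is the kind of simple screening rule a physical activity program for retirees could use to flag participants below the guideline.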
Walking, cycling, wheeling, sports, active recreation, and play are all popular ways to be active, and they can be done by anyone at any ability level. Physical activity has been shown to aid in the prevention and management of noncommunicable diseases (NCDs) such as heart disease, stroke, diabetes, and a variety of cancers. It also aids in the prevention of hypertension, the maintenance of a healthy body weight, and the enhancement of mental health, quality of life, and overall well-being (WHO, 2018).
Every person, in every country in the world, should be able to enjoy a long and healthy life. For many older persons, a significant percentage of the normal age-related reduction in functional ability is due to "deconditioning," because most older adults do not engage in enough physical activity and exercise to obtain the health advantages (Avers & Wong, 2020).
The move to retirement is a significant life adjustment that affects people's lifestyles and activities, especially those related to physical activity, which is a crucial component of active ageing. It is seen as a big life event that has an impact on people's daily routines, lifestyles, and health habits (Socci et al., 2021).
The number of people aged 60 and up is on the rise, as is their proportion of the population. The number of persons aged 60 and up reached one billion in 2019. According to WHO statistics, this number will rise to 1.4 billion by 2030 and to 2.1 billion by 2050. This rise is occurring at an unprecedented rate and is expected to intensify in the coming decades, especially in developing countries (WHO, 2022).
The Philippines, like many other countries, will see an increase in the number of Filipinos aged 65 and up. Currently, the country is unprepared to handle the influx of elderly Filipinos. The Philippine government must acknowledge that national programs for older Filipinos, such as the Senior Welfare Act, must be revised to meet the special needs of the country's senior population. More aged Filipinos should be covered by social welfare and wellness programs, which should be expanded. To promote the well-being of older individuals across the provinces, disparities in access to services must be addressed (Bandana & Andel, 2018).
The Healthy and Productive Ageing program of the Department of Health (DOH, n.d.), as mandated by Republic Acts 9257 (The Expanded Senior Citizens Act of 2003) and 9994 (The Expanded Senior Citizens Act of 2010), focuses on promoting senior citizens' health and wellness and alleviating the conditions of older people suffering from degenerative diseases. This program primarily aims to improve the quality of life for older people and contribute to the development of the country by ensuring fair access to high-quality healthcare.
From the researchers' perspective, elderly retired members of the Philippine Army are among those individuals who should be given utmost importance in their declining years. The Philippine Army (PA), also known as Hukbong Katihan ng Pilipinas, is the primary and oldest branch of the Armed Forces of the Philippines (AFP), headquartered in Taguig City, Metro Manila. The PA is tasked with defending the country through land warfare and operations, with 11 divisions and special forces spread over the islands of Luzon, Visayas, and Mindanao. The Philippine Army has acted as a protector of Filipinos, with a significant role in nation-building, for 122 years. To serve the people and secure the land, the Philippine Army continues to innovate its forces and emerge victorious in all battles and challenges (Philippine Army [PA], 2022).
The researchers were able to retrieve data from the Philippine Army Attrition Branch on army personnel retired from 2016 to 2021, totalling 13,929 (data as of February 8, 2022). Yet the researchers were not able to gather any information, based on studies available in the country, pertaining to a program that promotes physical activities for elderly Filipinos, specifically the retired Philippine Army (who, during their active service, engaged in strenuous physical activities). This has led the researchers to conduct the said study.
Theoretical Framework
The Social-Cognitive Theory (SCT), which employs tactics to foster behavioral change, is an effective model for increasing physical exercise (Merryman, 2017). According to social learning theory, later renamed social cognitive theory, behavioral change is influenced by environmental effects, personal factors, and the characteristics of the behavior itself. This means that, to be able to perform the behavior, the person must have confidence in his or her capacity to do it (self-efficacy) and must perceive an incentive (positive expectations must outweigh negative expectations).
Retirement can be essential in influencing PA behavior because of the multiple changes involved with this transition (e.g., decreased income, fewer social contact, loss of daily routine). The belief in one's ability to effectively conduct the desired behavior (i.e. selfefficacy) is a crucial determinant of PA adoption, especially in older persons, according to SCT. The findings of the study of Kosteli et al. (2016) are consistent with SCT and point to self-efficacy as one of the underlying reasons for people in their retirement years participating in PA. Figure 1 describes the conceptual framework of the study wherein the independent variables consist of the following demographic profile: Age, Gender and Length of Service. On the other hand, the dependent variables are the level of Physical Activity and the level of Self-efficacy on Exercise.
Conceptual Framework
This study aimed to find out whether there is a significant relationship between the demographic variables and the level of physical activity, as well as between the demographic variables and the level of self-efficacy on exercise, among elderly retired Philippine Army personnel of Zamboanga del Norte. Further, the results will serve as the basis for the desired output of the study, a physical activity program to be incorporated through amendment of the retirement benefits of elderly retired Philippine Army personnel.
Statement of the Problem
This paper aimed to identify and explain the level of physical activity and self-efficacy on exercise among the respondents as a basis for policy-making on a physical activity program.
This study specifically seeks to answer the following questions: 1. What is the demographic profile of the respondents in terms of:
II. LITERATURE
Despite the fact that older Filipinos appear in national reports, empirical studies including older individuals appear to be rare in the Philippines. The Philippines' major universities house research institutes that study a wide range of topics; however, the University of the Philippines Manila is currently the only major institution with a dedicated center for aging research. The majority of research on older Filipinos appears to be focused on aging perceptions, older Filipinos' quality of life, and older adults in the workforce (Bandana & Andel, 2018).
According to one survey, persons over the age of 65 volunteer more hours per year than any other cohort. Older adult volunteers have a wealth of personal and professional experiences that can be used to help the organization or community with whom they volunteer. Furthermore, volunteering contributes to healthy aging by: (1) improving quality of life, strengthening social networks, increasing physical activity, and lowering mortality rates; (2) increasing social support; (3) improving life satisfaction and wellbeing, sense of purpose, self-confidence, and personal growth; and (4) providing a fulfilling way to use valuable skills, give back to communities, and mentor others (Philippine National Volunteer Service Coordinating Agency [PNVSCA], 2021).
Workplace practices have a substantial long-term impact on well-being, including chronic physical health issues resulting from lack of protective clothing among UK Armed Forces (AF) veterans. Some veterans experienced difficulties sustaining physical activity after leaving the AF due to logistical issues or musculoskeletal ailments (believed to be caused by very strenuous AF physical activity). This explains why some AF veterans may be more prone to obesity once they leave the service. When they leave the military, working-age veterans are more likely to have hearing impairments, musculoskeletal disorders, and arthritis (Williamson et al., 2019).
Physical Activity
Physical inactivity (a sedentary lifestyle) is a major public health concern for people of all ages. A sedentary lifestyle accelerates the rate of age-related functional decline and lowers the capacity for sustained exercise to reestablish physiological reserve following an injury or illness. It was reported that only 22% of elderly persons engage in regular leisure-time physical activities (Guccione et al., 2012).
Physical activity, as defined by the American College of Sports Medicine (ACSM) and the Centers for Disease Control and Prevention (CDC), is "any bodily movement produced by the contraction of skeletal muscles that results in a substantial increase over resting energy expenditure" (Kisner et al., 2018). Physical activity promotes cardiorespiratory fitness and has a variety of benefits for the cardiovascular system. All healthy adults, those with coronary risk factors, and patients with chronic heart illnesses should participate in aerobic physical exercise, which is the most researched modality with a positive dose-response effect on prognosis (Makar & Siabrenko, 2018). A more recent review (2019) found compelling evidence that physical activity lowers the risk of breast, colon, endometrium, bladder, stomach, esophagus (adenocarcinoma), and kidney cancers, and moderate evidence for an association with lung cancer risk, with 10% to 20% reductions in relative risk.
The effects of PA on the brain and its functions, including social cognition, are more than sufficient to justify the need to promote PA among the elderly, given that our physical, cognitive, and social well-being are all dependent on it (Alarcon-Jimenez et al., 2020). Physical activity enhances cognition, particularly executive functioning and memory in mild cognitive impairment (MCI), independent functioning in MCI and dementia, and mental health in dementia patients (Nuzum et al., 2020).
Increases in moderate to vigorous PA have reduced psychological distress, which improves the quality of life in older adults (Awick et al., 2017). Depression in older people is linked to a variety of unfavorable health consequences, and it is more chronic than depression in younger people; physical activity has been considered one way of addressing it. Physical activity is also linked to increased independence in later life. The biggest advantages for ADL physical performance may come from moderate physical activity levels combined with high levels of mental, physical, and social demands. However, promoting milder levels and simpler forms of physical exercise may still be beneficial for older persons with mobility restrictions (Roberts et al., 2017).
In young, middle-aged, and older persons, physical activity was linked to life satisfaction and happiness. Furthermore, as people grow older, their life satisfaction and happiness increase, and the amount of physical activity was more important than the type of physical activity, with emphasis on lifestyle modification (An et al., 2020).
The Physical Activity Guidelines for Americans, published in 2008 by the US Department of Health and Human Services, provides physical activity recommendations for different subgroups, including adults and older adults (65 years and older). Recommendations are identified for each age group to achieve the most health benefits. Adults and older adults should participate in moderate-intensity physical activity for a minimum of 150 minutes, or vigorous-intensity activity for 75 minutes, per week; episodes of at least 10 minutes count toward the daily total; and muscle-strengthening exercises should be performed at least 2 days per week. An additional activity specific to older adults is incorporating balance exercises to reduce the risk of falls (Kisner et al., 2018).
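The weekly targets above reduce to simple arithmetic. The sketch below is a hypothetical helper (the function names and the decision to express the check in code are ours, not the guidelines'), using the common guideline convention that one vigorous minute counts as two moderate minutes toward the 150-minute target:

```python
def meets_aerobic_guideline(moderate_min, vigorous_min):
    """150 min moderate OR 75 min vigorous per week; one vigorous
    minute is treated as equivalent to two moderate minutes."""
    return moderate_min + 2 * vigorous_min >= 150

def meets_strength_guideline(strength_days):
    """Muscle-strengthening exercises on at least 2 days per week."""
    return strength_days >= 2

# 150 moderate minutes or 75 vigorous minutes both satisfy the target.
print(meets_aerobic_guideline(150, 0))   # True
print(meets_aerobic_guideline(0, 75))    # True
print(meets_aerobic_guideline(100, 0))   # False
```

A mixed week also counts: for example, 100 moderate plus 30 vigorous minutes is equivalent to 160 moderate minutes and meets the target.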
Exercise
Exercise is any planned and structured physical activity designed to improve or maintain physical fitness (Kisner et al., 2018). Regular exercise can aid in weight loss, decrease triglycerides, raise HDL levels, and lower blood pressure. Exercise training, greater cardiorespiratory fitness, or both appear to improve the metabolic syndrome's underlying factors, according to single-center trials and current meta-analyses. Aerobic or resistance exercise, or a combination of the two, has a substantial impact on one's health (Meyers et al., 2019).
In the senior population, sarcopenia is a health issue linked to aging. Sarcopenia lowers physical performance, muscle strength, and muscle mass, and can be mitigated by engaging in physical exercise. The majority of exercise intervention studies in older adults found that participants had good results, but maintenance of muscle strength appeared to depend on continuing to perform particular sorts of physical activity on a regular basis (Lee et al., 2018).
According to the current US Department of Health and Human Services standard for physical exercise in older individuals, everyone should follow a five-part activity routine: aerobic activity (cycling, jogging), muscle strength training, balance, flexibility, and avoidance of inactivity, all of which are important components. A 30-minute interval of moderate to intense activity on most days of the week, with the goal of achieving a minimum of 150 minutes of activity every week, is recommended, incorporating balance training to help reduce fall risk (Orkaby & Forman, 2018).
With the ever-increasing number of older people in South Asia and Southeast Asia, factors including exercise frequency, consistency, and length appear to have a positive impact on the mental health and wellbeing of older persons. Similarly, any intervention that encouraged older adults to be more active was found to be beneficial to their mental health and wellbeing as long as it was sustained for a reasonable amount of time.
Self-efficacy and Social-Cognitive Theory
Positive self-efficacy has been highlighted by many researchers as the key to successful physical activity engagement (Kisner et al., 2018). One of the most studied personality attributes that contributes to exercise adherence is self-efficacy, or the belief in one's ability to do something. One study found that self-efficacy played a significant role in determining the course of exercise activity, and those with higher self-efficacy were more likely than those with lower self-efficacy to continue participating in PAs for a longer period.
Within the psychological literature, beliefs about capability are frequently referred to as self-efficacy. Self-efficacy theory, a core component of social cognitive theory and one of the primary theories of motivation, claims that those who feel effective are more likely to engage in learning-enhancing cognitive and behavioral activities. The social cognitive theory's core concept is that learning occurs in a social context, with a dynamic and reciprocal interaction between cognitive processes, environment, and behavior. A person must believe that he or she has the ability to change a habit and that modifying that habit will result in beneficial outcomes that outweigh the possible negative outcomes (Kisner et al., 2018).
Studies of Retired Military
The majority of people believed that serving in the military was good for one's physical health. The majority of veterans attributed their improved physical health later in life to the exercise they gained while serving in the military. When they left the military, however, numerous veterans expressed difficulties in sustaining their desired level of physical activity due to new obligations and limited sports facilities (Williamson et al., 2019).
The members of the Royal British Legion ages 60 and above who participated in a study, recognized the importance of regular physical activity, but their perceptions were typically based on the 'felt' limitations of aging bodies, which were often in stark contrast to their earlier 'disciplined' active military bodies (Williams et al., 2017).
In a study conducted by Fisher et al. (2021), participants reported that their physical activity was negatively impacted after transitioning to military retirement. Negative adjustments included not working out or exercising, decreasing one's exercise routine (e.g., not jogging as often or as far), avoiding previous physical activities (e.g., organized sports teams or going to the gym), or becoming less active and more sedentary.
Marciniak et al. (2021) discovered three new things about veterans and the risk of falling. First, non-injurious falls (NIFs) were 11% higher than injurious falls among veterans, while non-injurious falls were 28% lower than among non-veterans. Second, among veterans, the risk of an NIF increased more with age than for non-veterans, with the oldest veterans having a 10-15 percent higher relative risk. Third, veterans who engage in at least one day per week of moderate or vigorous physical activity have a roughly 10% lower risk of an NIF than non-veterans. Therefore, engagement in physical activity may be a particularly effective way for veterans to reduce the risk of falling and injury as they get older.
Gerofit is a continuing workout program that began in 1986 at the Veterans Affairs Medical Center (VAMC) in Durham, North Carolina. Gerofit's purpose is to promote physical activity for health and wellness among older adults who have served in the military. Veterans aged 65 and up can participate in an intervention program such as strength and aerobic exercises or join group classes like tai chi, dancing, walking, and balance. According to the study of Brown et al. (2021), the preservation of their health and well-being and the social opportunity to spend time with others were factors that motivated veterans to join Gerofit. Interest/enjoyment was rated second highest in terms of intrinsic motivation, indicating that future research is needed to see whether these and other factors (such as competence and attractiveness) have an impact on older individuals' engagement in group fitness programs.
Method Used
This study employed a quantitative descriptive-correlational design, which applies correlational statistics to measure and describe the degree of association among variables or sets of scores.
This study was conducted to determine: (1) the level of physical activity and self-efficacy on exercise among elderly retired Philippine Army personnel; (2) the relationship between the level of physical activity and the level of self-efficacy on exercise; (3) the relationship between the level of physical activity and length of service, age, and gender; and (4) the relationship between the level of self-efficacy and length of service, age, and gender among elderly retired Philippine Army personnel in Zamboanga del Norte.
Research Instrument
There were two instruments used in this study. One instrument was adapted from the Godin and Shephard Leisure-Time Physical Activity Questionnaire (Godin, 1987). Several studies (Jacobs et al., 1993; Miller et al., 1994; Sallis et al., 1993) tested and confirmed the questionnaire's validity in assessing leisure-time physical activity. Using the kappa index, the questionnaire's reliability was found to be 0.94 for the strenuous activity score and 0.74 for the total leisure-time physical activities (LTPA) score when it was designed. The Leisure-Time Exercise Questionnaire classifies activities into three categories: "strenuous," "moderate," and "light." Multiplying the number of times per week each activity is performed for more than 15 minutes by its coefficient yields a score corresponding to energy expenditure (metabolic equivalent [MET]). The coefficients reflect MET intensity ratings: strenuous/exhausting exercises are 9 METs, moderate exercises are 5 METs, and light exercises are 3 METs (Sari & Erdogan, 2016). The Godin Scale Score is interpreted as follows: Active (substantial benefits) = 24 units or more; Moderately Active (some benefits) = 14-23 units; Insufficiently Active (less substantial or low benefits) = less than 14 units.
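The Godin score computation described above is a weighted weekly count. The following sketch (the function names are ours, not the questionnaire's) multiplies the weekly frequencies of >15-minute bouts by the 9/5/3 MET coefficients and applies the published cut-offs:

```python
def godin_score(strenuous, moderate, light):
    """Each argument is the number of times per week an activity of that
    intensity was performed for more than 15 minutes.
    MET coefficients: strenuous = 9, moderate = 5, light = 3."""
    return 9 * strenuous + 5 * moderate + 3 * light

def godin_category(score):
    """Interpretation thresholds from the Godin Scale Score."""
    if score >= 24:
        return "Active (substantial benefits)"
    if score >= 14:
        return "Moderately Active (some benefits)"
    return "Insufficiently Active (less substantial or low benefits)"

# Example: 2 strenuous + 1 moderate + 1 light bout per week -> 26 units.
print(godin_score(2, 1, 1))   # 26
print(godin_category(26))     # Active (substantial benefits)
```

So a respondent who does Zumba (strenuous) twice, walks briskly (moderate) once, and strolls (light) once in a week scores 26 units and falls in the "Active" category.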
On the other hand, the Self-Efficacy and Exercise Habits Survey comprises 12 activities that people might do to urge themselves to increase or maintain regular exercise. Responses use a five-point Likert scale ranging from "I know I cannot" (1) to "I know I can" (5), with an additional "does not apply" option. The two factors of the Self-Efficacy and Exercise Habits Survey are scored as follows: "sticking to it" is the mean of items 2, 3, 5, 6, 8, 9, 10, and 11, and "making time for exercise" is the mean of items 1, 4, 7, and 12 (Sallis et al., 1988).
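Scoring the two factors is a matter of averaging the relevant item ratings. The sketch below is our illustration, not the instrument's published code; it assumes item 12 completes the four-item "making time for exercise" set (the source text reads "1, 4, 7, 2", and the remaining item of the 12 is item 12) and skips items marked "does not apply":

```python
STICKING_TO_IT = [2, 3, 5, 6, 8, 9, 10, 11]
MAKING_TIME = [1, 4, 7, 12]  # assumption: item 12 completes the 12-item survey

def subscale_means(responses):
    """responses: dict mapping item number (1-12) to a 1-5 rating,
    or None where the respondent answered 'does not apply'."""
    def mean_of(items):
        vals = [responses[i] for i in items if responses.get(i) is not None]
        return sum(vals) / len(vals)
    return {
        "sticking_to_it": mean_of(STICKING_TO_IT),
        "making_time_for_exercise": mean_of(MAKING_TIME),
    }

# Example: all items rated 4, except item 7, which does not apply.
ratings = {i: 4 for i in range(1, 13)}
ratings[7] = None
print(subscale_means(ratings))  # both subscale means come out to 4.0
```

Excluding "does not apply" items from the denominator keeps a subscale mean on the same 1-5 scale regardless of how many items were answered.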
Statistical Treatment of the Data
In this study, a percentage frequency distribution was employed to show the relative frequency, with corresponding percentages, of survey responses and other information gathered, presented in tabular or graphic form. Correlational statistics were used to show the relationship between variables. The Pearson product-moment correlation coefficient measured the association between two variables, quantifying how strongly a change in one variable is associated with a change in the other. The Jeffreys's Amazing Statistics Program (JASP) software was used to analyze all the gathered data.
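The Pearson product-moment coefficient that JASP reports can be illustrated in a few lines of standard-library Python. This is a generic sketch of the statistic itself, not the study's actual data or the JASP implementation:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length samples:
    covariance of x and y divided by the product of their deviations."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

# Perfectly linear data gives r = 1.0; a reversed trend gives r = -1.0.
print(round(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]), 3))   # 1.0
print(round(pearson_r([1, 2, 3], [3, 2, 1]), 3))         # -1.0
```

The sign of r indicates the direction of the association (positive: the variables move together; negative: they move oppositely), while the separate p-value, computed by JASP, indicates whether the association is statistically significant.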
IV. RESULTS AND DISCUSSIONS
There was a total of 19 responses retrieved from the 25 questionnaires distributed during the Armed Forces Veteran Association-Zamboanga del Norte's monthly meeting. Some questionnaires were excluded due to factors like incomplete data, and some invitees declined to participate. Figure 2 shows the distribution of respondents according to gender. Of the 19 respondents who qualified for the study, 11 were male (57.9%) and 8 were female (42.1%). According to Strong et al. (2017), over 2 million women served as veterans in 2014, or about 10% of all veterans. Many female veterans must not only balance their personal and professional obligations but also learn to live with the physical and/or mental health issues they develop after returning from deployment. How well a woman transitions to civilian life after deployment may be influenced by the biological, psychological, and social factors present in her household and neighborhood. The following elements can facilitate or obstruct the reintegration of female veterans: (a) the availability of gender-specific Veterans Affairs policies and services; (b) accessibility to education and employment; (c) supports tailored to mental health and/or sexual trauma suffered in service; and (d) social stigmas associated with being a female veteran. Figure 2 also offers a convenient visual representation of the demographic distribution of respondents' age and military tenure according to gender. The data show that the 66-75 age ranges were represented by both genders, with females being dominant in this bracket, while the most senior respondents were male. Vespa (2020) highlighted that women make up 9% (1.7 million) of veterans, and by 2040 that number is expected to rise to 17%. Male respondents were represented across all age ranges.
Fig 2. Distribution of Gender
The frequency of physical activities among respondents aged 60-65 years old is displayed in Table 2. The physical activities were categorized as strenuous, moderate, and mild: strenuous activity is when the respondent feels the heart beat rapidly; moderate activity is not exhausting; and mild activity requires only minimal effort. The activities listed include walking and Zumba. Respondents engaged more frequently in Zumba than in walking and considered it a strenuous exercise, which they performed four to five times a week. The study of Ljubojevic et al. (2023) implies that the Zumba fitness workout is an effective exercise technique for enhancing pulmonary function in sedentary women, in addition to reducing body fat percentage. All respondents agreed, though, that walking is their moderate and mild exercise, with varying frequencies per week. An increase in time spent brisk walking may increase intestinal Bacteroides in association with improved cardiorespiratory fitness in healthy elderly women (Moreta et al., 2019).
Dahonan-Cuyacot et al./ Physical Activity and Self-Efficacy on Exercise among Elderly Retired
The table above (Table 3) provides a clear overview of the frequency of physical activities of respondents aged 66-70 years old. Unlike the previous table, this age group categorizes walking as strenuous, moderate, and mild activity at the same time. Many respondents (89%) performed it more than 5 times a week, while 11% of them chopped wood four to five times a week and considered it a strenuous activity. 56% of the respondents engaged in slow walking as an exercise that does not require them to exert effort. Other respondents tried other activities such as bicycling (11%) two to three times a week and Tai Chi (22%). A meta-analysis on the effects of Tai Chi reveals strong evidence that the exercise lowers blood pressure, total cholesterol, triglycerides, LDL-C, and blood glucose, and significantly increases quality of life (Liang et al., 2020). Bicycling, on the other hand, might be as useful as walking in patients with peripheral arterial disease (Haga et al., 2020).
Table 4 demonstrates the level of physical activities for the age range 71-75 years old. Walking is the physical activity of their choice, with most of them performing it two to three times a week. For the exhausting activity, 100% of the respondents engaged in bicycling four to five times per week, and 50% were involved in Zumba. Bicycling helps Parkinson's disease (PD) patients with their motor function and enhances key aspects of their gait (Tiihonen et al., 2021). The table also reveals that the frequency of respondents' physical activity had been reduced to mostly two to three times a week.
Table 6 shows the self-efficacy level of the age group 60-65 years old. There were twelve items in the survey assessing how confident respondents were that they could motivate themselves to perform the tasks consistently for at least six months. These items evaluate their self-efficacy and confidence in performing physical exercises.
In the context of exercise, confidence level typically refers to an individual's belief or certainty in their ability to successfully perform a specific exercise or physical activity; it reflects their self-assurance and mental state regarding their physical capabilities. Respondents believed that they were extremely confident that they could "get up early in the morning to exercise" (item 4), "stick to their exercise program even through a stressful life change" (item 6), and "stick to the program even when there are household chores to attend to" (item 9). Item 11 scored the lowest, with respondents only slightly confident that they could "stick to their exercise program when social obligations are very time consuming." Physically active adults in the Established Populations for Epidemiologic Studies of the Elderly (EPESE) were more likely to survive to age 80 or beyond and had approximately half the risk of dying with disability compared to their sedentary peers (Izquierdo et al., 2021).
The self-efficacy level among the 66-70 age group is presented in Table 7. Results suggest that, of all the items, they are very confident that they can "get up to exercise" and "stick to their exercise program even when they have excessive demands at work," each with a weighted mean of 3.44. It is also noted that they are not at all confident when asked if they can "stick to your exercise program when your family is demanding more time from you," with a weighted mean of 1.44. It is further noted that this age group is not extremely confident in any of the tasks. Confidence in exercise can be influenced by various factors, such as previous experience, knowledge of proper form and technique, physical fitness level, and perceived barriers or challenges. When someone has a high confidence level, they are more likely to approach exercise with enthusiasm, motivation, and a positive mindset, which can lead to better performance and adherence to a workout routine (Chan et al., 2018).
Age group 71-75 years old is extremely confident that they can "continue to exercise with others even though they seem too fast or too slow for you," as shown in Table 8. It is also noted that younger respondents are not as confident as this age group in their commitment to the same task. However, they are not at all confident that they can "attend a party only after exercising" and "stick to your exercise program when social obligations are very time consuming," with weighted means of 1.75 and 1.5, respectively. Furthermore, the respondents also demonstrated that they could very confidently set aside time for a physical activity program of at least 30 minutes, three times a week (WM = 3.5). Exercise training positively impacts mental health (Hall et al., 2020), and the motivation to perform it depends on the person's commitment and motivation.
Table 9 shows that, overall, this age group's exercise confidence level and self-efficacy have dropped. None of them is extremely confident or very confident in performing any of the tasks. The data also reveal that they are not confident at all in performing task 7 ("attend a party only after exercising"), task 10 ("stick to your exercise program even when you have excessive demands at work"), and task 11 ("stick to your exercise program when social obligations are very time consuming"). It is well observed in the table that for task 5 ("continue to exercise with others even though they seem too fast or too slow for you"), where the 71-75 age group is extremely confident, this age group is only slightly confident in their commitment to the same task.
It is further manifested in Table 9 that there are several tasks respondents above 75 years old are moderately confident of performing, namely tasks 1, 2, 4, 6, 9, and 12. Increasing chronological age was associated with decreased participation in organized sport (particularly team-based) and increased non-participation, according to Smith et al. (2022).
Table 10 presents the relationship of physical activity with age, gender, and length of service of respondents. Using Pearson's correlation, the relationship between physical activity and gender (p = 0.106) and length of service (p = 0.731) is not statistically significant. It is also noted that there is a positive correlation with the variables age (r = 0.894), gender (r = 0.894), and length of service (r = 0.269). These results suggest that if there are any changes in the physical activity variable, all other variables will change in the same direction.
Table 11 shows the relationship of self-efficacy with age, gender, and length of service using Pearson's correlation. The data suggest that the relationship between age and gender is statistically significant (p = .001), as is that between self-efficacy and age (p = 0.002). Both relationships imply positive correlations, with r = 1.00 and r = 0.998, respectively. However, the relationship between self-efficacy and length of service is not significant (p = 0.624), with a negative correlation value of r = -0.376. This result suggests that if there are changes in the self-efficacy variable, the length of service variable will change in the opposite direction (Bhandari, 2022). The level of significance was set at 0.05. The last table shows the relationship between physical activity and self-efficacy among Armed Forces veterans in Zamboanga del Norte.
Using Pearson's correlation, the result suggests that there is no significant correlation between physical activity and the level of self-efficacy, with a p-value of 0.144, higher than the alpha value set at 0.05. Pearson's r value of 0.856 nevertheless reveals a positive correlation, which indicates that when the self-efficacy variable changes, the physical activity variable also changes in the same direction (Turney, 2022).
V. CONCLUSION
This study concluded that respondents aged 60-65 consider walking their mild and moderate exercise and Zumba their strenuous exercise, which they frequently performed four to five times a week. The study also found that elderly people aged 66-70 years old considered walking their strenuous, mild, and moderate exercise; 89% of the respondents in this age group walk more than five times a week. For the respondents aged 71-75 years old, walking was their activity of choice, with most of them performing it two to three times a week; in this age group, 100% engaged in bicycling four to five times a week. Results show that people aged 75 years old and above decreased the frequency of their physical activity to only two to three times a week. Mostly, people of this age considered walking their primary exercise (25%); 50% of the respondents aged 75 and above engaged in mild activities, and the remaining 25% in house cleaning.
Moreover, in the level of self-efficacy, respondents aged 60-65 and 71-75 years old were extremely confident, while respondents aged 66-70 and above 75 were very confident and moderately confident, respectively, in getting up early. In sticking to their exercise program, only respondents aged 60-65 years old were very confident, while the remaining respondents were moderately confident. For doing exercise while feeling depressed, respondents aged 60-65 years old were very confident, while the rest were only slightly confident. For setting aside time for physical activity, respondents aged 60-65 were extremely confident, those aged 71-75 were very confident, and the rest of the groups were moderately confident. Respondents in the 71-75 age group were extremely confident in continuing to exercise with others despite differences in pacing, while respondents aged 60-65 were moderately confident and the rest of the groups only slightly confident. For sticking to their exercise program while undergoing a stressful life event, respondents aged 60-65 were extremely confident, those aged 71-75 very confident, those aged 76 and above moderately confident, and those aged 66-70 slightly confident.
Furthermore, in attending a party only after doing exercise, respondents aged 60-65 were very confident, those aged 66-70 moderately confident, and the rest of the groups not confident at all. For sticking to their exercise program while their family demanded more time from them, respondents aged 60-65 and 71-75 were very confident, while the 76 and above group were slightly confident and the 66-70 group not confident at all. Respondents aged 60-65 and 71-75 were very confident in sticking with their exercise program while having household chores, while the rest of the groups were moderately confident.
Additionally, the 66-70 age group were very confident in sticking to their exercise program even when having excessive demands at work; both the 60-65 and 71-75 age groups were moderately confident, while the 76 and above respondents were not confident at all. The 66-70 age group were moderately confident in sticking to their program even when social obligations were very time consuming; the 60-65 age group were slightly confident, while the 71-75 and 76 and above groups were not confident at all. In reading or studying less in order to exercise, the 60-65-year-old respondents were very confident, while the rest of the groups were moderately confident.
In this regard, it was found that there was no significant relationship between physical activity and age, gender, and length of service. It was also noted that there was a positive correlation with the variables age, gender, and length of service, which suggests that if there were changes in the physical activity variable, the other variables would change in the same direction. Results showed that age and gender had a significant relationship, as did age and self-efficacy, with both relationships showing a positive correlation. However, length of service and self-efficacy did not show a significant relationship, and their correlation was negative.
Lastly, the study showed no significant relationship between physical activity and the level of self-efficacy among Armed Forces veterans in Zamboanga del Norte. Using Pearson's correlation, the results nevertheless revealed a positive correlation, which indicates that when the self-efficacy variable changes, the physical activity variable also changes in the same direction.
"year": 2023,
"sha1": "ad770329658747f360b7b563b21bbf0aff9405a5",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.22161/ijreh.7.4.1",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "70c803c4fb5f292b46b61bed58cd2f2fe1793ef0",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
232324702 | pes2o/s2orc | v3-fos-license | Cardiothoracic imaging findings of Proteus syndrome
In this work, we sought to delineate the prevalence of cardiothoracic imaging findings of Proteus syndrome in a large cohort at our institution. Of 53 individuals with a confirmed diagnosis of Proteus syndrome at our institution from 10/2001 to 10/2019, 38 individuals (men, n = 23; average age = 24 years) underwent cardiothoracic imaging (routine chest CT, CT pulmonary angiography and/or cardiac MRI). All studies were retrospectively and independently reviewed by two fellowship-trained cardiothoracic readers. Disagreements were resolved by consensus. Differences between variables were analyzed via parametric and nonparametric tests based on the normality of the distribution. The cardiothoracic findings of Proteus syndrome were diverse, but several were much more common and included: scoliosis from bony overgrowth (94%), pulmonary venous dilation (62%), band-like areas of lung scarring (56%), and hyperlucent lung parenchyma (50%). In addition, of 20 individuals who underwent cardiac MRI, 9/20 (45%) had intramyocardial fat, mostly involving the endocardial surface of the left ventricular septal wall. There was no statistically significant difference among the functional cardiac parameters between individuals with and without intramyocardial fat. Only one individual with intramyocardial fat had mildly decreased function (LVEF = 53%), while all others had normal ejection fraction.
Scientific Reports | (2021) 11:6577 | https://doi.org/10.1038/s41598-021-86029-0

(NHGRI), Bethesda, MD, USA and was compliant with the Privacy Act (comparable to HIPAA). Individuals were included in this study if they met the following criteria: (1) had a clinical diagnosis of Proteus syndrome and (2) had cross-sectional imaging of the thorax (CT/CTA chest and/or cardiac MRI) available for review. Clinical diagnosis of Proteus syndrome was determined by both the presence of a mosaic AKT1 pathogenic variant as well as meeting the clinical diagnostic criteria for Proteus syndrome detailed in Table 1. Individuals who lacked a mosaic AKT1 pathogenic variant or who only met clinical diagnostic criteria for AKT1-related overgrowth spectrum were excluded. Individuals who were clinically diagnosed with Proteus syndrome but did not have cross-sectional imaging of the thorax were also excluded. Informed consent was obtained from all the participants or their parents/legal guardians for minors (under the age of 18). A subset of the current study population was previously reported by Hannoush et al. and Jamis-Dow et al. 17,18.

Participant imaging. Individuals referred to our institution for Proteus syndrome underwent baseline cardiothoracic imaging of the thorax to assess the degree of cardiothoracic involvement. Subsequent follow-up exams were also obtained if individuals became symptomatic (e.g. CTA for pulmonary embolism). CT and cardiac MRI scanner details are in Supplemental Table S1. The cardiac MRI protocol included cine images in the three long axes and a volumetric short axis cine stack for anatomic and functional analysis. Cardiac optimized, multi-echo Dixon fat-water separation images were also obtained at the same slices as the cine images, as previously described 19,20. This optimized fat-water sequence was used for easy detection of intramyocardial fat, even in thin structures such as the right ventricular (RV) free wall.
Gadolinium was administered in only three of the cardiac MR studies and late gadolinium enhancement images were obtained in each of them.

Table 1 scoring 11: Proteus syndrome, a score of 10 or more points with a mosaic AKT1 variant or 15 or more points without an AKT1 mosaic variant; AKT1-related overgrowth spectrum, a score of 2-9 with an AKT1 mosaic variant.
Image analysis. All chest CT studies were evaluated on a PACS workstation (CareStream, Version 12.1.6.1005, Carestream Health, NY) independently by two readers who were fellowship-trained in thoracic and body imaging and who were aware of the Proteus syndrome diagnosis but were blinded to all other clinical information. The readers (A.S., 10 years of experience and E.B.T., 13 years of experience) evaluated the lungs, heart, pleura, thoracic vasculature, chest wall and mediastinum for cardiothoracic imaging findings that were previously reported in the literature (Supplemental Table S2) 7,12,14-18,21-23. The entire thorax was assessed for the presence of nodules or masses suspicious for malignancy 8,17 and any other additional thoracic findings. Malignancy was determined via biopsy of suspicious lesions. Two fellowship-trained cardiac MRI readers (A.S., 10 years of experience and A.J.B., 3 years of experience) independently reviewed the cardiac MRI studies. Global and regional cardiac function were assessed using cardiac MR cine images. Functional cardiac MRI data (left ventricular ejection fraction (LVEF), left ventricular (LV) end-diastolic volume, LV end-diastolic volume index, LV end-systolic volume, LV end-systolic volume index, LV stroke volume, LV stroke volume index, cardiac output, cardiac index, anteroseptal wall thickness and posterolateral wall thickness) were analyzed using postprocessing software (Argus, Siemens Healthineers, Erlangen, Germany). The presence, location, and distribution of intramyocardial fat were evaluated on cardiac fat-water separation images using the 17 segment model of the American Heart Association 24. Segmental findings were combined into the distribution of LV walls: anterior, septal, inferior and lateral.
The presence of fat was described as epicardial (outer portion of the wall), endocardial (inner portion of the wall), midwall (middle of the wall), transmural (full thickness involvement of the wall), diffuse (entire wall involved), or focal (small, < 5 mm area of a wall). Consensus read was performed to resolve any disagreements between the readers for both chest CT and cardiac MRI studies.
Statistical analysis. Statistical analyses were performed using GraphPad Prism version 8.4.1 (676) for Windows (GraphPad Software, San Diego, California, USA, http://www.graphpad.com). The Shapiro-Wilk test was used to assess the normality of the data. Percentage was used as the descriptive index for the qualitative variables and mean ± standard deviation (SD) was used to describe the quantitative variables. Differences in cardiac functional parameters between subjects with and without intramyocardial fat were compared with the unpaired t-test and nonparametric tests, as appropriate. Fisher's exact test was used to compare the difference in the presence of fat location in the LV and RV. A two-tailed P < 0.05 was considered to indicate a significant difference.
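Of the tests listed in this paragraph, Fisher's exact test on a 2×2 table is compact enough to sketch from first principles. The following pure-Python version is an illustration based on the standard hypergeometric definition of the two-sided test, not the GraphPad Prism implementation the authors used; the example counts (9/20 vs 6/20 positives) mirror the kind of LV-versus-RV comparison described here.

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's exact test p-value for the 2x2 table [[a, b], [c, d]]."""
    row1, row2, col1, n = a + b, c + d, a + c, a + b + c + d

    def prob(x):
        # Hypergeometric probability of x "successes" in row 1, margins fixed.
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = prob(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    # Sum probabilities of all tables as extreme as (or more than) the observed one.
    return sum(prob(x) for x in range(lo, hi + 1) if prob(x) <= p_obs + 1e-12)

# e.g., a finding present in 9/20 individuals in one group vs 6/20 in another:
p = fisher_exact_2x2(9, 11, 6, 14)
```

This is only a sketch of the test's mechanics; for real analyses a vetted implementation (e.g., a statistics package) would be used.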
Results
All 53 individuals referred to our institution were diagnosed with Proteus syndrome; none were diagnosed with AKT1-related overgrowth spectrum. Fifteen of the 53 individuals with Proteus syndrome did not have any cross-sectional cardiothoracic imaging. Thus, the remaining 38 individuals with Proteus syndrome (mean age: 23 years; range: 9-61 years) who had cross-sectional cardiothoracic imaging available for review were included in the study. This group comprised 23 males (mean age: 24 years; range: 9-61 years) and 15 females (mean age: 23 years; range: 10-54 years).
Of the 38 individuals with cross-sectional cardiothoracic imaging, 18 underwent only chest CT imaging, 16 chest CT imaging and cardiac MRI, and four only cardiac MRI. Thus, of the 38 individuals with cardiothoracic imaging, 34 had chest CT imaging and 20 had cardiac MRI. Among the 34 individuals who underwent chest CT, ten had a routine chest CT without contrast, four had a routine chest CT with contrast and 20 had CT pulmonary angiography.
Cardiac involvement. Of the cohort of 38 patients who underwent imaging of the thorax, a subset of 20 individuals underwent dedicated cardiac MRI assessment for myocardial fat. Nine of the 20 individuals (45%) showed intramyocardial fat on the fat-water separation sequence (Fig. 1). Within the LV, the intramyocardial fat was more commonly present in the septal wall (n = 9, 45%) compared to other walls of the LV (P = 0.0029). The RV was involved less often than the LV (n = 6 and n = 9, respectively). Within the RV, the RV free wall was the most common location for the intramyocardial fat (n = 6, 30%); however, this was not significantly different from other RV locations (P > 0.05). Intramyocardial fat distribution patterns were variable and included endocardial, midwall, epicardial, transmural, diffuse, and focal (Table 2). Some individuals had more than one pattern of intramyocardial fat within the same LV segment. In the LV, intramyocardial fat was most commonly located in an endocardial distribution. In the RV, intramyocardial fat was most commonly present diffusely within the RV free wall.
Chest CTs were obtained in 34 individuals and were also evaluated for the presence of intramyocardial fat (Fig. 1, Table 4). Sixteen of the 34 individuals who underwent chest CT also had a cardiac MRI, and these included seven of the nine individuals with intramyocardial fat on cardiac MR. However, intramyocardial fat was missed on CT in one of these seven individuals by both readers, who agreed this was most likely due to cardiac motion on the non-gated chest CT study. There were a total of 18 patients who only had chest CTs and no dedicated cardiac MRI to assess for intramyocardial fat. Evaluation of these 18 individuals detected only one with intramyocardial fat. Thus, evaluation of the 34 chest CTs detected a total of only 7 (21%) cases of intramyocardial fat.
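The modality and detection counts reported in this section are internally consistent, as a quick arithmetic cross-check (numbers taken directly from the text above) shows:

```python
# Imaging modality breakdown reported in the Results.
ct_only, both, mri_only = 18, 16, 4
total = ct_only + both + mri_only      # individuals with any cardiothoracic imaging
n_ct = ct_only + both                  # individuals with chest CT
n_mri = both + mri_only                # individuals with cardiac MRI
assert (total, n_ct, n_mri) == (38, 34, 20)

# CT detection of intramyocardial fat: 6 of the 7 MRI-positive cases with a CT
# were seen (one was missed), plus 1 case found among the 18 CT-only individuals.
detected_on_ct = 6 + 1
assert round(100 * detected_on_ct / n_ct) == 21   # 7/34 ≈ 21%
```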
There was no statistically significant difference among any of the cardiac functional parameters in the individuals with intramyocardial fat compared to those without (Table 3). The average LVEF of the individuals who underwent cardiac MR was normal (60 ± 5%), with only one individual (female, aged 10 years) who had intramyocardial fat associated with a mildly decreased LVEF of 53% and mild diffuse hypokinesis without a focal regional wall motion abnormality. No wall motion abnormalities were found in any of the other individuals. Only three individuals received gadolinium, and only one individual had a small patch of enhancement.

Airway and lung parenchymal involvement. Among those who underwent chest CT (n = 34), the most common findings in the lung parenchyma were hyperlucent lung parenchyma (n = 17, 50%), pulmonary nodules (n = 17, 50%), and bandlike areas of scarring (n = 19, 56%). Hyperlucent lung parenchyma usually involved < 50% of the lobe and was most common in the right upper lobe (n = 11, 32%). Lung cysts nearly always involved < 50% of a lobe and most commonly involved the right lower lobe (n = 6, 18%). Both calcified and non-calcified nodules were present. Nine individuals had only non-calcified nodules, four individuals had only calcified nodules and four had both calcified and non-calcified nodules. All nodules were well-circumscribed, smoothly marginated, and round or oval in shape. The calcified nodules were completely calcified and ranged in size from 1 to 6.5 mm. Non-calcified nodules ranged from 1 to 17 mm in size and were completely soft tissue in attenuation. Large non-calcified nodules (10 to 17 mm) were observed in only one individual of our cohort. Interlobular septal thickening was present in four individuals (12%). Incidental, acute lung parenchymal findings in four individuals (12%) who had pneumonia at the time of scanning included consolidation and ground glass opacity. Minimal bronchiectasis was seen in one individual (3%).
Tracheal diverticulum and tracheal bronchus were incidental findings in two individuals (6%). One individual (a 38-year-old non-smoker) had a perihilar, left upper lobe mass (3%), which was biopsied; findings were compatible with primary lung adenocarcinoma. These findings are illustrated in Figs. 2 and 3 and summarized in Table 4.
Vascular involvement.
More than half of the individuals with chest CTs had pulmonary venous dilation (n = 21, 62%), as shown in Fig. 3. Affected pulmonary veins were diffusely dilated. Right and left pulmonary veins were involved similarly (n = 17 and n = 16, respectively). Ten individuals also had abnormal enlargement of systemic veins: internal jugular vein, subclavian vein, brachiocephalic vein, superior vena cava and the azygos vein. Pulmonary embolism was detected in two symptomatic individuals (6%) on CT angiography (Fig. 3). Mild aneurysmal dilation (diameter 42-43 mm) of the ascending aorta was present in two individuals (6%). The vascular findings are summarized in Table 4.
Pleural involvement.
Pleural effusion and thickening were detected in three (9%) and five (15%) individuals, respectively (Table 4). Pleural effusion was found incidentally in individuals acutely ill with pneumonia and likely unrelated to Proteus syndrome. No individuals had pleural nodules.
Chest wall involvement. Skeletal findings included scoliosis, asymmetric vertebral body growth, and overgrowth of posterior elements of the spine, ribs, and scapulae. Scoliosis secondary to asymmetric overgrowth of various portions of the spine was the most common chest wall manifestation (n = 32, 94%) in this study. Skeletal findings of Proteus syndrome are depicted in Fig. 5. Overgrowth of chest wall fat was seen in ten individuals (29%). One individual (3%) had a well-circumscribed area of focal fat overgrowth in the right axilla suspicious for lipoma, and three individuals (9%) had asymmetric breast size and/or asymmetric breast tissue/fat composition related to fat overgrowth. In individuals who had both deformed skeletal anatomy (e.g. scoliosis) and soft tissue asymmetry, the soft tissue asymmetry was not associated with the areas of skeletal deformity. Chest wall adipose overgrowth findings are depicted in Fig. 6 and summarized in Table 4.
Discussion
In this study, we were able to further define the prevalence of cardiothoracic imaging findings in Proteus syndrome. Intramyocardial fat was common (45%) on cardiac MRI and its severity had no association with age. Although present in many places in both ventricles, the intramyocardial fat was always present in the septal wall. This intramyocardial septal wall fat is on the mild end of a spectrum of abnormal fat growth that includes the myocardial septal lipoma described within the clinical diagnostic criteria for Proteus syndrome (Table 1) at the more severe end of the spectrum. Interestingly, intramyocardial fat was also detected on the routine, non-cardiac-gated chest CTs about half of the time (21%). This illustrates the importance of looking at the heart on routine chest CT exams, but also confirms that chest CT is an imperfect technique for intramyocardial fat detection as it missed cases detected by cardiac MRI. The intramyocardial fat does not appear to correlate with wall motion abnormalities or significantly impact LV function. Although a previous study reported arrhythmia (right bundle branch block) associated with intramyocardial fat in Proteus syndrome 25, we observed no arrhythmia associated with the presence of intramyocardial fat or any other adverse cardiac complications. As Proteus syndrome is an overgrowth syndrome, we hypothesize that this intramyocardial fat may represent overgrowth of areas of physiologic intramyocardial fat that has been well described in healthy individuals. In healthy individuals, intramyocardial fat is commonly seen in the RV up to 85% of the time, but is only present in the LV or in both ventricles < 20% of the time 26,27. Thus, Proteus syndrome differs by commonly having intramyocardial fat in both ventricles, with LV predominance over the RV. However, intramyocardial fat in Proteus syndrome is associated with no wall motion abnormality, a feature it shares with the intramyocardial fat present in healthy individuals.
Unlike Proteus syndrome, pathologic conditions with intramyocardial fat tend to have associated ventricular chamber enlargement, dysfunction and wall motion abnormalities. These include arrhythmogenic RV cardiomyopathy (ARVC), healed myocardial infarction, dilated cardiomyopathy, and Duchenne's muscular dystrophy. It should be noted that myocardial fat is no longer part of the diagnostic criteria for ARVC 28 . Further follow-up of the individuals with intramyocardial fat will be necessary to characterize its behavior over time and determine if it behaves similarly to a hamartomatous anomaly or more similarly to a fatty neoplasia.
Pulmonary nodules, areas of hyperlucent lung, and bandlike areas of scarring were present in at least 50% of individuals, which is much greater than the 8-13% described in prior reports 8,12,17,29 . Although we did not have pathology available for the pulmonary nodules in this study, they are suspected to be hamartomas based on prior published pathology 12 . In regards to the hyperlucent lung parenchyma, previous studies with pathology found that the areas of hyperlucent lung parenchyma corresponded to panlobular emphysema 12,15 . However, additional pathological evaluation is needed to further validate this correlation.
Pulmonary venous dilation was present in over half of our cohort. However, systemic venous dilation was less common. Venous dilation in Proteus syndrome is secondary to vascular overgrowth, but it remains uncertain as to why the pulmonary veins are affected more commonly than systemic veins. Vascular anomalies (tumors, malformations) have been previously reported 6,8,30 , however we did not observe any.
Chest wall abnormalities related to dysregulated skeletal and fat overgrowth are extremely common in Proteus syndrome 1,5,7,17, and were also commonly observed in the current study. Scoliosis secondary to overgrowth of various vertebral elements is almost universally present and tends to be progressive in Proteus syndrome, leading to significant deformity of the chest wall and eventual respiratory compromise that may necessitate orthopedic intervention. Additionally, we observed significant asymmetric breast size and/or composition related to an abnormal ratio of fat and parenchymal tissue in the affected breast. To our knowledge, this finding has not been previously reported.
The main thoracic complications of Proteus syndrome include pulmonary thromboembolism, respiratory failure, pneumonia, and malignancy. The risk of pulmonary thromboembolism in Proteus syndrome, which was observed in our cohort, is elevated relative to the general population and is thought to be due to decreased anticoagulation proteins, venous stasis in vascular malformations, as well as the effects of the pathogenic AKT1 variant on endothelial cells 1-3,5,8,10. Proteus syndrome is associated with an increased risk of developing malignancy of any kind, which is already well documented in the literature, with meningioma and ovarian cystadenoma being among the more common 1,2,6-8. In our population of individuals with Proteus syndrome, we found one left upper lobe mass that proved to be primary lung adenocarcinoma in an individual with no risk factors other than Proteus syndrome.
Limitations
Our study has several limitations. The main limitation is that it is retrospective in nature. Although we have a large cohort of individuals for such a rare disease, not all individuals had both cardiac MRI and chest CT imaging performed. Second, the majority of the cardiac MRI studies and several of the chest CTs were performed without contrast. Lack of contrast limited our ability to assess for myocardial fibrosis on cardiac MRI and to assess for pulmonary embolism on the non-contrast chest CTs. Third, although many individuals in our cohort had lung parenchymal findings, pulmonary function testing was not available for correlation. This is an important clinical question and will be looked at in a future study. Finally, pathology was not available for all of the imaging findings we observed.
Conclusion
Proteus syndrome is a rare disorder with many cardiothoracic imaging findings that are much more common than previously described in the literature. The most common areas of involvement include the chest wall (skeletal and adipose tissue), lung parenchyma, thoracic vasculature, and myocardium. Additionally, it is already known that this syndrome carries an increased risk of complications that include pulmonary thromboembolism, respiratory failure, pneumonia, and malignancy, findings which we also observed in our cohort of individuals with Proteus syndrome. | 2021-03-24T06:16:52.061Z | 2021-03-22T00:00:00.000 | {
"year": 2021,
"sha1": "ec1ced75a4b1ed36cd18ac3a4c4db107666e51ec",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-021-86029-0.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "06477f82e233174f7ee6c98c4addd81ad7ebdf9a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
256863143 | pes2o/s2orc | v3-fos-license | Characteristics of bibliometric analyses of the complementary, alternative, and integrative medicine literature: A scoping review protocol
Background: There is a growing body of literature on complementary, alternative, and integrative medicine (CAIM), which offers a holistic approach to health and the maintenance of social and cultural values. Bibliometric analyses are an increasingly commonly used method employing quantitative statistical techniques to understand trends in a particular scientific field. The objective of this scoping review is to investigate the quantity and characteristics of evidence in relation to bibliometric analyses of CAIM literature. Methods: The following bibliographic databases will be searched: MEDLINE, EMBASE, PsycINFO, AMED, CINAHL, Scopus and Web of Science. Studies published in English, conducting any type of bibliometric analysis involving any CAIM therapies, as detailed by an operational definition of CAIM adopted by Cochrane Complementary Medicine, will be included. Conference abstracts and study protocols will be excluded. The following variables will be extracted from included studies: title, author, year, country, study objective, type of CAIM, health condition targeted, databases searched in the bibliometric analysis, the type of bibliometric variables assessed, how bibliometric information was reported, main findings, conclusions, and limitations. Findings will be summarized narratively, as well as in tabular and graphical format. Conclusions: To the best of our knowledge, this scoping review will be the first to investigate the characteristics of evidence in relation to bibliometric analyses on CAIM literature. The findings of this review may be useful to identify variations in the objectives, methods, and results of bibliometric analyses of CAIM research literature.
Introduction
Complementary, alternative, and integrative medicine (CAIM) is a complex term referring to three distinct concepts related to the use of non-conventional medicine. 1,2 "Complementary medicine" describes non-conventional therapeutic approaches that are used together with conventional therapies. 1 "Alternative medicine" describes non-conventional therapeutic approaches used in replacement of conventional therapies. 1 "Integrative medicine" describes the combined use of both conventional and non-conventional therapies in a coordinated manner. 1,2 For the purpose of this study, each of these approaches may also incorporate elements of "traditional medicine", which is the "knowledge, skills and practices based on the theories, beliefs and experiences indigenous to different cultures, used in the maintenance of health and in the prevention, diagnosis, improvement or treatment of physical and mental illness". 3 All these concepts will be collectively referred to as "complementary, alternative, and integrative medicine" (abbreviated as CAIM).

CAIM practitioners often emphasize a holistic approach to health, including the consideration of cultural and social values. 2,4 Clients often perceive CAIM as better at providing individualized, person-centred care compared to mainstream health approaches. 5,6 Prevalence of CAIM use is increasing worldwide and, accordingly, the body of literature on CAIM research has grown immensely, with the steepest increase in CAIM publications observed between the mid-2000s and mid-2010s. 7,8 It is of interest to determine broad research trends of CAIM research literature and to identify specific CAIM topics explored (e.g., acupuncture, aromatherapy). While some CAIM therapies (e.g., yoga for depressive symptoms, 9 exercise therapy for reducing falls in older people 10) have been shown to be safe and effective, many other therapies have insufficient evidence to demonstrate their effectiveness or safety. 11,12 Furthermore, even when basic effectiveness and safety are established, questions often remain about key characteristics such as intervention dose and implementation or applicability to different patient populations and settings. Bibliometric analyses can be used to detect knowledge gaps and to identify research trends that help predict whether such knowledge gaps are likely to be met. Bibliometric analysis involves the application of quantitative statistical techniques to bibliometric data (e.g., total number of citations, total number of publications) and can be used for a variety of purposes, such as identifying patterns in a given field of research. 13 Bibliometric analysis techniques can broadly belong to categories of performance analysis (i.e., techniques measuring contributions of research constituents) or science mapping (i.e., techniques measuring relationships between research constituents). 13 Examples of research constituents include authors, countries, institutions, and topics. 13

Performance analysis techniques can further be divided into publication-related metrics (e.g., total number of publications), citation-related metrics (e.g., average citations, total number of citations), and citation-and-publication-related metrics (e.g., h-index, g-index, proportion of cited publications). 13 Science mapping techniques can include methods such as citation analysis, co-citation analysis, bibliographic coupling, co-word analysis, and co-authorship analysis. 13 For instance, co-citation analysis examines the frequency of publications being cited together, which may reveal thematic clusters. 13 Enrichment techniques of network metrics (i.e., quantitative measures of research constituents' relative importance), clustering (i.e., grouping of similar objects using clustering algorithms), and visualization (i.e., graphical visualizations of research constituents' connections) can be employed to enhance understanding of science mapping techniques. 13 For instance, software like VOSviewer can be used to graphically visualize thematic clusters in co-citation analysis. 13 Advantages of bibliometric analyses include facilitating the examination of large datasets that are not feasible for investigation by manual review (e.g., literature reviews). 13 Further, the relatively low cost and rapidity of conducting bibliometric analyses allow for replicable methods. 13,14 Use of bibliometric techniques across different scientific fields is becoming increasingly popular. 15,16 A scoping review, which involves mapping the current literature and identifying gaps in research, 17 would be appropriate to summarize literature on CAIM bibliometric analyses. To the best of our knowledge, no systematic or scoping reviews have been conducted on bibliometric analyses of CAIM therapies. A preliminary search of the Cochrane Database of Systematic Reviews and the Scopus database revealed no existing systematic or scoping reviews on the topic. Synthesizing bibliometric analyses on CAIM will provide insight into trends, such as the types of CAIM literature typically analysed from a bibliometric lens, the statistical techniques that bibliometric analyses on CAIM utilize, and, more broadly, where the field of CAIM is headed. While there are some guidelines on how to conduct bibliometric analyses, 13 there is no universal standard or consensus in the literature on what a bibliometric analysis entails. Accordingly, this review will also improve understanding of how bibliometric analyses are currently conducted on this topic. Thus, the purpose of this review will be to understand the characteristics of bibliometric analyses of CAIM research literature, which can inform future work within the field.

REVISED Amendments from Version 1
To accommodate the reviewers' corrections, we have made some changes to the article. We previously stated that we would be extracting information on "health conditions targeted" in bibliometric analyses. We have changed this to "health conditions managed" to be more inclusive of articles that do not specifically discuss treatment, but may cover diagnostics, clinical reasoning, or management of conditions. We have also re-phrased sentences in the introduction paragraph to clarify that standardized procedures for conducting bibliometric analyses do exist; however, there is no universal standard or consensus on what they should entail at minimum. Any further responses from the reviewers can be found at the end of the article.
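The citation-and-publication metrics mentioned in the introduction (e.g., the h-index and g-index) have simple definitions that can be computed directly from a list of per-paper citation counts. A minimal Python sketch, illustrative only and not part of the protocol:

```python
def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    ranked = sorted(citations, reverse=True)
    # Once a paper's citations fall below its rank, all later papers fail too.
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

def g_index(citations):
    """Largest g such that the top g papers together have >= g**2 citations."""
    ranked = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, c in enumerate(ranked, start=1):
        total += c
        if total >= rank * rank:
            g = rank
    return g

print(h_index([10, 8, 5, 4, 3]))  # → 4
print(g_index([10, 8, 5, 4, 3]))  # → 5
```

In this toy example, four papers have at least four citations each (h = 4), while the top five papers' 30 total citations exceed 5² = 25 (g = 5), illustrating how the g-index gives extra weight to highly cited papers.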
Approach and eligibility criteria
The present scoping review's research question is: "What are the characteristics of bibliometric analyses of the CAIM literature?" The proposed scoping review will be conducted in accordance with the Joanna Briggs Institute (JBI) methodology for scoping reviews, which recommends quantitative and descriptive analyses of main findings. 17 This protocol's associated files (Appendix A - Search Strategies, and the PRISMA ScR Checklist) have been made openly available on Open Science Framework (see Extended data and Reporting guidelines 33).
The search strategy will aim to locate studies published in journals only (excluding grey literature), with no date restrictions (i.e., from inception to date of search execution). The only eligible study design will be bibliometric analyses (encompassing terms for "bibliometric analysis", "scientometric analysis", and "citation analysis"), or articles that include both a bibliometric analysis and another study design (e.g., bibliometric analysis and systematic review). All included bibliometric analyses will be focused on one or more CAIM therapies, as defined by a recently published operational definition of CAIM. 18 This operational definition was created using a systematic search of four peer-reviewed or other quality-assessed resource types: 1) peer-reviewed articles from seven major bibliographic databases, 2) "Aims and Scope" webpages of peer-reviewed CAIM journals, 3) entries containing CAIM therapies in highly accessed online encyclopedias, and 4) highly ranked websites resulting from Health On the Net Code of Conduct (HONcode) searches. 18 To date, this operational definition includes the greatest number of evidence sources and is the only one that captures the concept of "integrative medicine", alongside "complementary medicine" and "alternative medicine". 18 Grey literature sources will be excluded as bibliometric analysis studies are unlikely to be found outside of traditional academic publishing channels. Conference abstracts and study protocols will also be excluded as they likely will not contain adequate information required to describe the characteristics of bibliometric analyses on CAIM literature. Finally, all non-English publications will be excluded, due to language constraints of the authors.
Search strategy
The following electronic databases will be searched: MEDLINE, EMBASE, PsycINFO, and AMED (accessed via the OVID research platform), as well as CINAHL (accessed via EBSCOhost), Scopus, and Web of Science. The search strategy will include a comprehensive search string of CAIM terms19 encompassing 604 distinct therapies described previously in an operational definition of CAIM.18 This search string of CAIM terms was created for OVID (e.g., MEDLINE, EMBASE, PsycINFO, AMED) and EBSCO platform (e.g., CINAHL) databases, as well as the Scopus and Web of Science databases.19 Relevant scientific names and/or synonyms were added as terms (i.e., keywords, phrases), alongside relevant Boolean operators. The comprehensive search string of CAIM terms19 will be combined with search terms for bibliometric analyses (e.g., bibliometric analysis, statistical bibliography, citation analysis). The search strategy, including all identified keywords and equivalent index terms, will be adapted for each included database. All search strategies that will be run are provided in Extended data,33 informed by PRISMA-S guidelines for reporting literature searches.20,33
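As a minimal illustration of how the protocol's two search blocks are combined, the sketch below joins a placeholder subset of CAIM terms with the bibliometric-methodology terms using Boolean operators. The term lists are illustrative stand-ins, not the actual 604-therapy CAIM search string, and no database-specific syntax is shown.

```python
# Minimal sketch of combining a topic search block with a methodology search
# block, as described in the protocol. Term lists are illustrative placeholders.

def build_search_string(topic_terms, method_terms):
    """Join each term list with OR, then AND the two blocks together."""
    topic_block = " OR ".join(f'"{t}"' for t in topic_terms)
    method_block = " OR ".join(f'"{t}"' for t in method_terms)
    return f"({topic_block}) AND ({method_block})"

caim_terms = ["acupuncture", "chiropractic", "integrative medicine"]  # placeholder subset
biblio_terms = ["bibliometric analysis", "statistical bibliography", "citation analysis"]

query = build_search_string(caim_terms, biblio_terms)
print(query)
```

In a real search, each block would also carry database-specific index terms and field tags, adapted per platform as the protocol describes.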
Study and source of evidence selection

Following the search, all identified citations will be collated and exported into Covidence, and duplicates will be removed. All titles/abstracts, followed by full texts, will be screened by reviewers independently and in duplicate. First, pilot title/abstract screening of a sample of twenty articles will be conducted by AQS and HL. A meeting will be held between AQS, HL, and JYN to discuss any challenges and resolve discrepancies. Following the pilot test, all titles/abstracts will be screened for inclusion by AQS and HL. Then, pilot full-text screening will be conducted, in which AQS and HL will screen ten full texts. A meeting will be held between AQS, HL, and JYN to discuss challenges and resolve discrepancies. After this pilot step, full-text screening will be completed by AQS and HL. Reasons for exclusion of full texts will be recorded. Any disagreements that arise between reviewers throughout the selection process will be resolved on a weekly basis through discussion between HL and AQS, with JYN consulted if disagreements still cannot be resolved. The results of the search strategy and the study screening process will be presented in a PRISMA-ScR flow diagram in the final scoping review.21
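The duplicate-removal step above is handled by Covidence; purely as a toy illustration of the idea, the sketch below collapses exported records that share a normalized title. Real deduplication would also compare DOIs, authors, and years.

```python
# Toy sketch of the duplicate-removal step performed after database exports
# are collated. Matching here is by normalized title only; records and
# database names are invented examples.

def deduplicate(records):
    seen, unique = set(), []
    for rec in records:
        # Normalize case and whitespace before comparing titles.
        key = " ".join(rec["title"].lower().split())
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

exports = [
    {"title": "Bibliometric Analysis of Acupuncture Research", "db": "MEDLINE"},
    {"title": "bibliometric analysis of  acupuncture research", "db": "Scopus"},
    {"title": "Citation Analysis of Yoga Trials", "db": "EMBASE"},
]
print(len(deduplicate(exports)))  # duplicates across databases collapsed
```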
Data extraction
Data extraction will be conducted using Excel software. The data extraction form that will be used in this scoping review is informed by Donthu et al.,13 which provides an overview of how to conduct a bibliometric analysis. The extraction form will be developed in two stages. In stage one, AQS and HL will select ten articles at random that met the inclusion criteria from a preliminary search of CAIM and bibliometric analysis search terms on Scopus. AQS and HL will independently extract information from five articles each and meet to discuss discrepancies. A meeting will be held among AQS, HL, and JYN to discuss changes needed to improve the form. In stage two, AQS and HL will identify the ten most highly cited articles that met the inclusion criteria after running a search of CAIM literature and bibliometric analysis terms on Scopus, before independently extracting information from five articles each using the latest version of the data extraction form. Another meeting will be held between AQS and HL, and then with JYN, to approve the latest version of the extraction form.
The following information will be extracted from eligible bibliometric analyses: title, author, year, country, aim of the study, secondary study design (if applicable), the type of CAIM(s), the health condition or population managed, main findings, conclusions, and limitations. Also, the bibliometric information described will be summarized, including the databases searched, the type of bibliometric methodology (i.e., performance analysis [such as citation metrics or publication metrics] versus science mapping [such as co-word analysis, co-authorship analysis, bibliographic coupling, or enrichment techniques]), the number of studies included in the analysis, the number of metrics used, how information was reported (e.g., narrative summary, figures, tables, visualization software used), and how all variable measures align with the Donthu et al.13 guideline for conducting bibliometric analyses.
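To illustrate how such an extraction form might be kept consistent across reviewers, the sketch below encodes the listed variables as a record type. All field names are assumptions derived from the protocol text, not the actual Excel form.

```python
# Illustrative sketch of the data extraction record described in the protocol,
# using a dataclass so every reviewer captures the same fields. Field names
# are assumptions based on the variables listed above.
from dataclasses import dataclass, field

@dataclass
class ExtractionRecord:
    title: str
    first_author: str
    year: int
    country: str
    aim: str
    caim_types: list = field(default_factory=list)
    condition_or_population: str = ""
    secondary_design: str = ""          # e.g., "systematic review", if any
    databases_searched: list = field(default_factory=list)
    methodology: str = ""               # "performance analysis" or "science mapping"
    n_studies_included: int = 0
    n_metrics: int = 0
    reporting_formats: list = field(default_factory=list)  # e.g., ["tables", "VOSviewer map"]

# Invented example record, not data from any real study.
record = ExtractionRecord(
    title="Example bibliometric analysis of acupuncture research",
    first_author="Doe", year=2022, country="Canada",
    aim="Map publication trends", caim_types=["acupuncture"],
    methodology="performance analysis", n_studies_included=1500,
)
print(record.methodology)
```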
To ensure consistency and quality of the extraction, an initial pilot test of data extraction will be conducted by all participating independent reviewers, each assessing five articles. All reviewers will then meet with JYN, HL, and AQS to resolve discrepancies and disagreements. Based on this initial pilot testing, revisions to the data extraction form may be proposed and implemented. Upon completion of the pilot extraction step, reviewers will be divided into two teams led by HL and AQS. Teams will be further divided into pairs of reviewers for duplicate data extraction of the same set of bibliometric full texts. These duplicate extractions will be reviewed by AQS and HL to ensure consistency. Weekly meetings will be held between the reviewers and HL and AQS to standardize the data extraction process and resolve any issues identified. Any conflicts that cannot be resolved will be discussed with JYN.
Risk of bias
Because we will use the JBI methodology for scoping reviews, we have elected not to conduct a risk of bias assessment.
Data analysis and presentation
The data will be summarized descriptively (e.g., frequencies of the number of studies, country, types of CAIM, outcomes reported in the bibliometric analyses), with full results presented in tabular format. The frequency of CAIM bibliometric analysis publications over time will be presented in graphical format. An additional figure will be created highlighting the types of bibliometric information each study reported.
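A minimal stdlib sketch of the planned descriptive summary, using invented example rows in place of extracted records, might look like this:

```python
# Minimal sketch of the descriptive summary described above: frequencies of
# CAIM types and of publications per year. The input rows are invented
# examples standing in for extracted records.
from collections import Counter

records = [
    {"year": 2019, "caim": "acupuncture"},
    {"year": 2021, "caim": "acupuncture"},
    {"year": 2021, "caim": "yoga"},
    {"year": 2022, "caim": "chiropractic"},
]

caim_freq = Counter(r["caim"] for r in records)
per_year = Counter(r["year"] for r in records)

print(caim_freq.most_common(1))  # most frequently studied CAIM type
print(sorted(per_year.items()))  # publications over time, for the trend figure
```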
Dissemination
The findings of this review will be disseminated in scientific journals.
Study status
The literature search is ongoing.
Discussion
We anticipate this project will advance the understanding of topics and trends in CAIM research, including what types of CAIMs are most commonly explored and what health conditions are managed through the use of these CAIMs. Accordingly, it can help CAIM researchers locate bibliometric sources to inform future research directions or identify gaps that warrant further investigation. It is anticipated that this review will also provide unique insights into how bibliometric analyses of CAIM literature are conducted, including the type of methodology used (such as performance analysis metrics or science mapping techniques), the number and types of outcomes reported, and how bibliometric information is presented (e.g., narratively, graphically).
As the number of scientific publications has grown exponentially in the last fifty years, bibliometrics has become useful for quantitatively analysing publications on a particular topic.22 Generally, there is large variability in how bibliometric analyses are conducted, as there is no authoritative guideline on bibliometric methodology.13,16 Findings from the completed review can help identify whether there are any inconsistencies in the way that bibliometric analyses specifically on CAIM literature are conducted. This may be useful to inform future, standardized reporting guidelines for bibliometric analyses, generally, or with a potential focus on CAIM topics. Further, it may be expected that performance analysis metrics used to measure research constituents' scientific impact (e.g., h-index, g-index, total citations) differ between bibliometric analyses.23 Given the varying capability of different performance analysis metrics in capturing the scientific impact of a given research constituent,24 this review could reveal the extent to which bibliometric analyses of CAIM literature are effectively measuring scientific impact. Comparisons of different research constituents' scientific impact are further complicated by how average values of bibliometric indicators often differ between disciplines (e.g., molecular biology, nursing).22[27] Bibliometric methodology can be influenced by database changes such as the indexing of new journals or articles.28 Due to limitations of visualization software such as VOSviewer, often only the Scopus or Web of Science databases can be searched, which cover most but not all databases.29
If the findings of this scoping review reveal similar limitations of bibliometric analyses conducted on CAIM literature, these limitations could potentially drive investigation into techniques that will improve the analysis of bibliometric results. Additionally, there are no known critical appraisal tools for bibliometric analyses. Concerns have been expressed over a lack of knowledge of good practices when conducting bibliometric analyses.30 While this is outside the scope of the present review, future research may investigate assessment tools to evaluate the quality of published bibliometric analyses. The identification of a group of bibliometric analyses in this review may potentially serve as a test set for the development and investigation of quality indicators.
Strengths and limitations
Strengths of this study will include adherence to the JBI methodology for scoping reviews17 and the use of a comprehensive systematic search strategy19 across several databases to identify eligible articles. Another strength is that screening and data extraction will be conducted in duplicate, significantly reducing bias. There are some limitations expected in this review. By only including studies written in English, we could be missing important international work. For example, Chinese databases may contain a higher volume of CAIM articles but cannot be searched in this study. This is especially relevant as some forms of CAIM may be practiced more frequently in non-English-speaking regions of the world, such as traditional Chinese medicine in China.31,32 Additionally, the reported findings are expected to be descriptive in nature, such as the frequencies of the types of CAIMs examined in bibliometric studies or the frequencies of studies that engaged in science mapping versus performance analysis techniques. This makes it difficult to extrapolate themes or correlates, such as which CAIMs are effective or which bibliometric techniques are preferable.
Extended data
Open Science Framework: Characteristics of Bibliometric Analyses of Complementary, Alternative, and Integrative Medicine Literature: A Scoping Review. https://doi.org/10.17605/OSF.IO/JSQWY.33 This project contains the following extended data: Appendix A - Search Strategies_Jan0923.docx (search strategies for the MEDLINE, EMBASE, PsycINFO, AMED, CINAHL, Scopus and Web of Science databases).
Reporting guidelines
Open Science Framework: PRISMA-ScR checklist for 'Characteristics of bibliometric analyses of the complementary, alternative, and integrative medicine literature: A scoping review protocol.' https://doi.org/10.17605/OSF.IO/JSQWY.33 Data are available under the terms of the Creative Commons Attribution 4.0 International license (CC-BY 4.0).
Open Peer Review
Zhengwei Huang
Jinan University, Guangdong, China

Thank you very much for inviting me to review this Study Protocol. I noticed that this paper entitled "Characteristics of bibliometric analyses of the complementary, alternative, and integrative medicine literature: A scoping review protocol" was a revised version. It proposed a detailed research protocol for the bibliometric scoping review of CAIM literature. The background was clearly demonstrated, and the methodology was acceptable. The other sections, like the discussion, were okay. Based on my personal research experience on bibliometric analysis, I supposed that the protocol was probably feasible. Most importantly, the authors had responded well to the previous comments, and made proper revisions. Therefore, basically it can be Approved for indexing. An expectation is that, as the authors stated in the Dissemination section, "the findings of this review will be disseminated in scientific journals", we can see the full report as soon as possible.
Is the rationale for, and objectives of, the study clearly described? Yes
Are the datasets clearly presented in a useable and accessible format? Yes
Competing Interests: No competing interests were disclosed.
Reviewer Expertise: Bibliometrics; Nanomedicine; Drug delivery; Ferroptosis

I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.
Is the rationale for, and objectives of, the study clearly described? Yes
Are the datasets clearly presented in a useable and accessible format? Yes
Competing Interests: No competing interests were disclosed.
Reviewer Expertise: chiropractic, CAIM
I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.
Version 1
Reviewer Report 14 August 2023

https://doi.org/10.5256/f1000research.143074.r163736

© 2023 Trager R. This is an open access peer review report distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Robert Trager
Connor Whole Health, University Hospitals Cleveland Medical Center, Cleveland, OH, USA

Overview

I congratulate the authors on an extremely well-written, thorough, and clear protocol for a scoping review of bibliometric studies on CAIM. They adequately justify the rationale for the study and describe how their findings may impact the CAIM literature. The methodology is concise yet descriptive. I also appreciated that open-access files were included on OSF, along with an impressive search strategy. I have no major comments regarding the protocol, and only a couple of minor comments.
Minor comments
The authors note that their study will help identify "what health conditions these CAIMs target," and in the abstract state "health condition targeted." While this is certainly OK, I believe that the language could be slightly altered in these sections of text. I expect that the authors may encounter bibliometric studies of CAIM which describe not only the treatment of certain conditions, but also allude to a diagnostic, clinical reasoning, or case management process, or the use of CAIM for preventative purposes among individuals who are already healthy (e.g., "wellness"). For example, in our bibliometric study of chiropractic case reports, we found an increasing trend in studies describing the diagnosis of vascular disorders. While I am less familiar with other fields, I imagine a similar phenomenon may be noted for CAIM-related professions wherein providers have a broad scope of practice, requisite on diagnosis, such as osteopathy or physical therapy. I think the protocol could therefore be altered slightly to change "target(ed)" to "manage(d)" or some other language that is broader to reflect more than just treatment, but rather an overall management of the patient.
The authors describe CAIM as "therapies" throughout the manuscript. I think they have the liberty to describe it this way, yet I would caution them that in some instances the CAIM therapy is distinct from the branch of providers that often use that therapy. For example, in my field of chiropractic, chiropractors often use spinal manipulation, yet chiropractors also perform diagnosis and referral in their management of health conditions and sometimes omit spinal manipulation altogether. One might also consider that a single CAIM therapy could be provided by several types of practitioners. To continue with the above example, osteopaths and physical therapists also use spinal manipulation. The authors could have an a priori method for handling how they categorize CAIM therapies versus practitioner types. However, given that this may be confusing and/or unnecessary to establish before seeing the results, the authors could also describe the categorization of CAIM therapies versus provider types as an iterative process, subject to change if there is overlap between the two.
Comments for clarity
Data extraction -The phrase "We seek to conduct a scoping review" is redundant and could be deleted.
This is totally optional as it's not directly related to the manuscript but may be helpful for other readers - on OSF I could not preview the two Appendix files and had to download them to be able to see them. Consider uploading the files as PDFs on OSF so they can be viewed within an internet browser. However, I recommend also keeping the original Word documents on OSF.
Is the rationale for, and objectives of, the study clearly described? Yes

Is the study design appropriate for the research question? Yes
Are sufficient details of the methods provided to allow replication by others? Yes
Are the datasets clearly presented in a useable and accessible format?
Not applicable
Competing Interests: No competing interests were disclosed.
Reviewer Expertise: chiropractic, CAIM

I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard, however I have significant reservations, as outlined above.
We also find the reviewer's point interesting and acknowledge that different CAIM practices or techniques (e.g., spinal manipulation) may be employed by different types of practitioners. However, we anticipate that not all bibliometric analyses included in our review will include the same types of metrics. Accordingly, not all bibliometric analyses may report the types of providers that are providing these CAIM therapies in the literature.
In our data extraction form, we will report whether a paper has included the "types of providers" as a metric in their bibliometric analysis. However, we will unfortunately be unable to extract extensive details on which providers would be providing these therapies. This is for the sake of consistency across all the studies, and is in line with JBI scoping review methods, which aim to map and provide a broad overview of the literature. To your point, it would be difficult to devise methods for categorizing CAIM therapies versus provider types as described.
Regarding the comments for clarity, we agree that the phrase "We seek to conduct a scoping review" is redundant and have removed it from the protocol. Additionally, we will upload PDF versions of the two Appendix files, alongside the Microsoft Word document files, in Open Science Framework.
different from other medical/health sub-fields? Are there particular pitfalls? And isn't it more or less necessary to compare these studies with other areas to be able to say something relevant about the patterns, at least indirectly? (I have not checked the details, but there seem to be a few similar scoping reviews on other medical sub-fields.) The search procedure/extraction seems appropriate, as does the choice of databases. The characteristics/information that will be extracted are also relevant in relation to the scope of the study.
A minor detail: the formulations about the lack of standardized procedures in bibliometric analysis just seem blunt. Bibliometrics is, as you know, a diverse field of research methods and there are never-ending discussions about limitations, potential developments (etc.), but there are certainly standardized procedures.
Looking forward to the results!

Is the rationale for, and objectives of, the study clearly described? Partly
Are sufficient details of the methods provided to allow replication by others? Yes
Are the datasets clearly presented in a useable and accessible format?
Not applicable
Competing Interests: No competing interests were disclosed.
Reviewer Expertise: Sociology, bibliometrics, medical sociology

I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard, however I have significant reservations, as outlined above.
capture interesting trends in the field, such as the publications over time of different CAIM therapies, or the countries that contribute most to particular research topics. This may be useful to CAIM researchers to identify gaps in the CAIM field and where research is headed. We have stated these points in the last paragraph of the introduction section.
Further, analyzing the characteristics of bibliometric analyses of CAIM literature can allow us to identify commonly and less commonly used bibliometric analysis metrics across studies. This could be of interest to inform the development of reporting guidelines for bibliometric analyses. As stated in the introduction section, while there are guidelines for how to conduct bibliometric analyses, there is no consensus in the literature on what should be included. We anticipate that articles may have diverse or unique bibliometric variables, or may even range in the number of variables used. As stated above, this could be useful to improve our understanding of how bibliometric analyses are conducted, and can help to inform future reporting guidelines on this methodology.
Regarding your suggestion on comparing characteristics of CAIM bibliometric analyses to other fields, we agree this would be interesting. We have not been able to locate systematic or scoping reviews that report the characteristics of bibliometric analyses (e.g., performance analysis or science mapping techniques used, main findings, health conditions managed) of CAIM literature or of other medical/health sub-fields. Accordingly, it is difficult to ascertain how our findings would compare to other medical/health sub-fields. We would be interested if the reviewer is able to share any identified review articles exploring the characteristics of bibliometric analyses in other medical fields.
Regarding the standardized procedures of bibliometrics, we acknowledge that bibliometric analyses can be conducted in diverse ways, and there are some guidelines on the types of metrics that can be used (e.g., Linnenleucke et al., 20191; Donthu et al., 20212). For example, we plan to use the paper by Donthu et al.2 to inform our scoping review. However, there is a lack of consensus in the literature on what baseline information should be required as part of a bibliometric analysis, unlike other research designs (e.g., PRISMA guidelines and JBI methods have been widely adopted for the reporting and conduct of systematic reviews). We have rephrased the sentence in the 5th paragraph of the Introduction section to reflect that standardized procedures exist, but that there is no universal standard or consensus in the literature on what a bibliometric analysis should at minimum entail.
IFITM3 inhibits virus-triggered induction of type I interferon by mediating autophagosome-dependent degradation of IRF3
Interferon-induced transmembrane protein 3 (IFITM3) is a restriction factor that can be induced by viral infection and interferons (IFNs). It inhibits the entry and replication of many viruses, which are independent of receptor usage but dependent on processes that occur in endosomes. In this study, we demonstrate that IFITM3 plays important roles in regulating the RNA-virus-triggered production of IFN-β in a negative-feedback manner. Overexpression of IFITM3 inhibited Sendai virus-triggered induction of IFN-β, whereas knockdown of IFITM3 had the opposite effect. We also showed that IFITM3 was constitutively associated with IRF3 and regulated the homeostasis of IRF3 by mediating the autophagic degradation of IRF3. These findings suggest a novel inhibitory function of IFITM3 on the RNA-virus-triggered production of type I IFNs and cellular antiviral responses.
INTRODUCTION
As the first line of host defense, the innate immune system counters viral infection by expressing a number of intrinsic antiviral proteins, triggering the production of interferons (IFNs) and facilitating the activation of adaptive immunity. Intrinsic antiviral proteins, which are constitutively expressed, directly restrict the invasion or replication of viruses. Most intrinsic antiviral proteins can also be induced by IFNs. The expression of IFNs depends strictly on the recognition of pathogens by endosomal Toll-like receptors (TLRs) or cytosolic viral sensors, such as retinoic acid-inducible gene I (RIG-I) and cyclic GMP-AMP synthase. These receptors trigger the transduction of different signals, leading to the induction of type I IFNs (including IFN-α and IFN-β family members) and proinflammatory cytokines. Type I IFNs activate the Janus kinase-signal transducer and activator of transcription pathway and initiate the transcription of IFN-stimulated genes (ISGs). The products of these genes inhibit viral replication, eradicate virus-infected cells and facilitate the activation of antiviral adaptive immunity.1,2 TLRs and RIG-I-like receptors (RLRs) detect RNA virus infection. For example, TLR3 recognizes viral double-stranded RNA (dsRNA) released by infected cells and triggers TIR-domain-containing adapter-inducing interferon-β-mediated signaling pathways, whereas TLR7 and TLR8 recognize viral single-stranded RNA and activate MyD88-dependent signaling.3,4 The RLRs, such as RIG-I and MDA5, recognize cytoplasmic viral RNA through their C-terminal RNA helicase domains, and then recruit the downstream adapter protein virus-induced signaling adaptor (VISA, also known as MAVS, IPS-1 and Cardif) through their CARD domains.5-8 Both TLR- and RLR-triggered signaling cascades activate the downstream kinases TBK1-IKKε and TAK1-IKKβ, leading to the activation of interferon regulatory factors (IRFs) and NF-κB.
These activated transcription factors collaboratively trigger the transcription of type I IFN genes.9,10

Interferon-induced transmembrane (IFITM) proteins are intrinsic antiviral restriction factors. Like other intrinsic factors, the IFITMs can be further induced by IFNs.11,12 IFITM3, a member of the IFITM family, has been extensively studied1 because it plays an important role in preventing endocytosed viral particles from accessing the host cytoplasm. Many viruses are inhibited by IFITM3, including Dengue virus, West Nile virus, Influenza A virus (IAV), flaviviruses, severe acute respiratory syndrome coronavirus and Vesicular stomatitis virus (VSV).13-15 These are all enveloped viruses and enter the cell by endocytosis. It has been reported that IFITM3 restricts viruses independently of their receptor usage but in a manner dependent on processes that occur in the endosome. It prevents membrane hemifusion mediated by envelope proteins of IAV and restricts the early steps of IAV replication.16 It also restricts Human immunodeficiency virus 1 (HIV-1) infection by antagonizing the envelope glycoprotein.17 IFITM3 also restricts the entry of reoviruses, which are nonenveloped viruses and utilize endosome-dependent cell entry mechanisms.14 Although the role of IFITM3 in antiviral immunity has been investigated intensively, here we report our novel observation that IFITM3 negatively regulates the virus-induced production of type I IFNs. Sendai virus (SeV), which fuses with the cell surface, is reportedly not restricted by IFITM3.18 In this study, we demonstrate that IFITM3 negatively regulates SeV-induced production of type I IFNs by targeting IRF3. Thus, we report a distinctive mechanism of IFITM3 in antiviral immunity.
MATERIALS AND METHODS

Transfection and reporter assays

HEK293 cells or HeLa cells were transfected using the calcium phosphate precipitation method or FuGENE Transfection Reagent (Roche, Basel, Switzerland). The pRL-TK Renilla luciferase reporter plasmid was added to each transfection reaction to normalize for the transfection efficiency. The luciferase assays were performed using the Dual-Luciferase Reporter Assay System (Promega, Madison, WI, USA).22

Coimmunoprecipitation and immunoblot analyses

Transfected HEK293 cells from 10-cm dishes were lysed in 1 ml of Nonidet P-40 lysis buffer (1% Nonidet P-40, 150 mM NaCl, 1 mM EDTA, 20 mM Tris-HCl, pH 7.4-7.5, 10 μg/ml leupeptin, 10 μg/ml aprotinin and 1 mM phenylmethylsulfonyl fluoride). For each immunoprecipitation reaction, 0.9 ml of the cell lysate was incubated with 25 μl of Gamma Bind G Plus-Sepharose and 0.5 μg of antibody for immunoprecipitation at 4°C for 3 h. The Sepharose beads were intensively washed with lysis buffer containing 0.5 M NaCl. The precipitates were then subjected to SDS-PAGE and immunoblot analysis.
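The dual-luciferase normalization implied in the reporter assays (firefly reporter activity divided by Renilla pRL-TK activity to correct for transfection efficiency, then expressed relative to an unstimulated control) can be sketched as follows. The numbers are illustrative only, not data from this study.

```python
# Hedged sketch of dual-luciferase normalization: firefly reporter counts are
# divided by Renilla (pRL-TK) counts to correct for transfection efficiency,
# and the ratio is then expressed relative to the unstimulated control.
# All values below are invented for illustration.

def normalized_activity(firefly, renilla):
    return firefly / renilla

control = normalized_activity(firefly=12000, renilla=4000)     # mock-infected
stimulated = normalized_activity(firefly=90000, renilla=5000)  # e.g., SeV-infected

fold_induction = stimulated / control
print(round(fold_induction, 1))  # reporter fold activation over control
```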
ELISA
The supernatants of the cell culture medium were analyzed using a human IFN-β ELISA kit (PBL) following the protocol recommended by the manufacturer.
In vitro binding assay

For the glutathione S-transferase (GST) pull-down assay, GST fusion constructs were transformed into the bacterial strain BL21. The fusion proteins were induced with 0.1 mM IPTG at 18°C overnight and purified with Glutathione-Sepharose 4B (GE Healthcare) according to the manufacturer's instructions. The purified GST-tagged proteins were then incubated with HEK293 cell extracts. The protein complex was pulled down with Glutathione-Sepharose 4B beads and then subjected to western blot analysis.
Statistical analysis
Data were statistically analyzed using Student's t-test. P-values less than 0.05 were considered statistically significant.
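Purely to illustrate the calculation behind an unpaired Student's t-test (pooled variance), a stdlib-only sketch with invented sample values follows; in practice the resulting |t| is compared against the two-tailed critical value for n1 + n2 - 2 degrees of freedom (about 2.776 for df = 4 at α = 0.05), or a statistics package is used to obtain the P-value directly.

```python
# Stdlib-only sketch of an unpaired Student's t-test (pooled variance).
# Sample values are invented for illustration, not data from this study.
import math
from statistics import mean, variance

def students_t(a, b):
    """Pooled-variance t statistic for two independent samples."""
    n1, n2 = len(a), len(b)
    pooled = ((n1 - 1) * variance(a) + (n2 - 1) * variance(b)) / (n1 + n2 - 2)
    return (mean(a) - mean(b)) / math.sqrt(pooled * (1 / n1 + 1 / n2))

control = [1.0, 1.2, 0.9]   # e.g., relative reporter activity, mock
treated = [2.8, 3.1, 2.9]   # e.g., relative reporter activity, stimulated

t = students_t(treated, control)
print(round(t, 2))  # compare |t| against the critical value for df = 4
```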
RESULTS
Overexpression of IFITM3 inhibits SeV-triggered induction of IFN-β

IFITM3 blocks the infection of many viruses that require endosomal entry pathways.14 However, whether IFITM3 affects virus-induced production of IFN is still unknown. SeV is not inhibited by IFITM3 (Supplementary Figure S1a), possibly because it enters cells by fusing to the cell surface.18 Thus, we infected HEK293 cells with SeV and performed luciferase assays to identify the role of IFITM3 in RLR-mediated antiviral signaling. Overexpression of IFITM3 inhibited SeV-induced activation of the IFN-β promoter, ISRE and NF-κB in a dose-dependent manner in HEK293 cells (Figure 1a and Supplementary Figure S1b). Similar results were observed in HeLa cells (Supplementary Figure S1c), suggesting that the effects of IFITM3 on SeV-triggered antiviral signaling were not cell-type-specific. Overexpression of IFITM3 had no marked effect on IFN-γ-triggered activation of the IRF1 reporter in either HEK293 or HeLa cells (Figure 1b and Supplementary Figure S1d) or on TLR3 signaling (data not shown), suggesting that IFITM3 specifically inhibits virus-triggered activation of the RLR pathway. Although IFITM1 and IFITM2 have high sequence identities with IFITM3 (61% and 90%, respectively), they had no marked effect on SeV-induced activation of IFN-β and ISRE (Figure 1c). We extended the analysis by evaluating the transcription of IFNB1 and its downstream genes. The SeV-triggered or transfected-poly(I:C)-induced transcription of the IFNB1, RANTES and ISG56 genes was inhibited by overexpression of IFITM3 (Figures 1d and e). SeV- or poly(I:C)-induced secretion of IFN-β into the medium was also impaired by overexpression of IFITM3 (Figures 1f and g). These results suggest that IFITM3 negatively regulates cellular antiviral signaling.
Knockdown of IFITM3 potentiates virus-triggered induction of IFN-β and inhibits viral replication
To confirm the role of endogenous IFITM3 in RLR-mediated production of type I IFNs, we prepared two RNAi constructs and determined their effects on the knockdown of IFITM3. As shown in Figure 2a, #2 IFITM3-RNAi transfection reduced IFITM3 levels to 10% of the control sample (P<0.01). The #1 RNAi transfection reduced IFITM3 levels to 60% of the control sample (Figure 2a). In the reporter assays, knockdown of IFITM3 by RNAi transfection significantly potentiated SeV-triggered activation of the IFN-β promoter, ISRE and NF-κB, but not IFN-γ-triggered activation of the IRF1 promoter (Figures 2b and c; Supplementary Figure S1e). The degree of potentiation correlated with the knockdown efficiency of the corresponding RNAi construct (Figure 2b and Supplementary Figure S1e). Because the effect of the #2 RNAi construct was better than that of the #1 RNAi construct, we selected the IFITM3-RNAi#2 construct for the experiments described below. Knockdown of IFITM3 by the RNAi construct (IFITM3i) potentiated SeV- and transfected-poly(I:C)-induced transcription of IFNB1, RANTES and ISG56 (Figures 2d and e), as well as the secretion of IFN-β into the medium (Figures 2f and g).
Next, we evaluated the effect of IFITM3 on viral replication. The conditioned medium from cultured HEK293 cells, which had been transfected with the IFITM3i construct and poly(I:C), was collected and used to treat Vero cells as previously described. 23 The treated Vero cells were then infected with GFP-VSV or GFP-NDV, and replication of the GFP-labeled viruses was evaluated by direct observation under the microscope. The green fluorescence representing VSV or NDV particles decreased markedly in cells treated with the conditioned medium from IFITM3-knockdown cells compared with the control groups (Figure 2h), suggesting that IFITM3 plays an inhibitory role in virus-triggered induction of IFN-β and subsequently inhibits viral replication.
IFITM3 regulates virus-triggered signaling at the level of IRF3
To determine the molecular order of IFITM3 in the virus-triggered signaling pathway, we examined the effects of IFITM3 on the transcription of IFNB1 mediated by components of the virus-triggered pathway. As shown in Figure 3a, IFITM3 inhibited the transcription of IFNB1 induced by overexpression of RIG-I-CARD, VISA, TBK1 and IRF3-5D. Consistent with this finding, knockdown of IFITM3 by RNAi potentiated the transcription of IFNB1 mediated by overexpression of IRF3 (Figure 3b). Because activation of the transcription factor IRF3 is a critical event in antiviral signaling, we next determined whether the activation status of IRF3 was affected by IFITM3. SeV-induced dimerization and nuclear translocation of IRF3 were increased in IFITM3-knockdown cells (Figure 3c and Supplementary Figure S2a). The total amount of IRF3 also increased dramatically when IFITM3 was knocked down (Figure 3c). To confirm this finding, we calculated the ratio of the amount of IRF3 dimer to the total amount of IRF3 by measuring the grayscale values of the bands in Figure 3c. The results showed that knockdown of IFITM3 enhanced the dimerization of IRF3 (Figure 3d). However, the transcription level of IRF3 remained unchanged when IFITM3 was knocked down, suggesting that IFITM3 did not affect IRF3 expression at the level of transcription (Figure 3e). To determine whether IFITM3 affected the protein levels of IRF3, we evaluated the expression of IRF3 after stimulation with SeV. The protein levels of IRF3 greatly decreased when IFITM3 was overexpressed, whereas knockdown of IFITM3 enhanced the IRF3 levels even without SeV infection (Figures 3f and g). Consistently, phosphorylation of IRF3 dramatically decreased when IFITM3 was overexpressed and increased when IFITM3 was knocked down (Supplementary Figure S2b). In contrast, the expression levels of IFITM3 had no marked effect on the other proteins in this pathway, such as TBK1 or TBK1 phosphorylation (Figures 3f and g and Supplementary Figure S2b).
IFITM1 or IFITM2 had no marked effect on the basal level of IRF3 (Figure 3h). Collectively, these data suggest that IFITM3 regulates antiviral signaling at the level of IRF3 and participates in the regulation of the basal expression of IRF3.
IFITM3 associates with IRF3
To explore how IFITM3 regulates the expression of IRF3, we performed coimmunoprecipitation to determine whether IFITM3 interacted with IRF3. As shown in Figure 4a, IFITM3 interacted constitutively with endogenous IRF3, but not VISA, with or without SeV stimulation. Furthermore, IRF3-5A, which is the phosphorylation-deficient mutant of IRF3, was also found to interact with IFITM3 (Figure 4b), suggesting that the interaction between IFITM3 and IRF3 did not rely on the phosphorylation of IRF3. To further confirm this interaction, we performed an in vitro binding assay. Bacterially expressed and affinity-purified GST-IFITM3 fusion protein was incubated with ectopic IRF3- or IRF3-5A-expressing whole-cell extracts. As shown in Figure 4c, both IRF3 and IRF3-5A were pulled down by GST-IFITM3, suggesting that the interaction between IFITM3 and IRF3 was constitutive and independent of the activation status of IRF3. Consistent with this finding, a confocal immunofluorescence analysis showed that IFITM3 co-localized with IRF3 in the cytoplasm of HeLa cells (Supplementary Figure S2c). Domain mapping analysis indicated that the N terminus of IRF3 (amino acids 1-140) was responsible for its interaction with IFITM3 (Figure 4d). Because IFITM3 was induced by IFNs, we analyzed the kinetic pattern of IFITM3 and IRF3. The expression of IFITM3 was induced while the expression of IRF3 was decreased following IFN-α or poly(I:C) transfection treatment (Figure 4e). These data suggest that IFITM3 associates with IRF3 and possibly mediates the degradation of IRF3.
IFITM3 mediates autophagic degradation of IRF3
IRF3 degradation is mediated by proteasomal degradation and/or autophagy. [24][25][26] To investigate which route accounts for the IFITM3-mediated loss of IRF3, cells were treated with the proteasome inhibitor MG132 or the autophagy inhibitor 3-MA. MG132 treatment had little effect on the degradation of IRF3 caused by overexpression of Flag-IFITM3-M (Figure 5a). However, 3-MA treatment could inhibit the degradation of IRF3 caused by overexpression of Flag-IFITM3-M (Figure 5a), suggesting that the IFITM3-mediated degradation of IRF3 was autophagy-dependent but not proteasome-dependent. To confirm this finding, we first assessed whether IRF3 was degraded through autophagy in our system. Silencing autophagy-related 7 (ATG7), a key factor in the synthesis of the autophagosome precursor, leads to the inhibition of autophagy. 27 Mouse atg7-deficient (atg7−/−) MEF cells and control wild-type (WT) cells were treated with cycloheximide (CHX), an inhibitor of new protein synthesis. Immunoblotting analysis showed that the level of IRF3 was higher in atg7−/− cells than in WT cells after treatment with CHX (Supplementary Figure S3b), confirming previous reports that autophagy is one of the turnover mechanisms of IRF3. 26 It has been reported that overexpression of IFITM3 can induce autophagy, including LC3 puncta formation and lipidation. 28,29 Consistent with this, IFITM3 dose-dependently promoted the conversion of LC3-I to LC3-II, an indicator of enhanced autophagy (Figure 5b). A confocal immunofluorescence analysis showed that IFITM3 colocalized with autophagic components such as LC3, ATG5 and Beclin1 (Figures 5c and d). Moreover, IFITM3 or IRF3 interacted with LC3 and Beclin1 (Figure 5e), suggesting that LC3 and Beclin1 are involved in IFITM3-mediated autophagic degradation of IRF3.
Overexpression of IFITM3 changed the subcellular localization of IRF3 from a dispersed distribution in the cytosol to autophagosomes, as indicated by LC3-GFP (Figure 5f), suggesting that IFITM3 mediates the autophagic degradation of IRF3. Furthermore, viral infection induced the majority of IRF3 to translocate to the nucleus, but a portion of IRF3 colocalized with LC3, suggesting that SeV could indeed induce the translocation of cytosolic IRF3 into autophagosomes (Figure 5f). In HEK293 cells treated with CHX, IRF3 was degraded much more slowly when IFITM3 was silenced in comparison with the control cells (Figure 5g), suggesting that IFITM3-mediated autophagic degradation is critical for the turnover of IRF3.
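The CHX-chase logic above (slower IRF3 loss when IFITM3 is silenced) can be pictured with a simple first-order decay model. The rate constants below are illustrative assumptions, not values measured in the study.

```python
import math

def remaining_fraction(k_per_h, hours):
    """Fraction of a protein remaining after a CHX chase, assuming first-order decay."""
    return math.exp(-k_per_h * hours)

def half_life_h(k_per_h):
    """Half-life implied by a first-order decay constant (per hour)."""
    return math.log(2) / k_per_h
```

With a hypothetical decay constant of 0.3/h in control cells versus 0.1/h after IFITM3 knockdown, roughly 17% versus 55% of IRF3 would remain at 6 h, qualitatively matching a blot in which the knockdown lane retains more IRF3.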
DISCUSSION
IFITM3 has been extensively studied as a restriction factor that confers broad resistance to viral infection. 13,14 The enveloped and nonenveloped viruses that are restricted by IFITM3 utilize an endosomal entry mechanism. 30,31 IFITM3 restricts their replication by blocking virus-endosome fusion. 29,32 A possible underlying mechanism is that IFITM3 induces the accumulation of cholesterol in multi-vesicular bodies and in late endosomes, thus impairing the membrane fusion of intraluminal virion-containing vesicles and endosomes. 33 IFITM3 is also strongly retained in resident memory CD8 + T cells to facilitate cell survival and enhance cell resistance to infection with influenza viruses. 34 Here we report a novel role of IFITM3 in regulating the virus-induced production of type I IFNs. Using SeV or the dsRNA analog poly(I:C) as the agonist, we have shown that overexpression of IFITM3, but not IFITM1 or IFITM2, inhibits cytosolic-RNA-induced activation of the IFN-β promoter and the transcription of IFNB1 and its downstream antiviral genes, whereas the knockdown of IFITM3 has the opposite effects. Knockdown of IFITM3 enhances the dimerization and nuclear translocation of IRF3, and it promotes the antiviral signaling pathway against viral replication. These results suggest that IFITM3, the expression of which is enhanced by viral infection and type I IFNs, regulates the virus-triggered antiviral response in a negative-feedback manner. Many ISGs not only block viral replication during different phases of the viral replication cycle, but they also regulate the production of IFNs to maintain suitable antiviral responses. For example, ISG56 negatively regulates cellular antiviral responses by disrupting the interactions between mediator of IRF3 activation and VISA or TBK1. 35 Type I IFN leads to the induction of RBCK1, which can induce the subsequent degradation of IRF3. 
36 PCBP2, another ISG, acts as a scaffold to facilitate AIP4-mediated degradation of VISA, which is a critical mechanism in the negative regulation of RLR signaling and is also apparent in Figure 4a. 37 By contrast, DDX60 enhances RLR-mediated production of IFNs by binding to RIG-I and promoting the binding of RIG-I to dsRNA. ISG15 regulates type I IFN signaling by modulating the ISGylation of RIG-I and IRF3. [38][39][40] Our study adds IFITM3 to the ISGs that negatively regulate virus-induced production of type I IFNs while simultaneously inhibiting viral replication. As is well known, excessive immune responses lead to pathological tissue damage and can even induce autoimmune reactions. The inhibitory effects of IFITM3 on both IFN production and viral replication are undoubtedly important for maintaining a suitable level of innate antiviral responses.
The role of IFITM3 in restricting viruses is considered to be closely related to its cellular localization. IFITM3 partially localizes to acidic compartments, including late endosomes expressing Rab5 and Rab7, lysosomes expressing LAMP1, and autolysosomes expressing LC3 and CD63. 28,29 IFITM3 restriction of SARS-CoV is circumvented when trypsin digestion is used to trigger membrane fusion at or near the plasma membrane rather than within the acidic cellular compartment. 32 In this study, we found that the role of IFITM3 in regulating IFN signaling is also closely related to its cellular localization. We observed that overexpression of IFITM3 led to an increase in LC3 conversion and the formation of LC3 puncta (Figures 5b and f). Based on the substantial increase in IRF3 when IFITM3 is depleted (Figure 3c), we demonstrated that IRF3 and IFITM3 interact constitutively with one another and colocalize in the same autophagosome compartments (Figures 4a-d and Supplementary Figure S2c). Because the autophagy inhibitor 3-MA rescued the degradation of IRF3 caused by IFITM3 (Figure 5a), we confirmed that autophagy is one mechanism of IRF3 homeostasis and that overexpression of IFITM3 enhances the process of autophagy, as evidenced by the conversion of LC3-I to LC3-II. Finally, and importantly, we have provided evidence that IRF3 is concentrated in autolysosomes expressing LC3 when IFITM3 is overexpressed (Figure 5f). Upon stimulation by viruses, high levels of IFITM3, induced by IFNs, enhance the degradation of IRF3 by autophagy and thus regulate antiviral immune responses via a negative-feedback mechanism. Although in this study we have shown that IFITM3 negatively regulates the activation of NF-κB (Supplementary Figures S1b and S1e), it is still unclear whether this function is related to autophagy, necessitating further study.
In summary, our study reveals a novel function of IFITM3 in antiviral immunity. IFITM3 not only restricts many viruses but also negatively regulates type I IFN signaling by enhancing the autophagic degradation of IRF3. Our findings demonstrate a previously undescribed role of IFITM3 in regulating cellular antiviral responses.
Preservation of muscle mass as a strategy to reduce the toxic effects of cancer chemotherapy on body composition
Purpose of review Cancer patients undergoing chemotherapy often experience very debilitating side effects, including unintentional weight loss, nausea, and vomiting. Changes in body composition, specifically lean body mass (LBM), are known to have important implications for anticancer drug toxicity and cancer prognosis. Currently, chemotherapy dosing is based on calculation of body surface area, although this approximation does not take into consideration the variability in lean and adipose tissue mass. Recent findings Patients with depletion of muscle mass present higher chemotherapy-related toxicity, whereas patients with larger amounts of LBM show fewer toxicities and better outcomes. Commonly used chemotherapy regimens promote changes in body composition, primarily by affecting skeletal muscle, as well as fat and bone mass. Experimental evidence has shown that pro-atrophy mechanisms, abnormal mitochondrial metabolism, and reduced protein anabolism are primarily implicated in muscle depletion. Muscle-targeted pro-anabolic strategies have proven successful in preserving lean tissue in the occurrence of cancer or following chemotherapy. Summary Muscle wasting often occurs as a consequence of anticancer treatments and is indicative of worse outcomes and poor quality of life in cancer patients. Accurate assessment of body composition and preservation of muscle mass may reduce chemotherapy toxicity and improve the overall survival.
INTRODUCTION
Despite significant progress in the development of novel cancer treatments, chemotherapy is often utilized for most tumors irrespective of its associated toxicities [1]. It is now clear that chemotherapy plays a direct role in the loss of muscle mass and muscle strength in cancer patients (often referred to as 'cachexia'), a condition that can persist for months to years following remission [2][3][4][5][6][7][8][9]. Notably, patients suffering from cachexia-related symptoms are often unable to complete treatment regimens and may require delays in treatment, dose limitation, or discontinuation of therapy [10,11]. Several studies have been conducted with the goal of identifying strategies to minimize or prevent cancer therapy toxicities [12]. This review will highlight the mechanistic effects of cancer treatments on body composition and provide potential strategies proposed to limit chemotherapy-related toxicities in cancer patients, including ways to preserve lean body mass (LBM). Anticancer drugs exert biological activities that lead to ablation of cancer cells but, at the same time, are responsible for dramatic toxicities in the host body. Among these side effects, nausea, vomiting, diarrhea, anorexia, body weight changes, and anemia are the most relevant [13]. Notably, muscle weakness and fatigue are some of the most common and distressing symptoms associated with cancer and chemotherapy [14,15], and it is estimated that over 70% of patients receiving cancer treatments will present symptoms associated with these conditions [16,17,18,19,20]. Because of this, chemotherapy-dependent effects on body composition and on musculoskeletal function have recently become a subject of interest [3]. Indeed, muscle dysfunction in cancer patients may affect the overall quality of life, including productivity and physical functioning, and this may be further intensified following chemotherapy [21][22][23][24][25][26][27].
In this regard, it has been shown that administration of chemotherapy promotes depletion of skeletal muscle mass in patients affected with advanced tumors, including lung, breast, colorectal, prostate, and nonsmall-cell lung (NSCLC) cancers, and this condition negatively impacts physical function by causing impaired muscle strength (such as slower chair-rise time and reduced hand-grip force), as well as joint dysfunction [
PROPER ASSESSMENT OF BODY COMPOSITION IS ESSENTIAL TO PREVENT CHEMOTHERAPY TOXICITY
Experimental and clinical findings suggest that body composition and, in particular, skeletal muscle mass play a pivotal role in the response to chemotherapy and in the prevention of its associated toxicities, as well as in ultimately predicting outcomes and survival of cancer patients. For instance, Du Bois and Du Bois [30] proposed a method to estimate pharmacokinetics and dosage of a drug by determining the body surface area (BSA) as a relation between height and weight, according to the formula BSA (m²) = ([height (cm) × weight (kg)]/3600)^1/2. Although not optimal, this method is still widely used, especially for dosing of drugs characterized by a low therapeutic index, as in the case of chemotherapeutics, and several modifications have been suggested to generate a better approximation of the BSA [31,32].
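As a worked example of the dosing formula quoted above (the function names and the 170 cm/70 kg patient are illustrative, not taken from the cited studies):

```python
import math

def bsa_m2(height_cm, weight_kg):
    """Body surface area per the formula quoted in the text: sqrt(height * weight / 3600)."""
    return math.sqrt(height_cm * weight_kg / 3600)

def bmi_kg_m2(height_cm, weight_kg):
    """Body mass index: weight (kg) divided by height (m) squared."""
    h_m = height_cm / 100
    return weight_kg / (h_m * h_m)
```

A 170 cm, 70 kg patient has a BSA of about 1.82 m² and a BMI of about 24.2 kg/m². Note that two patients with identical BSA can still differ widely in lean and fat mass, which is exactly the limitation the following paragraphs discuss.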
There remains, however, a potential limitation associated with this dosing method in that it does not take into account the considerable variation in BSA because of changes in fat mass. Indeed, it has been previously demonstrated that the assessment of BSA can overestimate or underestimate the correct drug dosing. This is particularly true for antineoplastic agents, most of which have a narrow therapeutic window, thus leading to low efficacy in case of underdosing or severe side effects in case of overdosing [33]. Along the same line, Chatelut et al. [34] provided evidence that the clearance of different chemotherapeutic agents often correlates poorly with the BSA, thus casting doubt on the effectiveness of this parameter for the dosing of chemotherapy. Another study by Prado et al. [35] also showed that cancer patients presenting with identical BSA may nonetheless show high variability in LBM, primarily because of significant changes in adipose tissue mass. These findings suggest that accurate body composition assessment is linked to chemotherapy toxicity and survival. Therefore, wasting of skeletal muscle mass, by constituting a smaller volume of distribution for anticancer drugs, may also lead to inadvertent overdosing and exacerbated toxicity. This hypothesis was further supported by more recent evidence showing that patients with a low amount of lean tissue at the time of cancer diagnosis were also more susceptible to developing side effects following chemotherapy administration [36]. Additionally, a higher prevalence of dose-limiting toxicities was also shown in a cohort of advanced renal cell carcinoma patients presenting muscle depletion and low lean tissue mass with respect to patients not affected by these conditions [37].
In order to address the concerns related to the use of BSA for chemotherapy dosing, alternative methods have been proposed, including the assessment of the ideal body weight or the BMI (i.e. weight adjusted for stature, kg/m²). However, all these weight-based metrics do not take into account body composition and the relative proportions and distributions of lean, fat and bone mass in the human body [38,39]. As body composition in cancer patients may be highly variable in terms of muscle and fat mass, as well as of distribution of adipose tissue between abdominal and subcutaneous compartments, these
KEY POINTS
Anticancer treatments severely affect body composition, primarily by causing muscle depletion, as well as loss of adipose tissues and bone mass.
Muscle wasting following administration of chemotherapy strongly affects the quality of life, leading to fatigue and reduced physical function, and predicts poor survival among cancer patients.
Accurate assessment of body composition and preservation of skeletal muscle mass represent powerful tools to reduce chemotherapy toxicity.
41]. This is of particular importance in patients with 'sarcopenic obesity,' a condition describing individuals who simultaneously present with high fat mass and low muscularity, resulting in increased risk for adverse outcomes in the occurrence of cancer [42,43].
Enhanced treatment-associated toxicity and increased mortality in patients affected with different types of cancer have been shown to directly correlate with changes in body composition, primarily muscle mass, and there is also evidence that the amount of adipose tissue may represent a useful predictor of outcomes. Indeed, data generated in patients with metastatic colorectal cancer (mCRC) receiving bevacizumab suggested that low visceral adipose tissue correlates with shorter survival and overall negative outcomes [44]. Sarcopenia is also an indicator of poor outcomes and greater toxicity in patients with nonmetastatic [45] and resectable stage I-III colorectal tumors [46], or in patients affected with mCRC and receiving palliative chemotherapy [47]. Loss of skeletal muscle or changes in skeletal muscle density (SMD) following systemic chemotherapy treatments were associated with poor survival in patients affected with diffuse large B-cell lymphoma [48], as well as foregut [49] and ovarian [50] cancers. Analogously, in a study conducted on lung cancer patients, chemotherapy treatment preceded the detection of decreased muscle mass and increased adipose tissue. In this context, sarcopenia was correlated with reduced tolerance to chemotherapy treatment and thought to predict a worse prognosis [51].
As expected, a retrospective analysis of advanced nonsquamous NSCLC patients who received platinum-based therapy in combination with bevacizumab demonstrated that weight gain during or after treatment is a reliable indicator of clinical benefit and improved survival [52]. In a retrospective study including 193 patients affected with unresectable pancreatic cancer showing significant loss of adipose tissue following administration of neoadjuvant treatment, gain in muscle mass was associated with an increased chance of resectability and better outcomes [53]. Together these findings indicate that proper assessment of body composition is an important factor to consider for the prevention of chemotherapy toxicity.
ANTI-CANCER DRUGS ARE ASSOCIATED WITH MUSCULOSKELETAL DYSFUNCTIONS
Preclinical investigations support a relationship between chemotherapy treatment and the loss of body weight, constituting both LBM and adipose tissue (i.e. cachexia) occurring in the majority of cancer patients. Le Bricon and colleagues were among the first to provide evidence of a link between the administration of chemotherapeutics [such as cyclophosphamide, 5-fluorouracil (5-FU), cisplatin, or methotrexate] and abnormal nitrogen balance in the muscle of tumor-bearing rats leading to significant loss of skeletal muscle mass. Notably, these drug-associated toxicities were exacerbated in the cancer hosts, despite the fact that tumor proliferation was effectively counteracted [54].
Further, several investigators provided the first mechanistic explanation for the loss of muscle mass observed in tumor hosts exposed to chemotherapeutics. On the basis of these findings, anticancer drugs (including cisplatin, irinotecan, Adriamycin, and etoposide) were shown to cause muscle wasting directly via activation of the NF-κB pathway and independently of the commonly implicated ubiquitin-proteasome system, or indirectly via production of pro-inflammatory cytokines, such as IL-1β, IL-6, and TNF, or by inducing oxidative stress and tissue injury [55][56][57]. Other independent investigations also proposed that the molecular mechanisms accounting for the loss of muscle size and strength in animals bearing cancer and/or exposed to chemotherapy were correlated with activation of pro-inflammatory pathways, down-regulation of anabolism and exacerbation of muscle proteolysis [58].
In the attempt to identify some of the mechanisms responsible for the development of cachexia following exposure to chemotherapy, we recently investigated the role of some of the anticancer agents utilized for the treatment of colorectal and other solid tumors, namely FOLFIRI (5-FU, leucovorin, irinotecan) and FOLFOX (5-FU, leucovorin, oxaliplatin) [9]. The administration of these widely used chemotherapy regimens to healthy mice was able to reproduce some of the alterations typical of cancer cachexia, including body weight loss, adipose tissue and skeletal muscle wasting, and weakness. Our evidence showed that the chemotherapy treatment was responsible for hyperactivation of the ERK1/2 signaling pathway, previously involved in the pathogenesis of cachexia [59], as well as for structural changes in the sarcomeres and for dramatic muscle mitochondrial depletion. Chemotherapy also led to abnormal oxidative metabolism and to an oxidative-to-glycolytic shift in fiber type composition [9].
Interestingly, these findings were in line with previously published data, suggesting that cancer and chemotherapy may promote the appearance of a cachectic phenotype by activating similar mechanisms [60]. Our observations were also subsequently validated by our comprehensive proteomic profiling aimed at comparing cachexia in a setting of cancer or following chemotherapy [61]. Importantly, trabecular bone tissue was also significantly affected by chemotherapy treatments. For instance, doxorubicin and, in particular, FOLFIRI were recently shown to promote dramatic loss of bone [62,63], whereas aromatase inhibitors, usually prescribed as the standard of care in the therapy of postmenopausal breast cancer, were shown to promote osteolysis by activating osteoclast-mediated bone resorption and to exacerbate muscle weakness in animals bearing estrogen-receptor negative breast cancers [64]. Altogether, these findings suggest that administration of compounds with cytotoxic and antiproliferative properties promotes muscle and bone derangements by activating a wide range of mechanisms.
PRESERVATION OF MUSCLE MASS AS A TOOL TO COUNTERACT CHEMOTHERAPY TOXICITY
Experimental and clinical data support the importance of the relationship between muscle mass and the response to chemotherapy, thus also suggesting that preservation of muscle mass per se represents a novel strategy to ultimately prevent chemotherapy toxicity and improve quality of life with cancer. Agents targeting skeletal muscle anabolism have been tested with the goal of preserving muscle mass in the presence of cancer and following the treatment with chemotherapy drugs (Fig. 1). In 2008, Garcia et al. [65] proposed treatment with ghrelin, a potent growth hormone secretagogue endowed with orexigenic and neuroprotective properties, as a method to counteract cisplatin-associated loss of body and muscle weight. The molecular mechanisms involved in determining a better muscle phenotype and improved survival in tumor hosts exposed to chemotherapy included down-regulation of inflammation and p38/C/EBPβ/myostatin signaling, as well as activation of Akt and myogenic factors, such as myogenin and MyoD [58]. Another group showed that synthetic agonists of the ghrelin receptor counteract chemotherapy-induced toxicity by effectively preventing muscle wasting [69]. Interestingly, our recent studies found that ACVR2B/Fc also exerts powerful protective effects related to the preservation of bone mass in animals chronically administered FOLFIRI in combination with ACVR2B/Fc. These findings further demonstrate that the preservation of muscle mass may provide a tool to counteract chemotherapy toxicity and identify a potential strategy for the detection of early cancer-associated musculoskeletal deficits following cancer treatments [63].
CONCLUSION
Changes in body composition, mainly resulting in depletion of skeletal muscle mass, have been linked to the use of anticancer drugs. On the basis of a growing number of experimental and clinical studies, there is now substantial agreement on the idea that the loss of lean mass represents an accurate prognostic factor for augmented treatment toxicity, worsened outcomes, and overall reduced survival in cancer patients. In an attempt to identify the molecular causes responsible for musculoskeletal disorders upon treatment with chemotherapy, several research groups have focused their attention on the in-vivo effects of commonly used chemotherapy regimens, including cisplatin, doxorubicin, and FOLFIRI. These findings support the idea that muscle size and function are primarily affected by the activation of signaling pathways previously implicated in promoting muscle atrophy and by processes that impinge on mitochondrial metabolism and muscle protein homeostasis. A series of promising experimental data also suggests the use of muscle pro-anabolic strategies as powerful tools to spare lean tissue in a setting of cancer or chemotherapy (Fig. 1). However, additional studies are necessary to establish novel methods for the accurate assessment of body composition in patients with cancer, with the ultimate goal of monitoring the changes in fat, muscle, and bone tissue that follow the treatment with chemotherapy. Completion of this endeavor will ultimately allow simultaneous adjustment of drug dosing, thus also attaining a reduction in musculoskeletal side effects in patients with cancer.
Inflammatory bowel disease in sub-Saharan Africa: a protocol of a prospective registry with a nested case–control study
Introduction The epidemiology of inflammatory bowel disease (IBD) in sub-Saharan Africa is poorly documented. We have started a registry to determine the burden, phenotype, risk factors, disease course and outcomes of IBD in Zimbabwe. Methods and analysis A prospective observational registry with a nested case–control study has been established at a tertiary hospital in Harare, Zimbabwe. The registry is recruiting confirmed IBD cases from the hospital, and other facilities throughout Zimbabwe. Demographic and clinical data are obtained at baseline, 6 months and annually. Two age and sex-matched non-IBD controls per case are recruited—a sibling or second-degree relative, and a randomly selected individual from the same neighbourhood. Cases and controls are interviewed for potential risk factors of IBD, and dietary intake using a food frequency questionnaire. Stool is collected for 16S rRNA-based microbiota profiling, and along with germline DNA from peripheral blood, is being biobanked. The estimated sample size is 86 cases and 172 controls, and the overall registry is anticipated to run for at least 5 years. Descriptive statistics will be used to describe the demographic and phenotypic characteristics of IBD, and incidence and prevalence will be estimated for Harare. Risk factors for IBD will be analysed using conditional logistic regression. For microbial analysis, alpha diversity and beta diversity will be compared between cases and controls, and between IBD phenotypes. Mann-Whitney U tests for alpha diversity and Adonis (Permutational Multivariate Analysis of Variance) for beta diversity will be computed. Ethics and dissemination Ethical approval has been obtained from the Parirenyatwa Hospital’s and University of Zimbabwe’s research ethics committee and the Medical Research Council of Zimbabwe. Findings will be discussed with patients, and the Zimbabwean Ministry of Health. 
Results will be presented at scientific meetings, published in peer-reviewed journals, and on social media. Trial registration number NCT04178408.
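The planned alpha-diversity comparison between cases and controls can be sketched with standard tools; the Shannon-index values below are synthetic placeholders for illustration, not data from the registry.

```python
# Sketch of the planned alpha-diversity comparison (Mann-Whitney U test
# between IBD cases and non-IBD controls). The Shannon-index values
# below are synthetic placeholders, not data from the registry.
from scipy.stats import mannwhitneyu

cases = [2.1, 2.4, 1.9, 2.0, 2.3, 1.8]      # hypothetical Shannon indices, IBD cases
controls = [2.8, 3.1, 2.9, 3.0, 2.7, 3.2]   # hypothetical Shannon indices, controls

stat, p = mannwhitneyu(cases, controls, alternative="two-sided")
print(f"U = {stat}, p = {p:.4f}")
```

With these synthetic values every case lies below every control, so the U statistic for the first sample is 0 and the two-sided p-value is small; real registry data would of course overlap far more.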
GENERAL COMMENTS
The paper is interesting in that, for the first time, it attempts to include a control group alongside a registry. The main problem with this manuscript is the lack of clarity and the ambiguity of the questionnaire by which the patient data will be collected. The authors should include the complete version of the questionnaire in an appendix so that its validity can be reviewed and checked.
Vineet Ahuja
All India Institute of Medical Sciences, New Delhi, India REVIEW RETURNED 23-Aug-2020
GENERAL COMMENTS
This is a great initiative and I congratulate the authors on having thought of a well-planned strategy to address this issue. A note of caution from my side: 1. In countries where IBD is emerging, general awareness of the disease presentation is low. Hence, most of the mild cases will either present to, or be on follow-up with, physicians rather than gastroenterologists. Such patients are often treated for long as cases of haemorrhoidal bleed or recurrent GI infection before a definite diagnosis is made. Hence, the plan for increasing awareness and advertising the registry will have to be seriously pursued; otherwise, there may be a risk of underreporting of IBD cases. 2. In Zimbabwe, intestinal TB is present. It would be a challenge to differentiate between intestinal TB and Crohn's in this setting, and making a definite diagnosis would take time. This would be very important for the registry. I hope the authors have planned a strategy to address this challenge.
Reviewer: 1
The authors propose an ambitious long-term (>5 year) study of IBD risk factors, phenotype, burden, disease course/outcomes in a sub-Saharan African referral center. The primary analysis will focus on risk factors for IBD and other analyses will be descriptive. The planned sample size is based on an a priori calculation to study potential risk factors for this disease in the Parirenyatwa hospital in Harare, Zimbabwe. IBD incidence is rising in many developing countries and the patient care experience of the investigative team leads them to suspect that UC in particular may be more common in Zimbabwe than previously appreciated. The proposal is clearly written, comprehensive and ethically sound. The authors should more clearly address a few potential methodologic concerns: • The medical literature on IBD natural history is widely contaminated by referral bias. Population-level investigation may be very difficult, if not impossible, in Zimbabwe. This study will be conducted at a tertiary care center likely to catch referred and therefore severe cases. This will potentially overestimate disease incidence, descriptions of phenotype and natural history/outcomes assessments.
This limitation must be clearly described. The data generated by this study could be used to support population wide efforts in the future. Response: The reviewer raises a critical point which we have added to the limitations section at the end of the discussion (page 9, lines 13-end).
• Control patients are intended to be the "first consenting sibling nearest in age to the case." However, these persons may share many, if not all, of the environmental risk factors under consideration; this could make it difficult to measure a significant association of these factors, even when a relationship exists (a type II error). Enrollment of the planned alternative, neighborhood controls, would be a stronger design.
Response: We thank the reviewer for raising this important point that the family control will potentially introduce a type II error for the environmental factors. While the neighbourhood control will enable us to broadly answer differences in environmental factors, the familial or sibling control might allow us to better explore specific differences in environment and/or gut microbiota between affected and unaffected siblings under highly similar conditions overall. We will try to minimise (and identify) the impact of the bias introduced by the sibling control by performing the statistical analysis in the case–control study in a step-wise fashion: cases versus neighbourhood controls, cases versus siblings, and cases versus siblings and neighbourhood controls combined. We have clarified this analytical strategy in the statistical plan (page 7, 3rd paragraph).
• Consider measuring the cost of care; one of the major motivating factors for population level investigation of IBD in many countries has been the high burden that is placed on the healthcare economy. Demonstration of this in the study population might be an important lever for support of additional studies.
Response: This is an important issue, and we will consider adding this as an objective. However, it will require the addition of appropriate skills to the collaborating team, and we will have to discuss with our IRBs since this will likely require a formal amendment of the protocol. Thus, we are unable to add costs of care to the manuscript. However, we will strongly consider adding costs of care to our study protocol.
Reviewer: 2
The paper is interesting in that, for the first time, it attempts to include a control group alongside a registry. The main problem with this manuscript is the lack of clarity and the ambiguity of the questionnaire by which the patient data will be collected. The authors should include the complete version of the questionnaire in an appendix so that its validity can be reviewed and checked. Response: Thank you for this comment. We will submit the questionnaires as supplementary information.
Reviewer: 3 This is a great initiative and I congratulate the authors on having thought of a well-planned strategy to address this issue. A note of caution from my side: 1. In countries where IBD is emerging, general awareness of the disease presentation is low. Hence, most of the mild cases will either present to, or be on follow-up with, physicians rather than gastroenterologists. Such patients are often treated for long as cases of haemorrhoidal bleed or recurrent GI infection before a definite diagnosis is made. Hence, the plan for increasing awareness and advertising the registry will have to be seriously pursued; otherwise, there may be a risk of underreporting of IBD cases.
Response: Thank you for raising this valid point. We specifically plan to use the registry as a vehicle for increasing awareness of IBD, and plans for such programs are in place. Please also compare our response to the first comment of reviewer 1. The potential referral bias is now clearly stated in the limitations section of our manuscript at the end of the discussion (page 9, lines 13-end).
GENERAL COMMENTS
The authors have satisfactorily edited the protocol in response to my prior review.
Vineet Ahuja
All India Institute of Medical Sciences, New Delhi, India REVIEW RETURNED 01-Oct-2020
GENERAL COMMENTS
I am satisfied with the revised manuscript and have no further comments. | 2020-12-24T09:07:28.547Z | 2020-12-01T00:00:00.000 | {
"year": 2020,
"sha1": "b2a85aabc738c20a8e71553175b25fa8422b3f21",
"oa_license": "CCBYNC",
"oa_url": "https://bmjopen.bmj.com/content/bmjopen/10/12/e039456.full.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4028ff1ca5f3e63651226b0ff278c272ef432ad7",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
257210050 | pes2o/s2orc | v3-fos-license | The Complexity of Peripheral Arterial Disease and Coronary Artery Disease in Diabetic Patients: An Observational Study
Background Atherosclerosis is a systemic disease that causes luminal narrowing. Patients with peripheral arterial disease (PAD) also exhibit an increased risk of death from cardiovascular complications. This risk is the same for symptomatic or asymptomatic patients. Over a 5-year period, patients with PAD have a 20% chance of suffering from a stroke or myocardial infarction. Additionally, their mortality rate is 30%. This study aimed to assess the relationship between coronary artery disease (CAD) complexity using SYNTAX score and PAD complexity using Trans-Atlantic Inter-Society Consensus II (TASC II) score. Methods The study was designed as a single-center cross-sectional observational study and included 50 diabetic patients referred for elective coronary angiography, in whom peripheral angiography was also performed. Results Most of the patients were males (80%) and smokers (80%) with a mean age of 62 years. The mean SYNTAX score was 19.88. There was a significant negative correlation between SYNTAX score and ankle brachial index (ABI) (r = -0.48, P = 0.001) and a significant positive correlation with glycated hemoglobin (HbA1c) level (R² = 0.26, P = 0.004). Complex PAD was found in nearly half of the patients, with 48% having TASC II C or D classes. Those with TASC II classes C and D had higher SYNTAX scores (P = 0.046). Conclusions Diabetic patients with more complex CAD had more complex PAD. In diabetic patients with CAD, those with worse glycemic control had higher SYNTAX scores, and the higher the SYNTAX score, the lower the ABI.
Introduction
Atherosclerosis is a systemic disease of the large and medium-sized arteries causing luminal narrowing (focal or diffuse) [1].
Patients with peripheral arterial disease (PAD) have a 2-3% chance of suffering a myocardial infarction (MI). They have a 2-3 times higher chance of developing angina compared to age-matched controls. Patients with PAD also exhibit an increased risk of death from cardiovascular complications [2]. This risk is the same for patients with symptomatic PAD or asymptomatic PAD. Over a 5-year period, patients with PAD have a 20% chance of suffering from a stroke or MI. Additionally, their mortality rate is 30% [3].
The AGATHA study discovered that patients with PAD in only one vascular territory had a 35% chance of having disease in at least one other territory. This information suggests that patients with PAD have a significantly greater risk of developing cerebrovascular disease or coronary heart disease [2,3].
By the time a patient presents with PAD, it is already severe. About 20-30% of patients with PAD also have diabetes mellitus. However, it is likely that only a fraction of PAD cases is diagnosed, owing to asymptomatic presentations and lower degrees of PAD. To detect low degrees of PAD, a patient can be tested using the ankle brachial index (ABI) [3].
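The ABI screening test mentioned above can be sketched in a few lines. This is a minimal illustration of the conventional calculation (the higher ankle systolic pressure divided by the higher brachial systolic pressure); the <0.9 PAD threshold matches the one used later in this study, while the other category bounds are the commonly quoted ones and are not taken from this paper.

```python
# Minimal sketch of the conventional ankle-brachial index (ABI): the
# higher ankle systolic pressure divided by the higher brachial systolic
# pressure. The <0.9 PAD threshold matches the one used in this study;
# the other category bounds are the commonly quoted ones.
def ankle_brachial_index(ankle_dp, ankle_pt, brachial_left, brachial_right):
    """All arguments are systolic pressures in mm Hg."""
    return max(ankle_dp, ankle_pt) / max(brachial_left, brachial_right)

def classify_abi(abi):
    if abi > 1.3:
        return "non-compressible (calcified) vessels"
    if abi >= 0.9:
        return "normal"
    if abi >= 0.5:
        return "mild-to-moderate PAD"
    return "severe PAD"

abi = ankle_brachial_index(ankle_dp=95, ankle_pt=90,
                           brachial_left=120, brachial_right=118)
print(round(abi, 2), classify_abi(abi))  # 0.79 mild-to-moderate PAD
```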
The SYNTAX score is a validated scoring system used for the assessment of the complexity and anatomical severity of coronary artery disease (CAD). It correlates well with major adverse cardiovascular events and cardiovascular mortality [4,5].
Trans-Atlantic Inter-Society Consensus II (TASC II) classification is an internationally derived definition that is dedicated for the assessment of PAD according to anatomical distribution, number, and nature of lesions [6].
In this study, we tried to determine the relationship between the complexity of PAD and CAD in a group of diabetic patients, as most of the available data included diabetics as well as non-diabetics.
Materials and Methods
This is a prospective cross-sectional observational study. It included 50 diabetic patients referred for elective coronary angiography with documented CAD and PAD detected by peripheral angiogram. An informed written consent was taken from all patients. The aim of this study was to assess the relationship between CAD complexity using SYNTAX score and PAD complexity using TASC II classification in diabetic patients.
The study was approved by the local Institutional Ethical Committee of Faculty of Medicine, Ain Shams University. This study was conducted in compliance with the ethical standards of the responsible institution on human subjects as well as with the Helsinki Declaration.
Sampling
All 50 patients, in the time period from August 2016 till July 2017, were enrolled using quota sampling from patients referred for elective coronary angiogram with documented PAD by examination and confirmed by duplex ultrasound.
Inclusion criteria
Patients with chronic coronary syndrome undergoing diagnostic coronary angiography with documented CAD and PAD were included.
Procedures
Procedures included: 1) History taking. 2) Complete physical examination. 3) Laboratory investigations. 4) Coronary angiography. It was performed using conventional techniques and was analyzed by experienced interventional cardiologists. 5) SYNTAX scoring. SYNTAX score was calculated using an online calculator [7]. A SYNTAX score of 0 indicates no measurable coronary disease, while a score of ≥1 indicates the presence of CAD, with CAD complexity increasing as the SYNTAX score increases. The SYNTAX score algorithm includes: dominance; number of lesions; segments involved per lesion with lesion characteristics; total occlusions with subtotal occlusions (number of segments, age of total occlusion, blunt stump, bridge collaterals, first segment beyond occlusion visible by anterograde or retrograde filling, and side branch involvement); trifurcation, number of diseased segments; bifurcation type and angulation; aorto-ostial lesion; severe tortuosity; lesion length; heavy calcification; thrombus; diffuse disease with numbers of segments. 6) Peripheral angiography and TASC II classification (Tables 1, 2 [6]). Peripheral angiography was done at the same index coronary angiography procedure by an experienced interventional cardiologist for both lower limbs. It was performed using conventional techniques and was analyzed by experienced interventional cardiologists.
Statistical analysis
We used the statistical package SPSS 17. Continuous variables were reported as means ± standard deviation (SD) or as medians with interquartile ranges (IQRs).
Baseline characteristics
Most of the 50 diabetic patients were males (80%), hypertensives (84%) and smokers (80%). All of them had high low-density lipoprotein (LDL) levels, with a mean of 157 mg/dL, and the majority had uncontrolled diabetes mellitus (DM) with high levels of glycated hemoglobin (HbA1c), with a mean of 8.47±1.47%. Most of the cohort had no symptoms of PAD (94%) (Table 3).
TASC II classification
The PAD classification and complexity were nearly evenly distributed, with 24% TASC II A, 28% TASC II B, 22% TASC II C and 26% TASC II D (Fig. 2).
Relationship between complexity of CAD and PAD
Complex PAD anatomy represented by TASC II C and D classes showed higher SYNTAX scores (P = 0.046) (Fig. 3, Table 4). The highest median (IQR) SYNTAX score was found in patients with TASC II class C: 26.5 (19-36) (P < 0.001) (Fig. 4).
Relationship between complexity of PAD and ABI
The mean ABI was 0.8 ± 0.07, and ABI was lower with increased complexity of PAD, as shown in Table 5, where all the patients in the TASC II D class had a low ABI of < 0.9 (P = 0.001).
Relationship between complexity of CAD and ABI
The higher the SYNTAX score, the lower the ABI, with a significant negative correlation between SYNTAX score and ABI (correlation coefficient = -0.48, R² = 0.23; P < 0.001) (Fig. 5).
Relationship between complexity of CAD and HbA1c
The higher the HbA1c, the higher the SYNTAX score, with a significant positive correlation between SYNTAX score and HbA1c (correlation coefficient = 0.51, R² = 0.26; P < 0.01) (Fig. 6).
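As a quick consistency check, the reported coefficients of determination are simply the squared Pearson correlation coefficients:

```python
# The coefficient of determination of a simple linear fit is the square
# of the Pearson correlation coefficient, so the reported (r, R^2) pairs
# can be cross-checked directly.
for r in (-0.48, 0.51):
    print(f"r = {r:+.2f}  ->  R^2 = {r ** 2:.2f}")
# r = -0.48  ->  R^2 = 0.23
# r = +0.51  ->  R^2 = 0.26
```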
Discussion
In this prospective cross-sectional observational study, we investigated the relationship between the complexity of CAD assessed by SYNTAX score and the complexity of associated PAD assessed by TASC II class in a group of diabetic patients with normal kidney functions.
Baseline characteristics
Most of the studied Egyptian cohort were males, smokers, and hypertensives, with high LDL levels and inadequate glycemic control. These represent the traditional risk factors for atherosclerosis and were highly prevalent in the studied group, in line with the registry published by Shaheen et al [8] within the European Society of Cardiology Registry on ST-elevation MI, in which, compared to other countries, Egyptian patients had a higher prevalence of traditional risk factors.
Relationship between complexity of CAD and PAD
Complex PAD anatomy represented by TASC II C and D classes showed higher SYNTAX scores. The median SYNTAX scores of diabetic patients with TASC II A, TASC II B, TASC II C, and TASC II D were 10, 13.5, 26.5, and 19.5, respectively, compared with the results in the study by Aykan et al [9]. They studied 449 patients, 30% of whom had diabetes. They found that patients with TASC II A, TASC II B, TASC II C, and TASC II D had median SYNTAX scores of 13.25, 14, 19, and 19, respectively. These findings were relatively higher in diabetic patients. Vuruskan et al [10] developed a scoring system (total peripheral score (TPS)) using the TASC II classification and the SYNTAX II score to predict CAD severity in patients with lower extremity arterial disease. They showed a modest positive correlation between TPS and SYNTAX (Pearson correlation = 0.467, P < 0.001).
Relationship between complexity of CAD and glycemic control
In the current study, mean HbA1c was 8.47% (higher than the target value in current guidelines) and was significantly positively correlated with CAD severity as represented by the SYNTAX score. The higher the HbA1c level, the higher the SYNTAX score. Our results were consistent with similar findings by Dar et al [11]. They studied the prevalence of type 2 DM and the association of HbA1c with severity of CAD in patients presenting as non-diabetic acute coronary syndrome and found a significant positive correlation between HbA1c and CAD complexity represented by the Gensini score.
Study limitations
The main limitation of this study is being a single-center trial with a relatively limited number of participants. Enrolling a larger number of patients could also allow stronger correlations to be demonstrated in future studies. The exclusion of patients with renal impairment also removed an important risk factor in diabetic patients with atherosclerotic CAD.
Conclusions
Diabetic patients with more complex CAD had more complex PAD. In diabetic patients with CAD, those with worse glycemic control had higher SYNTAX scores and the higher the SYNTAX score, the lower the ABI. | 2023-02-27T16:10:12.989Z | 2023-02-01T00:00:00.000 | {
"year": 2023,
"sha1": "2f9d84056e2625ae80765ff74077acf5dd060a8a",
"oa_license": "CCBYNC",
"oa_url": "https://cardiologyres.org/index.php/Cardiologyres/article/download/1463/1412",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "32b5757017167f6563bc16755486ecb8ab04fdcc",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
11213239 | pes2o/s2orc | v3-fos-license | Electronic Alert System for Improving Stroke Prevention Among Hospitalized Oral‐Anticoagulation‐Naïve Patients With Atrial Fibrillation: A Randomized Trial
Background Many patients with atrial fibrillation (AF) do not receive oral anticoagulants (OAC) for the prevention of stroke and systemic embolism. We aimed to improve the prescription of OAC among hospitalized patients with AF. Methods and Results We developed a computer‐based electronic alert system for identifying hospitalized OAC‐naïve patients with AF. The alert system contained a CHA 2 DS 2‐VASc score calculation tool and provided recommendations for OAC prescription. The alert system was tested in a 1:1 randomized controlled trial at the University Hospital Bern: Patients with suspected AF without an active prescription order were allocated to an alert group in which an alert was issued in the electronic patient chart and order entry system or to a control group in which no alert was issued. The primary end point was the rate of adequate OAC prescription at hospital discharge, defined as prescription in OAC‐naïve men and women with CHA 2 DS 2‐VASc score ≥1 and ≥2, respectively. Overall, 889 OAC‐naïve patients (455 from the alert group and 434 from the control group) were eligible for analysis. Although the CHA 2 DS 2‐VASc score module was used in only 48 (10.5%) patients from the alert group, 100 (22.0%) patients from the alert group versus 69 (15.9%) from the control group received adequate OAC prescription (relative risk 1.38; P=0.021). OAC or antiplatelet therapy was prescribed in 325 (71.4%) patients from the alert group versus 271 (62.4%) from the control group (P=0.004). Conclusions Versus standard care, the alert system modestly improved OAC prescription among consecutive hospitalized AF patients. Clinical Trial Registration URL: https://www.clinicaltrials.gov. Unique identifier: NCT02455102.
Atrial fibrillation (AF) is the most common heart rhythm disorder and its prevalence is consistently increasing. 1,2 One quarter of all 40-year-olds will develop this arrhythmia during the course of their lives. 3 AF increases the risk of stroke by 5 times 4 and doubles the risk of cardiovascular deaths and strokes after just 1 year. 5 Oral anticoagulation therapy (OAC) with vitamin K antagonists reduces the risk of stroke and systemic embolism by 80% compared with placebo. 6 The direct oral anticoagulants are at least as effective as vitamin K antagonists. However, direct oral anticoagulants confer improved safety as compared with vitamin K antagonists in terms of bleeding complications. [7][8][9][10] Current guidelines of the American Heart Association, the American College of Cardiology, and the European Society of Cardiology recommend the calculation of the CHA 2 DS 2 -VASc score in all patients with AF. 6,11 In patients with a score of ≥1 point, with the exception of women without additional risk factors, OAC is recommended for the prevention of stroke and systemic embolism. 11 However, many patients with AF do not take OAC as recommended by the guidelines. 12 Undertreatment can be reduced by increasing adherence rates to medications at the patient level. 13 However, better quality of treatment may also be achieved by making the physicians in charge aware of a problem that needs to be addressed. In this respect, computer-based electronic alert systems or clinical decision support systems may improve the prescription of recommended therapy among hospitalized patients. 14,15 In a randomized controlled clinical trial, a single computer alert to the physician in charge increased the rate of adequate prescriptions of thromboprophylaxis and reduced the rate of venous thromboembolism by 41%. 16 It remains unclear whether computer-based electronic alert systems improve adequate OAC prescription among hospitalized AF patients.
Alert System
We developed a computer-based electronic alert system for identifying consecutive hospitalized OAC-naïve patients with AF and tested the hypothesis that such an alert system would improve OAC prescription.
The alert system automatically identified hospitalized patients with AF without an active OAC prescription in the electronic order entry system. The alert system was incorporated into the electronic medical chart and order entry system of the University Hospital in Bern, Switzerland. It recognized AF by permanently searching diagnosis lists and physician notes of the entire electronic patient chart database for free text entries of AF or its various abbreviations. Alerts were issued 24 hours after the onset of hospital stay if the following 4 criteria for an individual patient were present: (1) AF detected by search criteria; (2) no active prescription order for anticoagulants, including unfractionated or low-molecular-weight heparin, fondaparinux, direct oral anticoagulants, or vitamin K antagonists; (3) at least 1 electronic drug prescription order other than an anticoagulant had to be in place in the order entry system; and (4) the patient was randomized to the alert group. Once the criteria were fulfilled, the alert was issued in the electronic patient chart. The alert was visible to physicians and nurses, but only physicians were enabled to respond to the alert. In the first alert screen (Figure 1), the physician was notified that this patient had suspected AF without an active OAC prescription. In addition, the physician was asked to confirm the presence of AF. The physician in charge had the option to complete the CHA 2 DS 2 -VASc score electronically or to reject the alert if there was no AF. If AF was present but the physician was unable to complete the CHA 2 DS 2 -VASc score, he or she was able to postpone the action 3 times. During this time, the alert remained active. After 3 times of rejecting the alert, the physician in charge was informed that the alert would permanently disappear from the electronic patient chart.
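The scoring logic behind the CHA2DS2-VASc module described above can be sketched as follows. This is a minimal re-implementation of the published score items and of the trial's sex-specific thresholds, not the hospital's actual code.

```python
# Minimal re-implementation of the CHA2DS2-VASc items used by the score
# tool (not the hospital's actual code). Points follow the published
# score: CHF 1, hypertension 1, age >=75 2, diabetes 1, prior
# stroke/TIA/thromboembolism 2, vascular disease 1, age 65-74 1,
# female sex 1.
def cha2ds2_vasc(age, female, chf=False, hypertension=False, diabetes=False,
                 stroke_tia=False, vascular_disease=False):
    score = 2 if age >= 75 else (1 if age >= 65 else 0)
    score += 1 if female else 0
    score += 1 if chf else 0
    score += 1 if hypertension else 0
    score += 1 if diabetes else 0
    score += 2 if stroke_tia else 0
    score += 1 if vascular_disease else 0
    return score

def oac_recommended(age, female, **risk_factors):
    """Threshold used for the trial's primary end point:
    men with a score >=1, women with a score >=2."""
    return cha2ds2_vasc(age, female, **risk_factors) >= (2 if female else 1)

print(cha2ds2_vasc(age=68, female=True, hypertension=True))  # 3
print(oac_recommended(age=40, female=True))  # False (score of 1 from sex alone)
```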
If the physician in charge agreed to calculate the CHA 2 DS 2 -VASc score, a new screen with the CHA 2 DS 2 -VASc score items opened (Figure S1). The system automatically entered the information for the score items sex and age. The system also calculated the score once the remaining items of the score were entered. In men with a calculated score <1 and women with a score <2, no further information was provided to the physician and the alert disappeared. For all other patients with increased CHA 2 DS 2 -VASc score, an additional screen opened, containing the current recommendations from the European Society of Cardiology for stroke prevention in patients with AF. 11 The alert system was tested and adjusted in a passive run-in phase in collaboration with the IT Department of the University Hospital (Inselspital) Bern. Instructions for use of the alert system were provided in the electronic medical chart and order entry system. In addition, the heads of the medical departments were asked to inform their medical staff about the study.
(Figure 1. Electronic alert screen that is sent to physicians in charge of patients with atrial fibrillation but without oral anticoagulation treatment.)
Study Design
From September 2014 until October 2015 at the University Hospital Bern, we randomly assigned 1707 patients in a 1:1 fashion to the alert group (n=877) and to the control group (n=830) where no alert was issued. Randomization was performed electronically by automatically generating a number between 1 and 65 535 for each eligible patient. Patients with odd numbers were randomized to the alert group, whereas patients with even numbers were randomized to the control group.
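The parity-based allocation described above can be sketched in a few lines; the function name is illustrative, not taken from the study's actual software.

```python
# Sketch of the parity-based allocation described above: a random
# integer between 1 and 65535 is drawn for each eligible patient;
# odd numbers go to the alert group, even numbers to the control group.
import random

def allocate(rand_int=None):
    if rand_int is None:
        rand_int = random.randint(1, 65535)
    return "alert" if rand_int % 2 == 1 else "control"

print(allocate(12345))  # alert
print(allocate(4096))   # control
```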
All hospitalized patients aged ≥18 years with AF but without an active OAC prescription in the order entry system were included. There were no exclusion criteria.
The study (ClinicalTrials.gov identifier: NCT02455102) was approved by the institutional review board at the University of Bern. Informed consent by the patients was waived for the following reasons: (1) The study did not involve an intervention to patients but to physicians. The intervention served to remind the responsible physician to assess the stroke risk in patients with AF and to consider the prescription of preventive measures if an increased risk of stroke was present. Therefore, the alert was regarded as a clinical decision support system to help the physician to comply with current international consensus guidelines. However, the responsible physician solely carried the responsibility for ordering or not ordering measures to prevent stroke in patients with AF; (2) there was no direct or indirect patient contact during the study (chart review from the hospital stay only); and (3) a consent procedure in control group patients without alert was regarded as unethical and would have confounded the outcome.
As this was a hospital-wide quality improvement initiative, which involved all departments except pediatrics, approval was also obtained from the hospital management.
End Points
The primary end point of the study was the rate of adequate OAC prescription at hospital discharge, defined as prescription of any of the recommended drug regimens in OAC-naïve men with CHA 2 DS 2 -VASc score ≥1 and in OAC-naïve women with CHA 2 DS 2 -VASc score ≥2. Patients were considered OAC-naïve if they were not receiving OAC within 30 days prior to randomization. The secondary end point was the use of the CHA 2 DS 2 -VASc score calculation tool by the physician in charge. We also collected data to calculate the HASBLED score. This score indicates bleeding risk and includes the risk factors hypertension, abnormal renal or liver function, stroke, bleeding history or predisposition, labile INR, elderly, and drugs or alcohol abuse. 17
Statistical Analysis
For sample size calculation, we assumed a 30% rate of adequate OAC prescription in the alert group and a 20% rate in the control group. Using a power of 90% and a 2-sided alpha of 5%, at least 412 OAC-naïve AF patients per group were required to reject the null hypothesis. During the recruitment phase, we continuously monitored whether all randomized patients were OAC naïve and had confirmed AF by medical record review. We planned to terminate the patient recruitment phase once at least 412 OAC-naïve AF patients per group were eligible for analysis.
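The quoted figure of 412 patients per group is consistent with the continuity-corrected (Fleiss) formula for comparing two proportions; the sketch below assumes this was the formula used, which the paper does not state explicitly.

```python
# Reproducing the per-group sample size with the continuity-corrected
# (Fleiss) formula for two proportions: p1 = 0.30 vs p2 = 0.20,
# two-sided alpha = 0.05 (z = 1.959964), power = 0.90 (z = 1.281552).
import math

def fleiss_n_per_group(p1, p2, z_alpha=1.959964, z_power=1.281552):
    p_bar = (p1 + p2) / 2
    d = abs(p1 - p2)
    n = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
         + z_power * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2 / d ** 2
    # continuity correction
    return math.ceil(n / 4 * (1 + math.sqrt(1 + 4 / (n * d))) ** 2)

print(fleiss_n_per_group(0.30, 0.20))  # 412
```

Without the continuity correction the same inputs give roughly 392 per group, so the correction accounts for the extra 20 patients.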
Data for baseline characteristics and the primary and secondary end points are presented as absolute numbers and percentages or as means and standard deviations for categorical or continuous variables, respectively. P-values for differences between the groups with regard to end points are calculated with χ2 tests. All P-values are 2-sided and P-values <0.05 were considered significant. For the primary end point, we additionally calculated the relative risk for ordering adequate OAC prescription comparing the alert and control groups. All analyses were performed with IBM SPSS Statistics for Windows, Version 21.0 (IBM Corp, Armonk, NY).
Patient Characteristics
A total of 889 OAC-naïve AF patients were eligible for analysis (Figure 2). Mean (SD) age was 73.9 (11.3) years. Overall, 359 (40.3%), 48 (5.4%), and 31 (3.5%) patients had paroxysmal, permanent, and persistent AF, respectively, whereas in 451 (50.7%) patients, the type of AF was unknown. Both groups were balanced with respect to baseline characteristics with the exception of a higher rate of systemic hypertension, a trend toward a higher rate of transient ischemic attack, and a trend toward a lower rate of renal dysfunction in the alert group (Table 1). The most frequent reasons for hospital admission were acute coronary syndrome or other cardiovascular disease (30.0%), cancer (10.6%), nonpulmonary infection (7.9%), stroke (6.9%), and heart failure (6.4%) (Table 2). The mean duration of the hospital stay was 9.4 (10.8) days.
Very few patients had a CHA 2 DS 2 -VASc score of 0, and there were only 10 women with a CHA 2 DS 2 -VASc score of 1 (Table S1). Overall, 856 (96.3%) of the patients were OAC candidates based on the CHA 2 DS 2 -VASc score. There was no difference in the proportion of OAC candidates based on the CHA 2 DS 2 -VASc score between the alert (443; 97.4%) and the control (413; 95.2%) groups (P=0.083). On the other hand, 391 (44.0%) patients had a HASBLED score of ≥3. There was no difference in the proportion of patients with HASBLED score of ≥3 between the alert (208; 45.7%) and the control (183; 42.2%) groups (P=0.287). There were only 32 (3.6%) patients with a HASBLED score ≥5 (Table S2). There was no difference in the proportion of patients with HASBLED score of ≥5 between the alert (16; 3.5%) and the control (16; 3.7%) groups (P=0.892).
In 48 (10.5%) patients from the alert group, physicians used the electronic entry system to calculate the CHA 2 DS 2 -VASc score. Among these, 19 (39.6%) calculations were identical as compared with data obtained from discharge letters and 16 (37.5%) calculations differed by 1 point. Only 1 patient judged to have an increased score according to information from the discharge letter was classified as a low-risk patient by the physician in charge. In patients from the alert group, an OAC prescription was present in 11 (22.9%) patients whose physicians used the CHA 2 DS 2 -VASc score calculation tool and 89 (21.9%) patients whose physicians did not use it (P=0.87).
Discussion
In this randomized controlled clinical trial, the computer-based alert system increased adequate OAC prescription rates as compared to standard of care among consecutive hospitalized OAC-naïve patients with AF.
This finding is in agreement with the results of a previous study from our group testing the effectiveness of a clinical decision support system in the prevention of venous thromboembolism. 16 Taking the findings from these 2 randomized trials together, there is increasing evidence supporting the implementation of computerized decision support systems in cardiovascular medicine. Both alert systems significantly increased adequate prescription rates. In the present study, the observed rates of adequate OAC prescription of 22.0% in the alert group and 15.9% in the control group were somewhat lower than expected (30% versus 20%, respectively). Of note, the tool to calculate the CHA2DS2-VASc score was used in a minority of patients. Nevertheless, a simple reminder of untreated AF evidently increased awareness and improved the quality of treatment offered by the physicians in charge.
Due to the poor use of the CHA2DS2-VASc score tool, we were only able to evaluate the accuracy of the CHA2DS2-VASc score calculation in about 10% of the patients from the alert group. As compared with our calculations of the CHA2DS2-VASc score based on information from discharge letters, the CHA2DS2-VASc scores calculated with the tool were quite accurate. Only 1 patient was classified as low risk by the tool but as high risk by information from the discharge letter. Interestingly, the use of the CHA2DS2-VASc score calculation tool had no impact on the OAC prescription rate. Reasons for not using the CHA2DS2-VASc score tool may include knowledge of the CHA2DS2-VASc score prior to the alert, time constraints, the high rate of non-sense alerts, and known contraindications to OAC because of an increased risk of bleeding. In this respect, it was notable that 40% of the patients in the study had a HAS-BLED score ≥3, indicating increased bleeding risk. 11 This may also be a reason for the overall low OAC prescription rate in the study. Of relevance, a considerable proportion of patients received antiplatelet therapy, and, as compared with the control group, more patients from the alert group received OAC or antiplatelet therapy. A previous clinical decision support system was developed to facilitate clinical decision making with regard to OAC treatment in AF patients. 18 This system included the calculation of both the CHA2DS2-VASc and HAS-BLED scores. However, it did not include what we think is the main benefit of our electronic alert system, namely, issuing alerts for patients with suspected AF without an active OAC prescription.
A limitation of our alert system was the high rate of non-sense alerts for patients who were not OAC-naïve, because alerts were often issued before anticoagulation treatment was ordered through the electronic prescription system. Prior to the randomization phase, the rate of non-sense alerts was reduced by issuing alerts not directly on admission but 24 hours after the onset of the hospital stay. A few departments at the Inselspital used the electronic patient chart only for entering diagnoses, not for the prescription of pharmaceutical treatments. This problem was solved by sending alerts only if at least 1 drug had been prescribed electronically at 24 hours. Another problem was that many patients had paused OAC therapy due to planned surgical procedures. Patients with paused OAC therapy were identified by reviewing medical discharge letters and then excluded from the analysis. In addition, alerts were issued for several patients in whom AF was identified by the alert system but not confirmed by the treating physician. We think that the use of the CHA2DS2-VASc score calculation tool and the rate of adequate OAC prescription can be further improved by reducing the rate of non-sense alerts. We plan to continuously improve the identification of OAC-naïve patients and to integrate the alert system into routine clinical practice. Further limitations of our study were the single-center setting and the lack of systematic evaluation of why anticoagulation was not given. Physicians' acceptance of the alert was not assessed systematically either, nor was the potential impact of other electronic alert systems on the present alert system considered. The strengths of the present study include its randomized design and chart reviews of bleeding and stroke risks. The intervention was designed to modify physician behavior, but randomization was at the patient level.
Since individual physicians may have treated several patients, the observations are not independent, and the outcome analysis would typically account for clustering of patients within physicians. However, because more than 500 physicians were involved in the study, such a cluster analysis was deemed of limited added value.
An increase in the prescription rate of anticoagulant treatment in AF may translate into a reduction of the future risk of stroke. We did not collect data to investigate the effect of the alert system on the risk of stroke and systemic embolism; a prospective multicenter trial testing the effects of the alert system on clinical end points is therefore encouraged. The total cost of the entire project was US $230 000, the majority of which was used for the development and testing of the alert system. Because these are largely one-time development costs, the implementation of the alert system in other hospitals seems feasible.
In conclusion, we developed and tested a novel electronic decision support system for improving adequate stroke prevention measures among hospitalized OAC-naïve patients with AF. In comparison to routine clinical practice, this alert system modestly increased adequate OAC prescription. Our results suggest that hospitals with electronic patient chart and order entry systems may consider implementing similar computer-based alerts to increase physician awareness of untreated AF.
A Parameter-free Adaptive Resonance Theory-based Topological Clustering Algorithm Capable of Continual Learning
In general, a similarity threshold (i.e., a vigilance parameter) for a node learning process in Adaptive Resonance Theory (ART)-based algorithms has a significant impact on clustering performance. In addition, an edge deletion threshold in a topological clustering algorithm plays an important role in adaptively generating well-separated clusters during a self-organizing process. In this paper, we propose a new parameter-free ART-based topological clustering algorithm capable of continual learning by introducing parameter estimation methods. Experimental results with synthetic and real-world datasets show that the proposed algorithm has superior clustering performance to the state-of-the-art clustering algorithms without any parameter pre-specifications.
INTRODUCTION
The recent growth of Internet of Things (IoT) technology has enabled the creation and acquisition of a wide variety of data. Such data are regarded as important economic resources that can be used for marketing, finance, and IoT solution development. Supervised learning and unsupervised learning are typical approaches for analyzing the acquired data and extracting useful information. In general, supervised learning algorithms require a sufficient amount of labeled training data for achieving high information extraction performance. In contrast, unsupervised learning such as clustering can extract useful information without requiring labeled training data. k-means [1], Gaussian Mixture Model (GMM) [2], and Self-Organizing Map (SOM) [3] are typical clustering algorithms. Although k-means, GMM, and SOM are quite simple and highly applicable, these algorithms have to specify the number of clusters or the size of the network in advance. This drawback makes it difficult to apply these algorithms to data whose distributions are unknown and/or continually changing.
Growing Neural Gas (GNG) [4] and Adjusted Self-Organizing Incremental Neural Network (ASOINN) [5] can adaptively generate topological networks (i.e., nodes and edges) from the given data while maintaining superior clustering performance to conventional algorithms. Note that continual learning is generally categorized into three scenarios, namely domain incremental learning, task incremental learning, and class incremental learning [20], [21]. This paper focuses on class incremental learning problems.
The contributions of this paper are summarized as follows: (i) CAE is proposed as a new parameter-free ART-based clustering algorithm capable of continual learning. (ii) An estimation method of the number of nodes for calculating a similarity threshold is introduced to CAE. The sufficient number of nodes for calculating the threshold is estimated by a DPP-based criterion incorporating CIM. (iii) An estimation method of a node deletion threshold is introduced to CAE, which is inspired by the edge deletion method of SOINN+ [19]. The node deletion threshold is estimated based on the age of each edge. (iv) Empirical studies show that CAE has superior clustering performance to the state-of-the-art algorithms without any parameter specifications while maintaining continual learning ability.
The paper is organized as follows. Section 2 presents a literature review for growing self-organizing clustering and clustering-based classification algorithms capable of continual learning. Section 3 presents the preliminary knowledge for a similarity measure and a kernel density estimator used in CAE. Section 4 presents the learning procedure of CAE in detail. Section 5 presents extensive simulation experiments to evaluate clustering performance by using synthetic and real-world datasets. Section 6 concludes this paper.
Growing Self-organizing Clustering Algorithms
Classical clustering algorithms such as Gaussian Mixture Model (GMM) [2] and k-means [1] have shown their adaptability and applicability in many fields. However, the major drawback of these algorithms is that the number of clusters/partitions has to be pre-specified. To solve this problem, growing self-organizing clustering algorithms such as GNG [4] and ASOINN [5] have been proposed. GNG and ASOINN adaptively generate topological networks (i.e., nodes and edges) for representing the distributions of given data. However, since these algorithms permanently insert new nodes and edges for learning new information, they may forget previously learned information (i.e., catastrophic forgetting). More generally, this phenomenon is called the plasticity-stability dilemma [8]. As a GNG-based algorithm, Grow When Required (GWR) [22] can avoid the plasticity-stability dilemma by adding nodes whenever the state of the current network does not sufficiently match the instance. SOINN+ [19] is an ASOINN-based algorithm that can detect clusters of arbitrary shapes in noisy data streams without any pre-defined parameters. One common problem of GWR and SOINN+ is that as the number of nodes in the network increases, the cost of calculating a threshold for each node increases, and thus the learning efficiency decreases.
One promising approach for avoiding the plasticitystability dilemma is an ART-based algorithm that uses a pre-defined similarity threshold (i.e., a vigilance parameter) for controlling a learning process. Thanks to this property, many ART-based clustering algorithms and their improvements have been proposed [9], [14], [15], [23]. In particular, algorithms that use CIM as a similarity measure have shown superior clustering performance to other clustering algorithms [11], [24], [25], [26]. A well-known drawback of ART-based algorithms is the specification of significantly data-dependent parameters such as a similarity threshold (i.e., a vigilance parameter). Several studies have proposed to avoid and/or suppress the effect of the abovementioned drawback by applying multiple vigilance values [27], by specifying the vigilance parameter indirectly [12], [28], and by adjusting some data-dependent parameters during the learning process [29]. However, parameters to be pre-specified still exist in these algorithms, which affect their clustering performance.
Clustering-based Classification Algorithms Capable of Continual Learning
Recently, deep learning has shown outstanding capability in many fields, such as image processing and natural language processing [30], [31], [32]. In contrast, the continual learning ability of deep learning is not sufficient [33]. Continual learning is categorized into three scenarios [20], [21]: domain incremental learning [34], [35], task incremental learning [36], and class incremental learning [36], [37], [38], [39]. In general, deep learning capable of continual learning uses selective learning of weight coefficients between neurons or sequential addition of neurons in the output layer corresponding to new information. However, the major problem with this approach is that a network structure for representation learning is basically fixed, and therefore, there is an upper limit to the memory capacity of the entire network.
In general, a clustering algorithm can be applied to classification tasks by using a clustering result (e.g., cluster centroids, topological networks) as a classifier [16], [40], [41], [42]. Note that, in many cases, a clustering-based classification algorithm can inherit the continual learning ability from a clustering algorithm [14], [24], [43]. This fact enhances the attractiveness of clustering capable of continual learning and motivates us to develop clustering-based classification algorithms. Typical clustering-based classification algorithms are Episodic-GWR (EGWR) [7] and ASOINN Classifier (ASC) [5], which use GWR and ASOINN, respectively, for generating base classifiers. AutoCloud [44] is a fully online algorithm that autonomously creates clusters and merges them. AutoCloud is a quite flexible algorithm that can be used for unsupervised, semi-supervised, and supervised classification tasks. The state-of-the-art clustering-based classification algorithm is GSOINN+ [45]. GSOINN+ is an extended algorithm of SOINN+, which uses ghost nodes and a weighted nearest-neighbor rule based on the fractional distance for classification tasks. Although GSOINN+ successfully adapts to supervised learning, there are two parameters to be pre-specified for maintaining good classification performance. ART-based classification algorithms (e.g., ARTMAP) are also successful approaches [14], [24], [43], [46], [47], [48]. However, similar to ART-based clustering algorithms, ART-based classification algorithms also suffer from the difficulty of the specification of parameters for maintaining good classification performance. A small number of studies have focused on specification/adjustment methods for significantly data-dependent parameters [16], [48], [49], [50], [51]. However, these studies still have parameters that need to be adjusted for each dataset. This fact emphasizes the significance of developing parameter-free clustering algorithms capable of continual learning.
PRELIMINARY KNOWLEDGE
This section presents preliminary knowledge for a similarity measure and a kernel density estimator used in the proposed CAE algorithm.
Correntropy and Correntropy-induced Metric
Correntropy [13] provides a generalized similarity measure between two arbitrary data points $\mathbf{x} = (x_1, x_2, \ldots, x_d)$ and $\mathbf{y} = (y_1, y_2, \ldots, y_d)$ as follows:

$$C(\mathbf{x}, \mathbf{y}) = \mathrm{E}\left[\kappa_{\sigma}(\mathbf{x} - \mathbf{y})\right],$$

where $\mathrm{E}[\cdot]$ is the expectation operation, and $\kappa_{\sigma}(\cdot)$ denotes a positive definite kernel with a bandwidth $\sigma$. The correntropy can be estimated as follows:

$$\hat{C}(\mathbf{x}, \mathbf{y}) = \frac{1}{d}\sum_{i=1}^{d}\kappa_{\sigma}(x_i - y_i).$$

In this paper, we use the following Gaussian kernel in the correntropy:

$$\kappa_{\sigma}(x_i - y_i) = \exp\left(-\frac{(x_i - y_i)^2}{2\sigma^2}\right).$$

A nonlinear metric called CIM is derived from the correntropy [13]. CIM quantifies the similarity between two data points $\mathbf{x}$ and $\mathbf{y}$ as follows:

$$\mathrm{CIM}(\mathbf{x}, \mathbf{y}, \sigma) = \left[1 - \hat{C}(\mathbf{x}, \mathbf{y})\right]^{1/2}.$$

Here, thanks to the Gaussian kernel without the coefficient $\frac{1}{\sqrt{2\pi}\sigma}$, CIM is bounded in $[0, 1]$. In general, the Euclidean distance suffers from the curse of dimensionality. However, CIM reduces this drawback thanks to the correntropy, which calculates the similarity between two arbitrary data points by using a kernel function. Moreover, it has also been shown that CIM with the Gaussian kernel has a high outlier rejection capability [13].
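As a concrete illustration, CIM with the Gaussian kernel can be written in a few lines of Python (the paper's experiments use MATLAB; this standalone sketch only mirrors the definition of the metric):

```python
import math

def cim(x, y, sigma):
    """Correntropy-induced metric between two d-dimensional points.

    The correntropy estimate averages a Gaussian kernel over the
    per-attribute differences; CIM is (1 - correntropy)**0.5, and it
    stays in [0, 1] because the kernel omits the usual normalizing
    coefficient 1/(sqrt(2*pi)*sigma).
    """
    kernels = [math.exp(-((a - b) ** 2) / (2.0 * sigma ** 2))
               for a, b in zip(x, y)]
    correntropy = sum(kernels) / len(kernels)
    return math.sqrt(1.0 - correntropy)
```

Identical points give CIM = 0, and CIM saturates toward 1 as points move far apart relative to sigma, which is what gives the metric its outlier-rejection behavior.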
Kernel Density Estimator
In general, the bandwidth of a kernel function can be estimated from $\lambda$ instances belonging to a certain distribution [52]. The general estimator, (5) and (6), is defined by a rescale operator $\Gamma$ (a $d$-dimensional vector given by the standard deviation of the $d$ attributes among the $\lambda$ instances), the order $\nu$ of a kernel, a roughness function $R(F)$, and the moment $\kappa_{\nu}(F)$ of a kernel; the details of the derivation of (5) and (6) can be found in [52]. In this paper, we use the Gaussian kernel for CIM. Therefore, $\nu = 2$, $R(F) = (2\sqrt{\pi})^{-1}$, and $\kappa_{\nu}^{2}(F) = 1$ are derived, and (6) is rewritten as follows:

$$\mathbf{H} = \left(\frac{4}{2+d}\right)^{\frac{1}{4+d}} \lambda^{-\frac{1}{4+d}}\, \Gamma. \qquad (7)$$

Equation (7) is known as Silverman's rule [53]. Here, $\mathbf{H}$ contains the bandwidth of each attribute.
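Under Silverman's rule, with the rescale operator taken as the per-attribute sample standard deviation as described above, the bandwidth vector H can be sketched as:

```python
import numpy as np

def silverman_bandwidth(instances):
    """Per-attribute Gaussian-kernel bandwidths via Silverman's rule.

    instances: array-like of shape (lam, d) holding the lam instances
    used for estimation. Returns the d-dimensional bandwidth vector H.
    """
    X = np.asarray(instances, dtype=float)
    lam, d = X.shape
    gamma = X.std(axis=0, ddof=1)                     # rescale operator
    factor = (4.0 / (2.0 + d)) ** (1.0 / (4.0 + d))   # Silverman constant
    return factor * lam ** (-1.0 / (4.0 + d)) * gamma
```

The median of the returned vector can then serve as the single scalar CIM bandwidth, as described later for the diversity estimate.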
Overview
The proposed CAE algorithm is an extended algorithm of CAEA [12] which is the state-of-the-art ART-based topological clustering algorithm. CAEA has two data-dependent parameters to be pre-specified, i.e., the number of nodes λ for calculating a similarity threshold, and a threshold a max for an edge deletion process. Different from CAEA, CAE estimates these parameters during a learning process, i.e., the sufficient number of nodes λ for calculating a similarity threshold is estimated by a DPP-based criterion incorporating CIM, and the node deletion threshold a max is estimated based on the age of each edge. Table 1 summarizes the main notations used in this paper. The following subsections provide the learning processes of CAE step by step. Algorithm 1 summarizes the entire learning procedure of CAE.
Table 1. Summary of main notations:

  y_{s1}: 1st winner node
  y_{s2}: 2nd winner node
  σ: bandwidth of a kernel function
  N_k: set of neighbor nodes of node y_k
  λ: number of active nodes
  A: set of active nodes
  D: diversity of active nodes
  V_{s1}: CIM value between a data point x and y_{s1}
  V_{s2}: CIM value between a data point x and y_{s2}
  R: matrix of pairwise similarities
  V_threshold: similarity threshold (a vigilance parameter)
  M_k: number of winning counts of y_k
  M: set of winning counts M_k (M_k ∈ M)
  a(y_k, y_l): age of an edge between nodes y_k and y_l ∈ Y \ y_k
  E: set of ages of edges (a(y_k, y_l) ∈ E)
  α_del: set of ages of deleted edges
  a_max: edge deletion threshold

Algorithm 1: Learning procedure of CAE
  Input: a set of data points X.
  Output: a set of nodes Y, a set of ages of edges E.
  1   while existing data points to be trained do
  2       Input a data point x (x ∈ X).
  3       if the number of active nodes λ is not defined, or the number of nodes |Y| is smaller than λ/2, or a similarity threshold V_threshold is not calculated then
  4           Create a node as y_{|Y|+1} = x, and update the set of nodes as Y ← Y ∪ {y_{|Y|+1}}.
  5           Estimate the diversity of nodes.  // Algorithm 2
  6           if the diversity of nodes is sufficient (i.e., D < 1.0e−6) then
  7               Calculate a similarity threshold V_threshold.
  8       else
  9           Select the 1st and 2nd nearest nodes from x (i.e., y_{s1} and y_{s2}) based on CIM.
  10          Perform the vigilance test, and create/update nodes and edges.  // Algorithm 3
  11          Estimate an edge deletion threshold a_max.  // Algorithm 4
  12          Delete edges by using a_max.  // Algorithm 5
  13      if the number of presented data points is a multiple of λ then
  14          Delete isolated nodes.
Estimation of Diversity of Nodes
In CAE, a similarity threshold is defined by pairwise similarities among the nodes in Y (see Section 4.3 for details). Thus, the diversity of the nodes used for calculating the similarity threshold is important for obtaining an appropriate threshold value, which leads to good clustering performance. The diversity $D$ of the node set $\mathcal{Y}$ is estimated by a DPP-based criterion [17], [18] incorporating CIM as follows:

$$D = \det(\mathbf{R}), \qquad (8)$$

where $\mathbf{R}$ is a matrix of pairwise similarities between the nodes in $\mathcal{Y}$, computed by CIM. A bandwidth $\sigma$ for CIM is calculated from $\mathbf{H}$ in (7) by using the set of nodes $\mathcal{Y}$. As in (7), $\mathbf{H}$ contains a bandwidth for each attribute. In this paper, the median of $\mathbf{H}$ is used as the bandwidth of the Gaussian kernel in CIM, i.e.,

$$\sigma = \mathrm{median}(\mathbf{H}). \qquad (10)$$

In general, the diversity $D = 0$ means that the set of nodes $\mathcal{Y}$ is not diverse, while $D > 0$ means $\mathcal{Y}$ is diverse. In other words, the value of $D$ becomes close to zero when a new node is created around the existing nodes.
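A determinant-based diversity check can be sketched as follows. Mapping CIM to a similarity via exp(-CIM^2) is an illustrative assumption, since the paper only states that R holds pairwise similarities computed with CIM:

```python
import numpy as np

def _cim(x, y, sigma):
    """Correntropy-induced metric (Gaussian kernel, no normalizing coefficient)."""
    diff = np.asarray(x, float) - np.asarray(y, float)
    return np.sqrt(1.0 - np.exp(-diff ** 2 / (2.0 * sigma ** 2)).mean())

def node_diversity(nodes, sigma):
    """DPP-style diversity: determinant of a pairwise-similarity matrix R.

    The similarity map exp(-CIM**2) is an assumption for this sketch.
    """
    n = len(nodes)
    R = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            R[i, j] = np.exp(-_cim(nodes[i], nodes[j], sigma) ** 2)
    return np.linalg.det(R)
```

Duplicated or near-duplicated nodes make R nearly singular, so the determinant collapses toward zero, matching the D < 1.0e-6 stopping rule in Algorithm 1.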
Algorithm 2 summarizes the estimation process of the diversity of nodes. In CAE, the value of λ is set as the two times of the number of nodes (i.e., 2|Y|) when the diversity D satisfies D < 1.0e−6. If the number of nodes becomes smaller than λ/2 after a node deletion process, λ is calculated again in line 5 of Algorithm 1 using Algorithm 2.
As shown in lines 3-4 of Algorithm 1, the first λ/2 data points (i.e., {x_1, x_2, ..., x_{λ/2}}) directly become nodes, i.e., Y = {y_1, y_2, ..., y_{λ/2}} where y_k = x_k (k = 1, 2, ..., λ/2). In addition, the bandwidth for the Gaussian kernel in CIM is calculated by (7) and (10). The value of λ is automatically updated by Algorithm 2 in the proposed CAE algorithm. In an active node set A, λ nodes are stored. When a new node is added to A, an old node is removed to maintain the active node set size as λ. The addition of a new node and the removal of an old node are explained later.
Calculation of Similarity Threshold
The similarity threshold $V_{\mathrm{threshold}}$ is calculated as the average of the minimum pairwise CIM values in the active node set $\mathcal{A}$ as follows:

$$V_{\mathrm{threshold}} = \frac{1}{\lambda} \sum_{i=1}^{\lambda} \min_{j \neq i} \mathrm{CIM}\left(\mathbf{y}_i, \mathbf{y}_j, \sigma_i\right), \quad \mathbf{y}_i, \mathbf{y}_j \in \mathcal{A}, \qquad (11)$$

where $\mathcal{S}$ is a set of bandwidths of the Gaussian kernel in CIM for each node in $\mathcal{A}$, which are calculated by using (7) and (10) when a new node is created.
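The threshold computation can be sketched in Python as follows; a single scalar bandwidth sigma stands in for the per-node bandwidth set S:

```python
import math

def _cim(x, y, sigma):
    """Correntropy-induced metric (Gaussian kernel, no normalizing coefficient)."""
    ks = [math.exp(-((a - b) ** 2) / (2.0 * sigma ** 2)) for a, b in zip(x, y)]
    return math.sqrt(1.0 - sum(ks) / len(ks))

def similarity_threshold(active_nodes, sigma):
    """V_threshold: average, over the active nodes, of each node's
    minimum CIM distance to any other active node."""
    mins = []
    for i, a in enumerate(active_nodes):
        dists = [_cim(a, b, sigma) for j, b in enumerate(active_nodes) if j != i]
        mins.append(min(dists))
    return sum(mins) / len(mins)
```

Tightly packed active nodes produce a small threshold (a strict vigilance test), while well-separated nodes relax it.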
Selection of Winner Nodes
During the learning process of CAE, every time a data point $\mathbf{x}$ is given, the two nodes whose states are most similar to $\mathbf{x}$ are selected from $\mathcal{Y}$, namely the 1st winner node $\mathbf{y}_{s_1}$ and the 2nd winner node $\mathbf{y}_{s_2}$. The winner nodes are determined based on the value of CIM in line 9 of Algorithm 1 as follows:

$$s_1 = \arg\min_{k} \mathrm{CIM}\left(\mathbf{x}, \mathbf{y}_k, \sigma_k\right), \qquad (12)$$

$$s_2 = \arg\min_{k \neq s_1} \mathrm{CIM}\left(\mathbf{x}, \mathbf{y}_k, \sigma_k\right), \qquad (13)$$

where $s_1$ and $s_2$ denote the indexes of the 1st and 2nd winner nodes, respectively, and $\mathcal{S} = \{\sigma_1, \sigma_2, \ldots, \sigma_{|\mathcal{Y}|}\}$ is a set of bandwidths of the Gaussian kernel in CIM corresponding to the set of nodes $\mathcal{Y}$.
Note that the 1st winner node y s1 becomes a new active node, and the oldest node in the active node set A (i.e., λ nodes in Y) is replaced by the new one.
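Winner selection is then two nested argmin operations over CIM distances. A minimal Python sketch with a single shared bandwidth (the paper uses a per-node bandwidth):

```python
import math

def _cim(x, y, sigma):
    """Correntropy-induced metric (Gaussian kernel, no normalizing coefficient)."""
    ks = [math.exp(-((a - b) ** 2) / (2.0 * sigma ** 2)) for a, b in zip(x, y)]
    return math.sqrt(1.0 - sum(ks) / len(ks))

def select_winners(x, nodes, sigma):
    """Return the indexes (s1, s2) of the two nodes closest to x under CIM."""
    dists = [_cim(x, y, sigma) for y in nodes]
    order = sorted(range(len(nodes)), key=dists.__getitem__)
    return order[0], order[1]
```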
Vigilance Test
Similarities between the data point x and the 1st and 2nd winner nodes are defined in line 10 of Algorithm 1 as inputs for Algorithm 3.
The vigilance test classifies the relationship between the data point $\mathbf{x}$ and the two winner nodes into three cases by using the similarity threshold $V_{\mathrm{threshold}}$:

• Case I: The similarity between the data point $\mathbf{x}$ and the 1st winner node $\mathbf{y}_{s_1}$ is larger (i.e., less similar) than $V_{\mathrm{threshold}}$, namely:

$$V_{s_1} > V_{\mathrm{threshold}}. \qquad (16)$$

• Case II: The similarity between the data point $\mathbf{x}$ and the 1st winner node $\mathbf{y}_{s_1}$ is smaller (i.e., more similar) than $V_{\mathrm{threshold}}$, and the similarity between $\mathbf{x}$ and the 2nd winner node $\mathbf{y}_{s_2}$ is larger (i.e., less similar) than $V_{\mathrm{threshold}}$, namely:

$$V_{s_1} \le V_{\mathrm{threshold}} < V_{s_2}. \qquad (17)$$

• Case III: The similarities between the data point $\mathbf{x}$ and the 1st and 2nd winner nodes are both smaller (i.e., more similar) than $V_{\mathrm{threshold}}$, namely:

$$V_{s_1} \le V_{\mathrm{threshold}} \ \ \text{and} \ \ V_{s_2} \le V_{\mathrm{threshold}}. \qquad (18)$$
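Because the winners are ordered by similarity, the three-way test reduces to two comparisons. A minimal sketch:

```python
def vigilance_case(v_s1, v_s2, v_threshold):
    """Classify a data point against its two winners: returns 'I', 'II', or 'III'.

    v_s1 and v_s2 are the CIM values of the 1st and 2nd winner nodes
    (v_s1 <= v_s2 by construction of the winner selection).
    """
    if v_s1 > v_threshold:
        return 'I'      # even the best match is too dissimilar: create a node
    if v_s2 > v_threshold:
        return 'II'     # only the 1st winner matches: update it
    return 'III'        # both winners match: update and connect them
```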
Creation/Update of Nodes and Edges
Depending on the result of the vigilance test, a different operation is performed. If the data point $\mathbf{x}$ is classified as Case I by the vigilance test (i.e., (16) is satisfied), a new node is created as $\mathbf{y}_{|\mathcal{Y}|+1} = \mathbf{x}$, and the set of nodes is updated as $\mathcal{Y} \leftarrow \mathcal{Y} \cup \{\mathbf{y}_{|\mathcal{Y}|+1}\}$. Here, the node $\mathbf{y}_{|\mathcal{Y}|+1}$ becomes a new active node, and the oldest node in the active node set $\mathcal{A}$ (i.e., $\lambda$ nodes in $\mathcal{Y}$) is replaced by the new one. In addition, a bandwidth $\sigma_{|\mathcal{Y}|+1}$ for $\mathbf{y}_{|\mathcal{Y}|+1}$ is calculated by (7) and (10) with the active node set $\mathcal{A}$, and the number of winning counts of $\mathbf{y}_{|\mathcal{Y}|+1}$ is initialized as $M_{|\mathcal{Y}|+1} = 1$.

If the data point $\mathbf{x}$ is classified as Case II by the vigilance test (i.e., (17) is satisfied), first, the number of winning counts of $\mathbf{y}_{s_1}$ is updated as follows:

$$M_{s_1} \leftarrow M_{s_1} + 1. \qquad (19)$$

Then, $\mathbf{y}_{s_1}$ is updated as follows:

$$\mathbf{y}_{s_1} \leftarrow \mathbf{y}_{s_1} + \frac{1}{M_{s_1}}\left(\mathbf{x} - \mathbf{y}_{s_1}\right). \qquad (20)$$

Here, the node $\mathbf{y}_{s_1}$ becomes a new active node, and the oldest node in the active node set $\mathcal{A}$ (i.e., $\lambda$ nodes in $\mathcal{Y}$) is replaced by the new one.
When updating the node, the difference between x and y is divided by M s1 . Thus, the change of the node position is smaller when M s1 is larger. This is because the information around a node, where data points are frequently given, is important and should be held by the node.
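This count-weighted update, where the step size 1/M_{s1} shrinks as the node wins more often, can be sketched as:

```python
def update_winner(y, M, x):
    """Case II/III node update: increment the win count, then move the
    node toward x with step size 1/M, so frequently-winning nodes move less."""
    M += 1
    y = [yi + (xi - yi) / M for yi, xi in zip(y, x)]
    return y, M
```

After one win the node jumps halfway to x; after many wins it barely moves, which preserves the information held by nodes in dense regions.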
The age of each edge connected to the 1st winner node $\mathbf{y}_{s_1}$ is also updated as follows:

$$a(\mathbf{y}_{s_1}, \mathbf{y}_k) \leftarrow a(\mathbf{y}_{s_1}, \mathbf{y}_k) + 1 \quad (\forall \mathbf{y}_k \in \mathcal{N}_{s_1}), \qquad (21)$$

where $\mathcal{N}_{s_1}$ is a set of all neighbor nodes of the node $\mathbf{y}_{s_1}$. If the data point $\mathbf{x}$ is classified as Case III by the vigilance test (i.e., (18) is satisfied), the same operations as in Case II (i.e., (19)-(21)) are performed. In addition, the neighbor nodes of $\mathbf{y}_{s_1}$ are updated as follows:

$$\mathcal{N}_{s_1} \leftarrow \mathcal{N}_{s_1} \cup \{\mathbf{y}_{s_2}\}. \qquad (22)$$

In Case III, moreover, if there is an edge between $\mathbf{y}_{s_1}$ and $\mathbf{y}_{s_2}$, the age of the edge is reset as follows:

$$a(\mathbf{y}_{s_1}, \mathbf{y}_{s_2}) = 0. \qquad (23)$$

In the case that there is no edge between $\mathbf{y}_{s_1}$ and $\mathbf{y}_{s_2}$, a new edge is defined with its age initialized by (23).
Apart from the above operations in Cases I-III, the nodes with no edges are deleted (and removed from the active node set A) every λ data points for the noise reduction purpose (e.g., the node deletion interval is the presentation of λ data points), which is performed in lines 13-14 of Algorithm 1.
With respect to the active node set A, its update rules are summarized as follows. In Case I, a new node is directly created by the data point x and added to A. In Case II and Case III, the updated winner node in (20) is added to A. In all cases, the oldest active node is removed from A. Then, in lines 13-14 of Algorithm 1, all active nodes with no edges are removed. After this removal procedure, the number of active nodes can be smaller than λ.
Algorithm 3: Update nodes and edges
  Input: a data point x, a set of nodes Y, a set of active nodes A, a set of bandwidths of a kernel function S, a set of ages of edges E, a set of winning counts M, the 1st and 2nd winner nodes y_{s1}, y_{s2}, the similarities V_{s1}, V_{s2} between x and y_{s1}, y_{s2}, the similarity threshold V_threshold.
  Output: a set of nodes Y, a set of active nodes A, a set of bandwidths of a kernel function S, a set of ages of edges E, a set of winning counts M.
  1   if V_{s1} > V_threshold then  // Case I
  2       Create a node as y_{|Y|+1} = x, and update Y ← Y ∪ {y_{|Y|+1}}.
  3       Calculate a bandwidth σ_{|Y|+1}, and initialize M_{|Y|+1} = 1.
  4       Update the active node set A.
  5   else  // Case II and Case III
  6       Update the winning count M_{s1} and the node y_{s1}.
  7       Update the active node set A.
  8       for y_k ∈ N_{s1} do
  9           a(y_{s1}, y_k) ← a(y_{s1}, y_k) + 1.
  10      if V_{s2} ≤ V_threshold then  // Case III
  11          Update N_{s1} with y_{s2}, and reset a(y_{s1}, y_{s2}) = 0.
Algorithm 3 summarizes the creation/update processes for nodes and edges.
Estimation of Edge Deletion Threshold
CAE estimates an edge deletion threshold based on the ages of the current edges and the deleted edges, which is inspired by the edge deletion mechanism of SOINN+ [19].
The edge deletion threshold $a_{\max}$ is defined as follows:

$$a_{\max} = \frac{|\alpha_{\mathrm{del}}|\,\bar{\alpha}_{\mathrm{del}} + |\alpha|\,a_{\mathrm{thr}}}{|\alpha_{\mathrm{del}}| + |\alpha|}, \qquad (24)$$

where $\alpha_{\mathrm{del}}$ is a set of ages of all the deleted edges during the learning process, and $|\alpha_{\mathrm{del}}|$ is the number of elements in $\alpha_{\mathrm{del}}$. $\bar{\alpha}_{\mathrm{del}}$ is the arithmetic mean of $\alpha_{\mathrm{del}}$. $\alpha$ is a set of ages of edges which connect to $\mathbf{y}_{s_1}$ ($\alpha \subset \mathcal{E}$), and $|\alpha|$ is the number of elements in $\alpha$. The coefficient $a_{\mathrm{thr}}$ is defined as follows:

$$a_{\mathrm{thr}} = \alpha_{0.75} + \mathrm{IQR}(\alpha), \qquad (25)$$

where $\alpha_{0.75}$ is the 75th percentile of the elements in $\alpha$, and $\mathrm{IQR}(\alpha)$ is the interquartile range.
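A sketch of this estimate follows. The robust fence a_thr (75th percentile plus one interquartile range) is as described in the text; combining it with the mean age of previously deleted edges by a size-weighted average is an assumption made for this sketch:

```python
import numpy as np

def edge_deletion_threshold(deleted_ages, winner_edge_ages):
    """Estimate the edge deletion threshold a_max.

    a_thr is a robust upper fence (75th percentile + 1 * IQR) on the
    ages of edges touching the winner; CAE uses coefficient 1 where
    SOINN+ uses 2. The size-weighted combination with the mean deleted
    age is an illustrative assumption.
    """
    a = np.asarray(winner_edge_ages, dtype=float)
    q75, q25 = np.percentile(a, 75), np.percentile(a, 25)
    a_thr = q75 + (q75 - q25)
    if len(deleted_ages) == 0:
        return a_thr            # no deletion history yet: fall back to the fence
    d = np.asarray(deleted_ages, dtype=float)
    return (d.size * d.mean() + a.size * a_thr) / (d.size + a.size)
```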
Algorithm 4: Estimate edge deletion threshold
  Input: the 1st winner node y_{s1}, a set of ages of edges E, a set of ages of deleted edges α_del.
  Output: the edge deletion threshold a_max.
  1   α ← a set of ages of edges which connect to y_{s1}.
  2   Calculate the coefficient a_thr by (25).
  3   Calculate the edge deletion threshold a_max by (24).

Algorithm 5: Delete edges
  Input: the edge deletion threshold a_max, the 1st winner node y_{s1}, a set of neighbors of the 1st winner node N_{s1}, a set of ages of deleted edges α_del.
  Output: a set of ages of edges E, a set of ages of deleted edges α_del.
  1   for y_k ∈ N_{s1} do
  2       if a(y_{s1}, y_k) > a_max then
  3           Delete the edge between y_{s1} and y_k.
  4           Update α_del with the age of the deleted edge.
The edge deletion threshold a max is updated each time the age of an edge increases. Algorithm 4 summarizes the estimation process of the edge deletion threshold a max .
The differences between the above method and SOINN+ are summarized as follows.
• The coefficient of IQR(α) in (25) is set to 2 in SOINN+, while it is set to 1 in CAE. The motivation for this change is that an ART-based topological clustering algorithm tends to create fewer edges than a SOINN-based algorithm, especially in the early stage of learning. Therefore, the coefficient of IQR(α) in (25) is set to 1 to weaken the influence of a_thr while emphasizing the information of deleted edges.
• SOINN+ considers all edges that can reach y_{s1}, while CAE only considers edges that connect directly to y_{s1}. The motivation for this change is to reduce computation time. SOINN+ searches all the nodes and edges connected to y_{s1} whenever a_max is estimated. In contrast, CAE only uses edges connected to y_{s1} (i.e., line 2 in Algorithm 4). Thus, the computation speed of CAE is faster than that of SOINN+.
Deletion of Edges
If there is an edge whose age is greater than the edge deletion threshold a max , the edge is deleted and a set of ages of deleted edges α del is updated.
Algorithm 5 summarizes the edge deletion process.
SIMULATION EXPERIMENTS
In this section, the performance of CAE is evaluated from various perspectives compared with the state-of-the-art clustering algorithms. First, the clustering performance of CAE is evaluated qualitatively and quantitatively by using a two-dimensional synthetic dataset in stationary and nonstationary environments. Next, the clustering performance of CAE is evaluated quantitatively by using real-world datasets. In this case, the clustering performance is evaluated indirectly by using the clustering results as classifiers to perform classification tasks. Third, we analyze and discuss the validity of the number of nodes which is automatically estimated by the DPP-based criterion incorporating CIM. Finally, we analyze the computational complexity of CAE. Note that all the simulation experiments are carried out on Matlab 2020a with a 2.2GHz Xeon Gold 6238R processor and 768GB RAM.
Compared Algorithms
We compare six algorithms, namely AutoCloud [44], ASOINN [5], SOINN+ [19], TCA [11], CAEA [12], and CAE. Note that AutoCloud organizes clusters by using fuzzy concepts, i.e., the algorithm allows each data point to belong to multiple clusters simultaneously. ASOINN and SOINN+ are GNG-based algorithms, while TCA, CAEA, and CAE are ART-based algorithms. The source code of AutoCloud 1 , ASOINN 2 , SOINN+ 3 , TCA 4 , and CAEA 5 is provided by the authors of the original papers. The source code of CAE is available on GitHub 6 . SOINN+ and CAE have no parameters, while AutoCloud, ASOINN, TCA, and CAEA have parameters that affect their clustering performance. We use grid search to specify the parameter values of those algorithms. Table 2 summarizes the range of the grid search for the parameters in each algorithm. These parameters and their ranges are based on the experiments in the original paper of each algorithm. We use the Euclidean distance as the distance function in AutoCloud, as in the original paper [44].
Evaluation by using Synthetic Dataset
First, we evaluate the clustering performance of CAE by using a two-dimensional synthetic dataset in the stationary and non-stationary environments. Fig. 2 shows the dataset in the non-stationary environment, where the data points in Fig. 1 are divided into six streams. Each stream is given in sequential order, as in streams #1 to #6. During our experiments, the data points in each dataset are presented to each algorithm only once without preprocessing. In the stationary environment, the data points are randomly selected from the entire dataset. In the non-stationary environment, the data points are randomly selected from a specific distribution in the dataset, and the distribution is shifted sequentially. Since CAE is a clustering algorithm, we use the same data points for training and testing, i.e., an algorithm is trained by all the data points of a dataset and tested by the same data points as the training data.
Results
During grid search, each algorithm is trained by using all the data points, and the Normalized Mutual Information (NMI) [54] score is calculated by using the same data points as for training. We repeat the evaluation 15 times with data points selected by different random seeds. The result of the parameter setting that gives the median NMI score among the 15 runs is used for comparison. Table 3 summarizes the results of grid search for the synthetic dataset. It can be seen that the parameter values for each algorithm depend on the environment. Fig. 3 shows the visualization of self-organizing results in the stationary environment. The figures are the results of the trial in which a median NMI is obtained. AutoCloud creates and merges data clouds (i.e., cluster-like granular structures) without edge information, and thus its clustering result looks different from those of all the other algorithms. In addition, AutoCloud is originally proposed as an online learning algorithm; therefore, its clustering result in the stationary environment is not well-organized. ASOINN and SOINN+ tend to create disconnected topological networks. On the other hand, thanks to properties of ART-based algorithms (e.g., a fixed similarity threshold value), TCA, CAEA, and CAE successfully define well-organized topological networks. Comparing CAE with TCA and CAEA, TCA and CAEA tend to generate a larger number of nodes than CAE. In Fig. 3, we can also see that there are some isolated nodes in CAE. This is because the number of presented data points is, in general, not a multiple of the deletion cycle (i.e., because the deletion process in line 14 of Algorithm 1 is, in general, not applied just before the termination of the algorithm for performance evaluation). This phenomenon also exists in SOINN+ (see Fig. 6). Moreover, the same phenomenon is also observed in TCA and CAEA depending on the deletion cycle of isolated nodes.
A simple solution for avoiding this phenomenon is to delete isolated nodes after the learning procedure. However, since these algorithms aim for continual learning, they prepare for future learning without removing isolated nodes after the current learning procedure. Therefore, we consider that this is not a drawback for these algorithms.
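The NMI score used for model selection above can be computed directly from the label contingency table. A stdlib-only sketch follows, using the geometric-mean normalization (one of several common variants, so values may differ slightly from other implementations):

```python
from collections import Counter
from math import log, sqrt

def nmi(labels_true, labels_pred):
    """Normalized Mutual Information between two labelings of the same points."""
    n = len(labels_true)
    pa = Counter(labels_true)                      # marginal counts, labeling A
    pb = Counter(labels_pred)                      # marginal counts, labeling B
    pab = Counter(zip(labels_true, labels_pred))   # joint counts
    mi = 0.0
    for (a, b), nab in pab.items():
        mi += (nab / n) * log(n * nab / (pa[a] * pb[b]))
    ha = -sum((c / n) * log(c / n) for c in pa.values())  # entropy H(A)
    hb = -sum((c / n) * log(c / n) for c in pb.values())  # entropy H(B)
    if ha == 0.0 or hb == 0.0:
        return 1.0 if ha == hb else 0.0
    return mi / sqrt(ha * hb)
```

Because NMI is invariant to label permutation, a clustering that merely renames the classes still scores 1.0, which is exactly the property needed when comparing unsupervised results against class labels.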
Figs. 4-9 show the visualization of self-organizing results in the non-stationary environment. The figures are the results of the trial in which a median NMI is obtained. With respect to AutoCloud, the clustering performance is better than in the stationary environment. However, it is difficult for AutoCloud to handle non-Gaussian distributions. ASOINN tends to generate topological networks from noise data points. Besides, ASOINN tends to generate many more nodes than in the stationary environment. With respect to SOINN+, two clusters are eventually connected. In addition, as in the case of ASOINN, SOINN+ also tends to generate many more nodes than in the stationary environment. In contrast, TCA, CAEA, and CAE generate topological networks that are similar to the results obtained in the stationary environment. In Fig. 9, although CAE creates several isolated nodes, none of the isolated nodes overlaps with the topological networks. On the other hand, Fig. 6 shows that the isolated nodes of SOINN+ overlap with the topological networks, indicating that the networks are not properly generated.
Tables 4 and 5 summarize the results of quantitative comparisons on the synthetic dataset in the stationary and non-stationary environments, respectively. Here, we compare the mean values of NMI, the Adjusted Rand Index (ARI) [55], and the number of nodes and clusters of each algorithm among 15 runs. Algorithm comparisons based on NMI and ARI (in Tables 4 and 5) show that the clustering performance of each algorithm is comparable in both environments. However, the number of nodes of ASOINN and SOINN+ differs greatly depending on the environment, while TCA, CAEA, and CAE have a similar number of nodes in each environment.
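ARI, the second metric reported in Tables 4 and 5, is the chance-corrected Rand index. A stdlib-only sketch of the standard formula, (Index − Expected) / (Max − Expected), computed from the contingency table:

```python
from collections import Counter
from math import comb

def ari(labels_true, labels_pred):
    """Adjusted Rand Index between two labelings of the same points."""
    n = len(labels_true)
    contingency = Counter(zip(labels_true, labels_pred))
    a = Counter(labels_true)    # row sums of the contingency table
    b = Counter(labels_pred)    # column sums of the contingency table
    index = sum(comb(v, 2) for v in contingency.values())
    sum_a = sum(comb(v, 2) for v in a.values())
    sum_b = sum(comb(v, 2) for v in b.values())
    expected = sum_a * sum_b / comb(n, 2)
    max_index = (sum_a + sum_b) / 2
    if max_index == expected:   # degenerate case, e.g. a single cluster on both sides
        return 1.0
    return (index - expected) / (max_index - expected)
```

Unlike NMI, ARI can be negative when a clustering agrees with the reference labels less than random assignment would, which is why both metrics are usually reported together.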
The above-mentioned observations of the results in this section indicate that the stability of the self-organizing performance of TCA, CAEA, and CAE is superior to that of AutoCloud, ASOINN, and SOINN+.
Evaluation by using Real-world Datasets
Next, we indirectly evaluate the clustering performance of CAE via classification tasks, which use a clustering result as a classifier, on real-world datasets in the stationary and non-stationary environments.
Dataset
We use 10 real-world datasets from public repositories [56], [57]. Table 6 summarizes statistics of the 10 real-world datasets. During our experiments, all data points in each dataset are presented to each algorithm only once without pre-processing. As in section 5.2.2, in the stationary environment, all data points in each dataset are presented to each algorithm in random order. In the non-stationary environment, the data points are randomly selected from a specific class in the dataset, and the class is shifted sequentially. In addition, we use the same data points for training and testing, i.e., an algorithm is trained by all data points in each dataset and tested by the same data points as the training data. Note that the class information of each data point is not used in the training phase.
Results
During grid search, each algorithm is trained on each dataset by using all data points, and the NMI score is calculated by using the same data points as for training. In each parameter setting of grid search, we repeat the evaluation 20 times (i.e., 2×10-fold cross-validation) with different random seeds. The result of the parameter setting that gives the highest NMI score is used for comparisons. Tables 7 and 8 summarize the results of grid search for the real-world datasets in the stationary and non-stationary environments, respectively. It can be seen that the parameter values for each algorithm depend on the datasets and the environment. Note that the proposed CAE has no parameter to be pre-specified.
To emphasize the difficulty of using a single parameter specification for all datasets, we use CAEA with fixed parameter values, called CAEA(mean), as an additional compared algorithm. Tables 7 and 8 show the parameter values of CAEA(mean) for the stationary and non-stationary environments, respectively. These parameters are specified by using the mean values of the parameters of CAEA over all datasets in each environment. Tables 9 and 10 show the results of classification performance on the 10 real-world datasets in the stationary and non-stationary environments, respectively. The best value in each metric is indicated by bold, and the values in parentheses indicate the standard deviation. A number to the right of each evaluation metric is the average rank of an algorithm over 20 evaluations. The smaller the rank, the better the metric score. In addition, a darker tone in a cell corresponds to a smaller rank (i.e., better evaluation).
As general trends, CAEA shows better classification performance than the other algorithms. In contrast, the classification performance of CAEA(mean) deteriorates compared with CAEA. This indicates that CAEA is very sensitive to parameter specifications. Although CAE does not show the best value of NMI and ARI, it generally shows better clustering performance than the other algorithms except for CAEA with careful parameter specifications. The parameter-free algorithms (i.e., SOINN+ and CAE) tend to create a large number of nodes and clusters. In particular, SOINN+ creates a very large number of nodes and clusters when the number of data points is large (e.g., the Skin dataset with 245,057 data points). This is a disadvantage from the viewpoint of data aggregation/grouping, which is the general purpose of clustering. Since AutoCloud is specialized as an online learning algorithm, its classification performance is significantly low in the stationary environment. In addition, AutoCloud could not build a predictive model within 12 hours under the available computational resources in the
case of datasets with a large number of data points (i.e., the Letter and Skin datasets). For statistical comparisons, the Friedman test and Nemenyi post-hoc analysis [58] are used. The Friedman test is used to test the null hypothesis that all algorithms perform equally. If the null hypothesis is rejected, the Nemenyi post-hoc analysis is then conducted. The Nemenyi post-hoc analysis is used for all pairwise comparisons based on the ranks of results on each evaluation metric over all datasets. The difference in the performance between two algorithms is treated as statistically significant if the p-value defined by the Nemenyi post-hoc analysis is smaller than the significance level. Here, the null hypothesis is rejected at the significance level of 0.05 both in the Friedman test and the Nemenyi post-hoc analysis.
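The two quantities behind this procedure follow standard formulas. A stdlib-only sketch is below; the Studentized-range constant q_alpha must be taken from a table and is left as an input here rather than hard-coded:

```python
from math import sqrt

def friedman_statistic(rank_matrix):
    """Friedman chi-square statistic from a (datasets x algorithms) rank matrix.

    chi2_F = 12N / (k(k+1)) * [sum_j R_j^2 - k(k+1)^2 / 4],
    where R_j is the average rank of algorithm j over N datasets.
    """
    n = len(rank_matrix)        # number of datasets
    k = len(rank_matrix[0])     # number of algorithms
    avg_ranks = [sum(row[j] for row in rank_matrix) / n for j in range(k)]
    chi2 = 12 * n / (k * (k + 1)) * (
        sum(r * r for r in avg_ranks) - k * (k + 1) ** 2 / 4)
    return chi2, avg_ranks

def nemenyi_cd(k, n, q_alpha):
    """Nemenyi critical distance CD = q_alpha * sqrt(k(k+1) / (6N)).

    Two algorithms whose average ranks differ by more than CD are
    significantly different; this is the red line in a CD diagram."""
    return q_alpha * sqrt(k * (k + 1) / (6 * n))
```

The critical difference diagrams in Figs. 10-12 are a direct visualization of these average ranks together with the CD bar.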
Figs. 10-12 show critical difference diagrams based on the NMI and ARI results of each algorithm, which are defined by the Nemenyi post-hoc analysis. A lower average rank indicates a better algorithm, i.e., it appears on the right side of each diagram. In theory, algorithms within a critical distance (i.e., a red line) do not have a statistically significant difference [58]. Fig. 10 shows a critical difference diagram based on the overall results (i.e., all the results of NMI and ARI in the stationary and non-stationary environments). CAEA is the lowest-rank (i.e., best) algorithm with a statistically significant difference. However, the rank of CAEA(mean) deteriorates greatly compared with CAEA. CAE is the best algorithm with a statistically significant difference among the parameter-free/fixed algorithms (i.e., SOINN+ and CAEA(mean)). Moreover, CAE shows a lower rank, with a statistically significant difference, than ASOINN, TCA, and AutoCloud, whose parameters are specified by grid search. In order to discuss the features of each algorithm in detail, Figs. 11 and 12 show critical difference diagrams based on the results of the stationary and non-stationary environments, respectively. With respect to CAEA, CAE, and TCA, these algorithms generally show lower ranks in both environments. On the other hand, SOINN+ and CAEA(mean) are less stable algorithms because their results vary greatly depending on the environment.
The above-mentioned observations suggest that although a carefully tuned CAEA performs better than CAE, the advantage of CAE is demonstrated by its high and stable clustering performance on various datasets without specifying any parameters.
Validity of the Number of Active Nodes
From the experiments with the synthetic and real-world datasets, it can be considered that CAE has clustering performance superior to the state-of-the-art parameter-free/fixed algorithms (i.e., SOINN+ and CAEA(mean)). In general, for self-organizing clustering algorithms, a good specification of the similarity threshold provides good clustering performance. In CAE, the number of active nodes λ plays an important role in the calculation of the similarity threshold, which is estimated by the DPP-based criterion incorporating CIM.
This section analyzes and discusses the validity of the number of active nodes and its effect on clustering performance (i.e., NMI) and the number of clusters in CAE. First, we manually set the value of λ (i.e., the number of active nodes), and then perform the training process of CAE without the specification process of λ. Then, we evaluate the trained network to obtain an NMI value. The rest of the experimental settings are the same as in section 5.3. Figs. 13 and 14 show the relationships among the number of active nodes, NMI, and the number of clusters for each dataset in the stationary and non-stationary environments, respectively. A gray bar represents a value of NMI, and a blue line represents the number of clusters. In addition, a red star represents the number of active nodes estimated by the DPP-based criterion incorporating CIM.
Note that in Figs. 13 and 14, the number of clusters in each dataset oscillates. The reason for this phenomenon is as follows: in the case that the number of presented data points is a multiple of the deletion cycle of isolated nodes (i.e., the number of active nodes λ), no isolated nodes remain as clusters after training. In contrast, in the case that the number of presented data points is not a multiple of the deletion cycle λ, at most (λ − 1) isolated nodes remain as clusters after training. As mentioned in section 5.2.2, a simple solution for avoiding this phenomenon is to delete isolated nodes after the learning procedure. However, since these algorithms aim for continual learning, they prepare for future learning without removing isolated nodes after the current learning procedure. Therefore, we consider that this is not a drawback for those algorithms.
For the general purpose of clustering (i.e., data aggregation/grouping), the desired result is a high NMI value with a small number of clusters. From this perspective, in Figs. 13 and 14, the estimation of the number of active nodes performs well except for the Iris dataset. In particular, the estimated number of active nodes for the Ionosphere, Image Segmentation, Phoneme, Texture, and Skin datasets achieves almost the highest NMI value in both the stationary and non-stationary environments.
The above-mentioned observations suggest that the DPP-based criterion incorporating CIM and the calculation of the similarity threshold V threshold as in (11) practically works well on various datasets.
Computational Complexity
For computational complexity analysis, we use the notations in Table 1, namely d is the dimensionality of a data point, n is the number of data points, K is the number of nodes, λ is the number of active nodes, and |E| is the number of elements in the ages of edges set E.
The computational complexity of each process in CAE is as follows: computing the bandwidth of a kernel function in CIM is O(d); computing CIM is O(ndK) (line 9 in Alg. 1); sorting the results of CIM is O(K log K) (line 9 in Alg. 1); calculating a pairwise similarity matrix by using CIM is O((λ/2)^2 dK) (line 1 in Alg. 2); calculating the determinant of the pairwise similarity matrix is O((λ/2)^3) (line 2 in Alg. 2); and estimating the edge deletion threshold is O(|E| log |E|) (Alg. 4).
In general, λ ≪ n, K ≪ n, and λ < K. As a result, the computational complexity of CAE is O(ndK).
CONCLUDING REMARKS
This paper proposed a new parameter-free ART-based topological clustering algorithm capable of continual learning by introducing two parameter estimation processes, namely the estimation of the number of active nodes for calculating a similarity threshold by a DPP-based criterion incorporating CIM, and the estimation of a node deletion threshold based on the age of each edge. Empirical studies with the synthetic and real-world datasets showed that the clustering performance of CAE is superior to that of the state-of-the-art parameter-free/fixed algorithms while maintaining continual learning ability. Thanks to these capabilities, we can expect CAE to have high utility and potential as a data preprocessing method in various applications.
In real-world applications related to health, finance, and medical fields, a dataset often contains numerical and categorical attributes simultaneously. Such a dataset is called a mixed dataset [59]. A future research topic is to develop a parameter-free clustering algorithm that can handle a mixed dataset while maintaining continual learning capabilities.
"year": 2023,
"sha1": "8b2adac2a14d776c0d254bbe7b16de3fc826dea2",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "8b2adac2a14d776c0d254bbe7b16de3fc826dea2",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Lesson of the month: Large vessel vasculitis: A rare cause of transaminitis
We present the case of a 73-year-old male with pyrexia of unknown origin (PUO). He was a returned traveller from Southern Africa and underwent extensive investigation to rule out an infective cause. This was mostly unrevealing but there was a notable transaminitis (ALT predominant) with normal bilirubin level. He showed no serological or clinical improvement despite antibiotic treatment. Subsequent CT-PET showed high mural uptake in the thoracic and abdominal aorta and its major branches, confirming the diagnosis of Large Vessel Vasculitis (LVV). This case highlights the importance of considering LVV in patients with PUO and with transaminitis.
Case presentation
A 73-year-old man presented to hospital with a 4-week history of fever, night sweats, weight loss and frontal headache. There was no jaw claudication or visual disturbance. He had recently returned from holiday in Southern Africa. Reported alcohol use was within medically recommended limits. Systems enquiry revealed no other focal symptoms. Clinical examination revealed tenderness over the frontal sinus, but the rest of the cardiac, respiratory, GI and neurological examination was normal.
Inflammatory markers were raised with CRP 159 mg/L (reference range (RR): 0-5 mg/L) and ESR > 100 mm in 1 h (RR: 0-30 mm in 1 h). Two sets of blood cultures showed no growth, as did a urine culture. The procalcitonin result suggested a low risk of bacterial infection. COVID-19 PCR was negative, and HIV and EBV serology were negative. TB interferon-gamma release assay was negative, as were thick and thin films for malarial parasites. Liver function tests revealed ALT 364 U/L (RR: 0-50 U/L), ALP 156 U/L (RR: 30-120 U/L) and serum bilirubin within the normal range. Viral hepatitis screen was negative. ANA and ANCA profile were negative. Lactate dehydrogenase level was 619 U/L (RR: 208-378 U/L).
CT head with contrast revealed minor acute pansinusitis only. Abdominal USS showed fatty liver changes only. Contrast-enhanced CT of the thorax, abdomen and pelvis showed nothing to explain his presentation.
At admission, broad-spectrum antibiotic therapy was initiated. LFT derangement was seen on admission bloods, so antibiotic-induced hepatitis was unlikely. Several days into his admission, persistent fever and raised inflammatory markers prompted referral to Rheumatology. A CT-PET scan was organised, which demonstrated high mural uptake in the thoracic and abdominal aorta and its major branches, particularly the subclavian, carotid and vertebral arteries, confirming the diagnosis of LVV.
High-dose glucocorticoids and methotrexate were started, with rapid clinical and serological improvement. Both inflammatory markers and liver function tests had normalised one month later (Figs. 1 and 2).
Discussion
Giant Cell Arteritis (GCA) and Takayasu Arteritis (TA) come under the overarching category of Large Vessel Vasculitis. Systemic symptoms such as fever, malaise and weight loss are common to all these conditions. [1] The diagnostic challenge occurs when patients present with non-specific constitutional symptoms alone.
Liver involvement is a rare but recognised association in LVV. Few studies have described cholestatic hepatitis secondary to LVV. [3][4] Serum bilirubin is typically unchanged. [2] However, our case is unique in that the LFT derangement was of a hepatocellular pattern.
Radiological abnormalities in the liver of LVV patients have also been described. MRCP performed in one patient demonstrated a beading-type appearance of the intrahepatic ducts, a finding consistent with cholangitis. Upon the initiation of treatment, repeat MRCP revealed no evidence of ductal abnormality. [5] Other images of the liver have also revealed granulomatous hepatitis [6] and cavernous hemangiomas. [7] The histological changes observed in the liver of patients with LVV are also variable. Although usually normal, there have been reported cases of hepatocyte necrosis with portal and lobular inflammation. [2] Early complications of GCA include ischaemic optic neuropathy, which can be irreversible. [1] Patients with GCA also have a twofold increased lifetime risk of aortic aneurysm. [2] Therefore, early initiation of glucocorticoids is crucial.
Conclusion
There is a need to highlight the association between LVV and transaminitis. Raising awareness enables diagnosis and, in turn, prompt treatment.
Patient consent
The patient in this study unfortunately passed away. Therefore, written informed consent was obtained from his next of kin.
Fig. 2. Trend of liver function tests pre and post initiation of treatment.
"year": 2024,
"sha1": "df6fa052a791f036615db21222163ec334ede959",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.clinme.2024.100035",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "746812a17c24603a709002fc1c91685f0d267f71",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
A Cross-Efficiency Based Ranking Method for Finding the Most Efficient DMU
In many applications of DEA, ranking of DMUs and finding the most efficient DMU are desirable, as reported by Toloo (2013). In this paper, after introducing an improvement to the measure of cross-efficiency by Jahanshahloo et al. (2011), we develop a new ranking method under the condition of variable returns to scale (VRS). Numerical example illustrates the effectiveness of the proposed cross-efficiency based ranking method and demonstrates the advantages of our proposal, against the other ranking approaches.
Introduction
Data envelopment analysis (DEA) provides a relative efficiency measure to evaluate decision making units (DMUs) with multiple inputs and multiple outputs. While it is an effective approach in identifying the best practice frontier, its flexibility in selecting the input/output weights and its nature of self-evaluation may result in a relatively high number of efficient DMUs. The lack of discrimination between efficient DMUs has been considered an important problem in DEA models, and many researchers have sought to improve the discriminating capabilities of DEA and to fully rank both efficient and inefficient DMUs.
Since these ranking methods have been developed based on some aspects of the production possibility set (PPS), in certain cases, different results are reached in applying the alternative ranking methods. Furthermore, for each method there are problematic areas, for example, infeasibility and instability of the proposed model. Hence, whilst each ranking technique is useful in a specialist area, no methodology can be prescribed as the complete solution to the question of ranking.
To increase the discrimination power of DEA models and make their weights more realistic, cross-efficiency evaluation was suggested by Sexton et al. [1] and was later investigated by Doyle and Green [2,3]. The basic idea of cross-efficiency evaluation is to evaluate the overall efficiencies of the DMUs through both self- and peer-evaluations, and it can usually provide a full ranking for the DMUs to be evaluated. Therefore, it has found a significant number of applications in various fields; see Green et al. [4], Sun and Lu [5], Bao et al. [6], Wu et al. [7,8], and Yang et al. [9].
This paper develops a new ranking method under the condition of VRS, which is based on introducing an improvement to Jahanshahloo et al.'s measure of cross-efficiency [10]. That is, the proposed cross-evaluation method is developed as a BCC extension tool that can be utilized to rank the DMUs using cross-efficiency scores and to identify the most BCC efficient DMU.
The rest of the paper is organized as follows. Section 2 gives a brief introduction to the DEA models. The mathematical foundation of our method to propose a secondary objective function in VRS cross-evaluation is discussed in Section 3. In Section 4, a numerical example illustrates the effectiveness of the proposed cross-efficiency based ranking method. Finally, Section 5 is devoted to concluding remarks.
DEA Background
Consider n DMUs that are evaluated in terms of m inputs and s outputs. Let x_ij and y_rj, i = 1, 2, ..., m, r = 1, 2, ..., s, be the input and output values for DMU_j, j = 1, 2, ..., n. The CCR efficiency of a DMU is measured by

max (u^T y_0) / (v^T x_0)  s.t.  (u^T y_j) / (v^T x_j) ≤ 1, j = 1, ..., n;  u, v ≥ 0,   (1)

where u and v are the output and input weight vectors and y_0 and x_0 are the output and input vectors of the DMU under evaluation, respectively. Model (1) can be transformed to the following LP format, called the input-oriented CCR model:

max Σ_{r=1}^{s} u_r y_r0  s.t.  Σ_{i=1}^{m} v_i x_i0 = 1;  Σ_{r=1}^{s} u_r y_rj − Σ_{i=1}^{m} v_i x_ij ≤ 0, j = 1, ..., n;  u_r, v_i ≥ 0.   (2)

Note that the output-oriented CCR model is as follows:

min Σ_{i=1}^{m} v_i x_i0  s.t.  Σ_{r=1}^{s} u_r y_r0 = 1;  Σ_{i=1}^{m} v_i x_ij − Σ_{r=1}^{s} u_r y_rj ≥ 0, j = 1, ..., n;  u_r, v_i ≥ 0.   (3)

The previous models have the constant returns to scale (CRS) characteristic. Banker et al. [32] proposed the BCC model, which has variable returns to scale (VRS). The LP form of this model in output orientation is as follows:

min Σ_{i=1}^{m} v_i x_i0 − v_0  s.t.  Σ_{r=1}^{s} u_r y_r0 = 1;  Σ_{i=1}^{m} v_i x_ij − Σ_{r=1}^{s} u_r y_rj − v_0 ≥ 0, j = 1, ..., n;  u_r, v_i ≥ 0, v_0 free.   (4)

The optimal objective value of this model is the output-oriented BCC efficiency score of DMU_0. However, in classical DEA models, no preference information is needed and the weights are allowed total flexibility to obtain maximum efficiency. Thompson et al. [33] were the first to study the role of weight restrictions in DEA models. Note that the above DEA models have alternative optimal solutions, and free selection of weights may lead one DMU to have two types of weights: in one type, all of the positive weight is placed on one group of variables, while in the other the weight is symmetrically allocated to all variables. Dimitrov and Sutton [34] proposed a symmetry measure based on the relative weight of each output dimension to all other output dimensions, where d_kl denotes the difference in symmetry between output dimension k and dimension l for all DMUs under evaluation. They minimized the sum of all the d_kl values and then effectively rewarded symmetry with an asymmetry scaling factor α ≥ 0.
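For the special case of a single input and a single output, the CRS efficiency of model (1) collapses to each DMU's output/input ratio divided by the best such ratio, since the weights cancel. A minimal pure-Python sketch with made-up data (not from the paper):

```python
def ccr_single_ratio(inputs, outputs):
    """CCR (CRS) efficiency for one-input/one-output DMUs.

    With a single input and output, the optimal weights cancel and the
    efficiency of DMU j is (y_j / x_j) / max_k (y_k / x_k)."""
    ratios = [y / x for x, y in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# Hypothetical example: three DMUs.
effs = ccr_single_ratio(inputs=[2.0, 4.0, 5.0], outputs=[4.0, 6.0, 5.0])
# ratios are 2.0, 1.5, 1.0 -> efficiencies 1.0, 0.75, 0.5
```

In the general multi-input/multi-output case, the weights no longer cancel and the LP in (2) must actually be solved, one instance per DMU.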
By adding this constraint to model (3), we obtain model (6). But model (6) is not an LP model. Fortunately, as we are minimizing, we may change the equality to ≤, as any optimal solution will have the equality constraint satisfied. With this observation, Dimitrov and Sutton [34] rewrite (6) as model (7), whose objective function value lies in [1, ∞) and which has the same feasibility region as LP model (3). The resulting objective function value of the LP (7) is referred to as the SWAT score, with smaller scores being more desirable. Here, we want to extend the above model to the BCC model and the variable returns to scale environment. It is straightforward that the SWAT model can easily be adapted to the variable returns to scale situation in the BCC model, giving model (8), where the objective function value lies in [1, ∞) and the factor α is a nonnegative factor which determines how much a particular DMU will be penalized for an asymmetric selection of virtual weights. Note that α = 0 is equivalent to the original output-oriented BCC model, and α → ∞ is equivalent to the situation in which all DMUs select equal virtual weights for all outputs. However, the ideal α value can be determined by the decision maker (DM).
BCC Cross-Efficiency Evaluation
Jahanshahloo et al. [10], based on model (7), suggested a secondary goal for cross-efficiency evaluation in a CRS environment. In this section, based on model (8), we propose an improvement to their method to obtain a secondary objective function in VRS cross-evaluation.
Cross-Efficiency for CCR Model.
Let u*_r0 (r = 1, 2, ..., s) and v*_i0 (i = 1, 2, ..., m) be the optimal solution to model (7). Then, E*_00 = Σ_{r=1}^{s} u*_r0 y_r0 is referred to as the CCR efficiency of DMU_0 by self-evaluation. If E*_00 = 1, DMU_0 is CCR efficient; otherwise it is CCR inefficient. Moreover, E_0j = Σ_{r=1}^{s} u*_r0 y_rj / Σ_{i=1}^{m} v*_i0 x_ij is referred to as the efficiency of DMU_j peer-evaluated by DMU_0 (j = 1, 2, ..., n, j ≠ 0). As a result, each DMU has one CCR efficiency and (n − 1) peer-efficiencies, and the CCR cross-efficiency score of DMU_j is calculated as the average of these values over all evaluating DMUs. It is noticed that DEA models may have multiple optimal solutions. This non-uniqueness of the optimal input/output weights would damage the use of the cross-efficiency concept due to the ambiguity in which weights are used for the final results. To resolve this ambiguity, alternative secondary goals in cross-efficiency evaluation have been introduced.
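Once each DMU's weights have been fixed, the peer-evaluation step is just a matrix of ratios averaged column-wise. A stdlib-only sketch with hypothetical weights and data (not the nursing-home example of Section 4):

```python
def cross_efficiency(inputs, outputs, v_weights, u_weights):
    """Cross-efficiency scores: matrix[k][j] is DMU j rated with DMU k's
    weights (u, v); each DMU's final score is the column average over raters."""
    n = len(inputs)
    def eff(k, j):
        num = sum(u * y for u, y in zip(u_weights[k], outputs[j]))
        den = sum(v * x for v, x in zip(v_weights[k], inputs[j]))
        return num / den
    matrix = [[eff(k, j) for j in range(n)] for k in range(n)]
    return [sum(matrix[k][j] for k in range(n)) / n for j in range(n)]

# Two DMUs, one input and one output each (illustrative numbers).
scores = cross_efficiency(
    inputs=[[2.0], [4.0]],
    outputs=[[4.0], [4.0]],
    v_weights=[[1.0], [2.0]],   # hypothetical optimal input weights per rater
    u_weights=[[0.5], [1.0]],   # hypothetical optimal output weights per rater
)
```

The whole point of a secondary goal is to pin down which of the alternative optimal (u, v) pairs each rater feeds into this averaging step.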
Cross-Efficiency for BCC Model.
Let u*_r0, r = 1, 2, ..., s, v*_i0, i = 1, 2, ..., m, and v*_0 be the optimal solution of model (4). Then, E*_00 = Σ_{i=1}^{m} v*_i0 x_i0 − v*_0 is the BCC efficiency of DMU_0 by self-evaluation. The average of all E*_0j is called the BCC cross-efficiency score, which is calculated as in (11) and (12). Note that, because the output-oriented BCC model is used, the DMU with the lower cross-efficiency score has the better rank. However, the weights obtained from model (4) are usually not unique. As a result, the cross-efficiency depends on the particular optimal solution returned by the LP software in use. As discussed before, Jahanshahloo et al. [10] suggested the use of symmetric weights as a secondary goal in CCR cross-efficiency evaluation. Here, to propose a secondary objective function in VRS cross-evaluation, we propose the following algorithm.
Step 2. To select the best weight from the alternative optimal weights of model (4), based on the concept of symmetric weights introduced by model (8), solve LP model (13) to select a suitable weight among the alternative solutions. We add constraints for selecting symmetric weights in the BCC model by minimizing, in the objective function, the sum of the differences in symmetry between output dimensions k and l for DMU_0. Selecting symmetric weights is desirable because, in applications, concentration of the weights on one group of variables may not be acceptable.
Step 3. Compute the BCC cross-efficiency score of each DMU based on formulations (11) and (12), where (u*_0, v*_0, v*_0) is the optimal solution of model (13). Finally, the DMU_0 with the lowest cross-efficiency score is the most BCC efficient DMU.
Note that we can use model (8) instead of Steps 1 and 2 of the above algorithm. As discussed before, α is a nonnegative factor, where α = 0 results in the classical output-oriented BCC model and, for α = 1, model (8) is equivalent to model (13).
However, the ideal value for the parameter α can be determined by the DM. In other words, as discussed in Dimitrov and Sutton [34], instead of (8) and for greater flexibility, we can use model (14), which puts a greater burden on the DM to define the appropriate values. In the next section, we use the proposed BCC cross-evaluation approach for ranking the DMUs in a numerical example and finding the most BCC efficient DMU. To demonstrate the advantages of our proposal against the other approaches, we compare the results to those of the approach proposed by Toloo and Nalchigar [35].
Numerical Example
Six nursing homes are evaluated in terms of two inputs and two outputs. The data set is reported in Table 1. The input variables are x_1: staff hours per day, including nurses and physicians, and x_2: supplies per day, measured in thousands of dollars.
The output variables are $y_1$: total Medicare-plus-Medicaid reimbursed patient-days, and $y_2$: total privately paid patient-days. Using the mixed-integer linear programming model proposed by Toloo and Nalchigar [35], the most BCC efficient DMU is not unique: based on the results reported in Table 2, any of the BCC efficient DMUs can be chosen as the most BCC efficient one. This is due to the fact that the model proposed by Toloo and Nalchigar [35] has alternative optimal solutions.
The results of the proposed model (13) are presented in Table 3. The second column of this table shows the BCC efficiency scores, and the BCC cross-efficiency scores produced by our method are shown in the third column. Based on our procedure, DMU 2 alone is the most BCC efficient unit. For comparison, under the plain BCC model five of the six units are rated efficient, so choosing a single most efficient unit is impossible.
Moreover, Table 4 shows the cross-efficiency scores obtained from model (14) for different values of the penalty parameter, with the rank of each DMU given in parentheses. DMU 2 is the most efficient DMU for all three parameter values, where, as mentioned before, this factor determines how strongly a particular DMU is penalized for an asymmetric choice of virtual weights.
Conclusion
In this paper, we proposed a BCC cross-efficiency model based on symmetric weight selection and, using it, described a procedure for finding the most BCC efficient DMU. Notably, the advantages of our cross-efficiency model are as follows: it is a linear program which is always feasible, it gives a full ranking of the DMUs, and it can identify a single most efficient unit.

Table 1: Data set for six nursing homes.

If $\theta^*_{00} = 1$, DMU$_0$ is called BCC efficient; otherwise it is called BCC inefficient. Moreover, the BCC peer-evaluated efficiency of DMU$_j$ ($j = 1, 2, \dots, n$) by DMU$_0$ is calculated by $\theta^*_{0j} = \bigl(\sum_{i=1}^{m} v^*_{i0} x_{ij} - v^*_0\bigr) / \sum_{r=1}^{s} u^*_{r0} y_{rj}$.

Table 3: Efficiency scores based on the BCC model and the BCC cross-efficiency model. | 2018-12-30T00:16:06.977Z | 2014-04-22T00:00:00.000 | {
"year": 2014,
"sha1": "1698e6be4b3eeafaa6826b721a810b8ea258f628",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/mpe/2014/269768.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "1698e6be4b3eeafaa6826b721a810b8ea258f628",
"s2fieldsofstudy": [
"Mathematics",
"Engineering"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
226560028 | pes2o/s2orc | v3-fos-license | Experimental Research on the Physical and Mechanical Properties of Concrete
with Recycled Plastic Aggregates
In order to study the effect of recycled plastic particles on the physical and mechanical properties of concrete, recycled plastic concretes with 0, 3%, 5% and 7% content (by weight) were designed. The compressive strength, the splitting tensile strength and the change of mass caused by water absorption during curing were measured. The results show that the strength of concrete is increased by adding recycled plastic, and both the compressive strength and the splitting tensile strength are highest when the plastic content is 5%. With the increase of plastic content, the development of early strength slows down. Silane coupling agent plays a positive role in the strength of recycled plastic concrete. Water absorption of the concrete is essentially saturated at an early stage of curing, and the addition of silane coupling agent reduces the porosity of the concrete and lowers its water absorption. Summing up the physical and mechanical properties of recycled plastic concrete, the addition of recycled plastic proved effective for modifying the concrete material. With the amount of recycled plastic controlled, the strength of concrete with recycled plastic aggregates can meet engineering requirements.
Introduction
In recent years, people have paid more and more attention to plastic pollution, and the understanding of environmental hazards has gradually deepened. Polytetrafluoroethylene plastics are widely used in the industries of atomic energy, national defense, aerospace, electronics, electrical, chemical, machinery, instrument, architecture, textile, metal surface treatment, pharmacy, medical treatment, food, metallurgy and smelting, etc., as high and low temperature resistant materials, corrosion-resistant materials, insulation materials, anti-sticking coatings, etc., making them irreplaceable products [1][2][3]. As polytetrafluoroethylene (PTFE) is widely used in these industries, an increase of waste PTFE is inevitable; PTFE waste is produced during the synthesis, processing, secondary processing and application of PTFE. PTFE has excellent performance [1,3], a high price and great recycling value, and its recycling has been highly valued. Scholars all over the world have carried out active research on this problem and put forward various suggestions and methods for the treatment and utilization of waste plastics [4][5][6]. For civil engineers, applying waste plastics, after suitable treatment, to ordinary concrete is a very good choice: it can not only mitigate environmental pollution and support sustainable development, but also improve the performance of concrete.
In the past, many scholars introduced recycled aggregate into concrete to study its basic properties [7][8][9][10][11][12][13][14][15]. Some scholars introduced fiber into concrete to increase the tensile strength and other properties of concrete [16][17][18]. Some scholars introduced some lightweight aggregate into concrete to study the performance of lightweight concrete [19][20][21][22]. At the same time, some experts introduced solid waste plastic particles into concrete [23][24][25][26]. Marzouk et al. [27] introduced waste plastic bottles into concrete, studied the volume density and mechanical properties of the material, and studied the relationship between the mechanical properties and microstructure of the material by means of SEM. The compressive strength and flexural strength of the composite were not affected by the replacement of sand with volume fraction less than 50% by PET with particle size upper limit of 5 mm. Panyakapo et al. [28] introduced the application of thermosetting plastics as admixtures in the mix proportion of lightweight concrete. The plastic not only led to a low dry density concrete, but also a low strength. Foti [29] simply cut waste plastic bottles into fibers and introduced them into concrete to study the possibility of improving the ductility of concrete. Saikia et al. [30] studied the influence of the size and shape of recycled polyethylene terephthalate (PET) aggregate on the mixing and hardening properties of concrete. With the addition of PET aggregate, the compressive strength, splitting tensile strength, elastic modulus and flexural strength of concrete decreased, and with the increase of the content of PET aggregate, the decrease of these properties increased. de Oliveira et al. [31] studied the basic mechanical properties of this fiber-reinforced mortar by adding PET bottle fiber cut by simple machine into the mortar. The addition of PET fiber could improve the flexural strength and toughness of mortar. Iucolano et al. 
[32] studied the effect of recycled plastic aggregate on the physical and mechanical properties of mortar: the replacement with recycled plastics increased the porosity, reduced the bending and compression strength, and increased the water vapor permeability. Arulrajah et al. [33] evaluated three kinds of recycled waste plastic particles, made of calcium-carbonate-filled linear low-density polyethylene, high-density polyethylene (HDPE) and low-density polyethylene (LDPE), blended with crushed brick (CB) and recycled asphalt pavement (RAP), and assessed the strength, stiffness and elastic modulus of the blends. Coppola et al. [34] used a new polymer aggregate with special properties to replace natural sand in light mortars. Such mortars have several advantages, such as reducing the consumption of natural sand, reducing structural self-weight and improving the bond between aggregate and cement paste. When the amount of sand replacement was increased, the consistency of the mortar decreased and, as expected, the mechanical properties also decreased. Thorneycroft et al. [35] found that replacing 10% of the sand with recycled plastic is feasible and could save 820 million tons of sand every year; with a proper mix design, the structural performance of waste plastic concrete could be maintained.
Previous studies show that research on other types of recycled aggregate concrete is already mature, but PTFE-type plastics were rarely used in concrete production. Because of differences in material selection, the conclusions of individual studies also differ somewhat. In this work, waste PTFE recycled plastic was introduced into concrete. Through mechanical tests, the compressive and splitting tensile properties of recycled PTFE plastic concrete were studied, and the water absorption of this kind of concrete was studied by measuring the weight change in water. It was found that the properties of recycled plastic concrete were improved to some extent; introducing recycled PTFE plastic into concrete therefore has great potential.
Materials
In this test, P.C 32.5R composite Portland cement was used, with specific parameters as shown in Tab. 1. The fine aggregate was river sand, with specific parameters as shown in Tab. 2. The coarse aggregate was crushed stone, with specific parameters as shown in Tab. 3. Sieve curves of the fine and coarse aggregates are shown in Fig. 1. The plastic particles were polytetrafluoroethylene (see Fig. 2), with a particle diameter of 1-2 mm and specific parameters as shown in Tab. 4. The silane coupling agent was KH560, with specific parameters as shown in Tab. 5.
Methods
The mix proportion of the recycled plastic concrete used a water:cement ratio of 0.40 and a sand ratio of 0.3. The plastic particles were added to the concrete as an admixture that did not replace any component of the mix. Referenced against the sand, the admixture amounted to 0, 3%, 5% and 7% of the cement mass, and the silane coupling agent amounted to 0.5% of the mass of the plastic particles. The concrete mixes are shown in Tab. 6. An HJW-30 horizontal concrete mixer was used for mixing: first, stone and cement were mixed for 120 s; then sand was added and mixed for 120 s; finally, the plastic particles and water were added and mixed for 120 s. The mixed concrete was then cast in plastic molds, in groups of three 100 × 100 × 100 mm specimens. Each specimen was placed on a vibration table and vibrated until slurry flowed to the surface, after which the surface was ground flat; the specimens were demolded after standing in air for 24 hours. The specimens were cured in tap water at 20 ± 2°C, and the compressive strength, splitting tensile strength and mass of the concrete were measured at curing ages of 7 and 28 days. The strength testing machine was a 200 t uniaxial pressure testing machine; the load control rate was 4000 N/s for the compressive strength test and 400 N/s for the splitting tensile strength test. An electronic platform scale was used for the mass test.
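The dosing rules above can be captured in a small helper. Note that the cement and aggregate totals in the example call are assumed figures chosen for illustration, since the paper reports ratios rather than absolute batch masses:

```python
def mix_quantities(cement, total_aggregate, plastic_pct):
    """Batch masses (kg) for one mix, following the ratios in the paper:
    w/c = 0.40, sand ratio = 0.30 (sand / total aggregate),
    plastic dosed as a percentage of the cement mass,
    silane coupling agent at 0.5% of the plastic mass."""
    water = 0.40 * cement
    sand = 0.30 * total_aggregate
    gravel = total_aggregate - sand
    plastic = plastic_pct / 100.0 * cement
    silane = 0.005 * plastic
    return {"water": water, "sand": sand, "gravel": gravel,
            "plastic": plastic, "silane": silane}

# Example: 400 kg cement and 1800 kg aggregate (assumed, not from the paper)
q = mix_quantities(400.0, 1800.0, 5.0)
print({k: round(v, 3) for k, v in q.items()})
# {'water': 160.0, 'sand': 540.0, 'gravel': 1260.0, 'plastic': 20.0, 'silane': 0.1}
```

For the 5% mix this gives 20 kg of plastic and 0.1 kg of silane coupling agent per batch, consistent with the dosing rules stated above.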
In the strength tests, the surface of the specimen and the upper and lower platens of the machine were first wiped clean. The specimen was then placed on the lower platen or base plate of the testing machine, with the bearing surface perpendicular to the casting top surface and the center of the specimen aligned with the center of the lower platen. The testing machine was then started; when the upper platen approached the specimen or steel base plate, the ball seat was adjusted to make the contact even. The load was applied continuously and evenly during the test; when the specimen approached failure and began to deform rapidly, the throttle of the testing machine was no longer adjusted until failure occurred, and the failure load was recorded. In the mass tests, the mass of each specimen was measured at the beginning of curing and on the 7th and 28th days of curing.
Compressive Strength
The recycled plastic concrete specimens were divided into four groups according to the amount of admixture. The compressive strength test results are shown in Tab. 7, and Fig. 3 shows the compressive strength of the recycled plastic concrete at 7 and 28 days. As the proportion of plastic particles increased from 0 to 3% and 5%, the compressive strength of the concrete rose, peaking at 5% plastic content, while at 7% the compressive strength dropped sharply. In addition, both the 7-day and the 28-day compressive strengths of the concretes with 3% and 5% admixture were higher than those of the ordinary concrete. For the concrete with 7% plastic particles, the 7-day compressive strength was lower than that of the ordinary concrete, while the 28-day compressive strength was higher. At the same time, the 7-day compressive strength reached 95% of the 28-day value for the ordinary concrete, 96% for the concrete with 3% recycled plastic, 91% for 5%, and 75% for 7%. It can be seen that with the increase of plastic content, the development of early strength slows down. In previous studies, the compressive strength decreased with increasing plastic content [26,28,32]; in this work, however, the plastic did not replace any material in the concrete but was added as pure plastic waste, which is equivalent to adding extra material to the concrete and eventually led to an increase in strength compared with the ordinary concrete. Fig. 4 shows the compressive strength of recycled plastic concrete with or without silane coupling agent.
From the overall trend, the compressive strength of recycled plastic concrete with silane coupling agent was generally higher than that of concrete without silane coupling agent. When the content of recycled plastic was 3%, the compressive strength of concrete with silane coupling agent was 1.09 times of that without silane coupling agent. When the content of recycled plastic was 5%, the compressive strength of concrete with silane coupling agent was 1.15 times of that without silane coupling agent. When the content of recycled plastic was 7%, the compressive strength of concrete with silane coupling agent was 1.05 times of that without silane coupling agent. The compressive strength of concrete mixed with silane coupling agent reached the maximum value when the amount of plastic was 5%, and the compressive strength curve of concrete without silane coupling agent showed a gradual increasing trend. It can be seen that silane coupling agent plays a positive role in the compressive strength of recycled plastic concrete. According to previous studies [36,37], silane coupling agent could improve the interfacial adhesion between materials, so adding silane coupling agent could improve the compressive strength of concrete.
Splitting Tensile Strength
The splitting tensile strength test results are shown in Tab. 8, and Fig. 5 compares the splitting tensile strength of the recycled plastic concrete at curing ages of 7 and 28 days. The overall trend of the strength curves shows that the 28-day splitting tensile strength of the recycled plastic concrete was generally higher than that of the ordinary concrete, while the 7-day splitting tensile strength was lower than that of the ordinary concrete at plastic contents of 5% and 7%. Thus, with increasing plastic content, the early splitting tensile strength of the concrete did not develop fully. At 7 days, the splitting tensile strength peaked at a recycled plastic content of 3% and gradually decreased at 5% and 7%; at 28 days, it peaked at 5% and began to decrease at 7%. At recycled plastic contents of 0, 3%, 5% and 7%, the 7-day splitting tensile strength reached 97%, 80%, 66% and 77% of the 28-day strength, respectively, showing that the early splitting tensile strength of the plastic-modified concrete was generally reduced. In most previous studies, the splitting tensile strength decreased with increasing plastic content [38,39]; in this study, however, the splitting tensile strength was higher than that of the ordinary concrete, for two main reasons: on the one hand, the silane coupling agent increases the cohesion between the plastic and the concrete materials [36]; on the other hand, the tensile properties of the plastic itself are excellent [1]. Fig.
6 shows the comparison of the splitting tensile strength of recycled plastic concrete with and without silane coupling agent. The overall trend of the strength curves shows that the splitting tensile strength of the concrete with silane coupling agent was generally higher than that without, peaking at a plastic content of 5%. At recycled plastic contents of 3%, 5% and 7%, the splitting tensile strength of the concrete with silane coupling agent was 1.61, 1.42 and 1.21 times that of the concrete without, respectively. Silane coupling agent thus plays a positive role in the splitting tensile strength of recycled plastic concrete; the same mechanism by which it improves the compressive strength also improves the splitting tensile strength.
The relationship between the compressive strength and the splitting tensile strength of the recycled plastic concrete was also analyzed. At a curing age of 7 days and plastic contents of 0, 3%, 5% and 7%, the compressive strength of the recycled plastic concrete with silane coupling agent was 9.79, 10.34, 12.12 and 9.78 times the splitting tensile strength, respectively; at 28 days, the corresponding ratios were 9.96, 8.62, 8.75 and 10.10. For the recycled plastic concrete without silane coupling agent, at 28 days and plastic contents of 3%, 5% and 7%, the compressive strength was 12.72, 10.85 and 11.71 times the splitting tensile strength, respectively. Previous studies reported that the compressive strength of concrete is about 10 times the splitting tensile strength [40,41]; the behavior of the recycled plastic concrete found here is basically consistent with those studies.
Mass Change
The mass change results are shown in Tab. 9, and Fig. 7 shows the mass change of the recycled plastic concrete at 7 and 28 days. The overall trend of the histogram shows that the water absorption after demolding was largest at a plastic particle content of 3%; with further increase of the plastic content, the water absorption gradually decreased, and at 7% it was lower than that of the ordinary concrete. Comparing the 7-day and 28-day values, the water absorption at 7 days was generally lower than at 28 days: at plastic contents of 0, 3%, 5% and 7%, the 7-day values were 8 g, 6 g, 7 g and 9 g lower than the 28-day values, respectively. It can be seen that the water absorption of the concrete was essentially saturated at an early stage, with little change later. Figure 7: Comparison of 7-day and 28-day mass change of recycled plastic concrete. Fig. 8 shows the mass change of recycled plastic concrete with and without silane coupling agent. Overall, the water absorption of the concrete with silane coupling agent decreased with increasing plastic content, while that of the concrete without silane coupling agent increased. Thus, with the addition of silane coupling agent, the porosity of the recycled plastic concrete decreases and its water absorption is reduced. According to previous studies [36,37], silane coupling agent can improve the interfacial adhesion between materials; with the silane coupling agent, the concrete becomes denser and its mass changes less.
Conclusions
Through the analysis and discussion of the test data of the recycled plastic concrete, the following conclusions can be drawn: 1. Adding recycled plastics to concrete increases its strength. The compressive strength is best at a recycled plastic content of 5% for both the 7-day and the 28-day curing ages; the splitting tensile strength is best at 3% content at 7 days and at 5% content at 28 days. 2. At recycled plastic contents of 0, 3%, 5% and 7%, the 7-day compressive strength reaches 95%, 96%, 91% and 75% of the 28-day strength, respectively, and the 7-day splitting tensile strength reaches 97%, 80%, 66% and 77% of the 28-day strength. With increasing plastic content, the hydration heat release of the concrete slows down and the early strength develops more slowly. 3. At recycled plastic contents of 3%, 5% and 7%, the compressive strength of the concrete with silane coupling agent is 1.09, 1.15 and 1.05 times that without, and the splitting tensile strength is 1.61, 1.42 and 1.21 times that without, respectively. Silane coupling agent thus plays a positive role in the strength of recycled plastic concrete. 4. With silane coupling agent, at plastic contents of 0, 3%, 5% and 7%, the compressive strength is 9.79, 10.34, 12.12 and 9.78 times the splitting tensile strength at 7 days, and 9.96, 8.62, 8.75 and 10.10 times at 28 days, respectively. Without silane coupling agent, at 28 days and plastic contents of 3%, 5% and 7%, the compressive strength is 12.72, 10.85 and 11.71 times the splitting tensile strength, respectively. 5. The water absorption at 7 days is generally lower than at 28 days; at plastic contents of 0, 3%, 5% and 7%, the 7-day values are 8 g, 6 g, 7 g and 9 g lower than the 28-day values, respectively. The water absorption of the concrete is thus essentially saturated at an early stage, with little change later. With the addition of silane coupling agent, the porosity of the concrete decreases and its water absorption is reduced. | 2020-06-04T09:12:44.588Z | 2020-01-01T00:00:00.000 | {
"year": 2020,
"sha1": "6f3278cf41e177c6666fff74e674c50d05c214ab",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.32604/jrm.2020.09589",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "ced5e604625ac6b577283d09e8e7fa4cc90b6ec8",
"s2fieldsofstudy": [
"Materials Science",
"Engineering"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
220923473 | pes2o/s2orc | v3-fos-license | Comments on “Perspectives, attitude, and practice of lithium prescription among psychiatrists in India”
Indian Journal of Psychiatry Volume 62, Issue 4, July-August 2020
The authors explicitly mention that lithium is the only U.S. Food and Drug Administration (FDA) approved treatment for the maintenance therapy of Bipolar Disorder. [2] This is scientifically misleading, as other psychotropics such as Aripiprazole, Lamotrigine, Olanzapine, the Risperidone long-acting preparation (Risperdal Consta), Quetiapine, and Ziprasidone have also been approved by the FDA specifically for maintenance therapy of Bipolar Disorder. [2,3] The authors stated that "the majority of psychiatrists who completed the survey were of the opinion that lithium dose titrations should be done both in the acute phase and maintenance phase on a dose-dependent basis rather than the blood level-dependent basis." One important piece of known evidence is that plasma lithium in humans does not always reflect intracellular levels. [4] This could be the scientific explanation for why the majority of psychiatrists in the survey opted for dose-dependent rather than blood level-dependent titration of lithium to improve symptoms.
The authors explored barriers to lithium prescription such as adverse effects, monitoring, dose titration, experience, clinical comorbidities, onset of action and adherence; however, they missed some other important barriers, such as the non-availability of the medication and of laboratory services to monitor serum lithium and other biochemical parameters while the person is on lithium.
In the survey, the authors asked about the use of lithium over other molecules in both first-episode and multi-episode mania. The question was ambiguous, as "other molecules" was not clearly defined; it could be interpreted as another mood stabilizer such as valproate or as an antipsychotic. Furthermore, it was not clear whether the question concerned the acute phase or the maintenance phase of mania, or whether the mania was associated with psychotic symptoms. Choosing a mood stabilizer in bipolar disorder depends on multiple parameters, such as the patient's clinical profile and the patient's preference, which were not discussed in the study. Hence, the finding that psychiatrists with more than 5 years of experience preferred lithium over other molecules in both first-episode and multi-episode mania, compared with those with less than 5 years of experience, has to be interpreted with the above limitations in mind.
Financial support and sponsorship
Nil.
This is an open access journal, and articles are distributed under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 License, which allows others to remix, tweak, and build upon the work non-commercially, as long as appropriate credit is given and the new creations are licensed under the identical terms.
National guidelines for media reporting of suicide
Sir, The guest editorial, "Media matters in suicide -Indian guidelines on suicide reporting," [1] made interesting reading. We are happy to note that the Press Council of India (PCI) is planning to initiate guidelines for suicide reporting. This is timely, given the known impact of media reporting on suicides.
In this context, we draw the attention of readers to the position statement and guidelines on the subject issued by the Indian Psychiatric Society (IPS) and published in the Indian Journal of Psychiatry in 2014. [2] This succinct guideline, not cited in the editorial, is the first and only national guideline on media reporting of suicide. Although the emphasis in this guideline is on print media, the recommendations can easily be extrapolated to visual and electronic media. The guideline illustrates the power of the media on suicides through both Werther and Papageno effects. With the burgeoning of technology, the widespread use of gadgets, the younger age at first gadget use, and the privacy with which the gadgets may be used, visual and electronic media may have to be targeted the most in the implementation of the guidelines. These media can particularly be of help for suicide prevention using the principles of the Papageno effect.
The guidelines by the PCI emphasize how suicide should not be reported. The IPS guidelines, in addition, give suggestions for positive reporting, such as utilizing the media to create public awareness of mental illness and to de-stigmatize suicide. The IPS guidelines also make explicit recommendations, such as not to publish suicide notes, which are often breached, perhaps to sensationalize news. The IPS guidelines may be more specific to the Indian context vis-a-vis the broad WHO guidelines based on which PCI formulated its guidelines. The IPS guidelines, therefore, complement and supplement the PCI guidelines. The joint efforts of the PCI and the IPS would, therefore, enhance the quality of media reporting of suicide.
It is also noteworthy that the Department of Psychiatry, Government Medical College, Thrissur, Kerala, had organized a workshop for journalists in collaboration with mental health professionals and had brought out a 15-item guideline for responsible media reporting of suicide in 2001. A study that was later conducted found that there were modest changes in the reporting style of the media with regard to suicides and attempted suicides, and the changes have persisted over the years. [3] Although there are scattered efforts to create media awareness about suicide reporting under the aegis of the IPS, a consolidated nationwide approach is lacking.
An active collaboration between media personnel (PCI) and psychiatrists (IPS) is urgently needed, as suggested in the editorial, to promote the responsible portrayal of suicide in media and to deter suicides, especially in young and vulnerable individuals.
Financial support and sponsorship
Nil.
Prospective, Comparative Evaluation of Forearm and Lower Leg Hair Removal with 808-nm Diode Laser at Different Fluences
Background and Objectives High fluence diode lasers have emerged as the gold standard for removal of unwanted hair. Lowering the energy should result in less pain but could theoretically affect the efficacy of the therapy. The author designed this study to compare the efficacy of a low fluence 808-nm diode laser to that of a high fluence 808-nm diode laser for permanent hair removal in Korean women.
INTRODUCTION
Excess or unwanted hair growth remains a treatment challenge, and considerable resources are spent achieving a hair-free appearance. Traditional treatments include plucking, chemical depilatories, shaving, waxing, and electrolysis, but none is considered ideal: these methods can be inconvenient and most produce only short-term improvement. Laser hair removal has become one of the most common noninvasive cosmetic procedures performed in plastic surgery and dermatology because it is a proven, efficient method for permanent hair removal. Commonly used lasers and light sources include the long-pulsed neodymium (Nd):yttrium-aluminum-garnet (YAG) laser (1,064 nm), diode laser systems (800 nm), the alexandrite laser (755 nm), the ruby laser (694 nm), and intense pulsed light (IPL) sources. In the peer-reviewed literature, diode laser systems have emerged as the most effective hair removal method. 1 Current laser treatments rely on the technique of selective photothermolysis, whose goal is to target a defined structure using a particular wavelength of light delivered within roughly the time the target structure loses 50% of its heat, known as the thermal relaxation time. The absorbed energy selectively heats the target while leaving the surrounding tissue relatively untouched. In laser hair removal, melanin in the hair shaft is the target chromophore, from which heat is transferred to the associated stem cells and follicular bulb. 2 While many wavelengths can target melanin, this study used an 808-nm diode laser, currently considered among the most effective lasers for hair removal, and compared two methods of delivery: low fluence and high fluence. A previous study histologically demonstrated that repetitive low-fluence laser treatment does indeed induce necrosis of the hair follicle. The purpose of this study was a comparative evaluation of forearm and lower leg hair removal using an 808-nm diode laser at different fluences.
Patient selection
This was a prospective, single-center, bilaterally paired, randomized comparison study. Twenty-six female volunteers were recruited from the hospital staff and the outpatient clinics of the Department of Plastic and Reconstructive Surgery of Sooncheonhyang Bucheon Hospital, Korea, from July 2017 through April 2018, and were divided into two groups: a forearm treatment group and a lower leg treatment group. The study enrolled patients who met the eligibility criteria. Inclusion criteria were: aged between 20 and 55 years, willing to abstain from any additional hair removal treatment for the duration of the study, and willing and able to comply with all study requirements. Exclusion criteria were: age under 18 years; active localized or systemic infections; history of acne scarring or herpes simplex virus in the treatment area; photosensitivity or allergy; use of minoxidil, finasteride, isotretinoin, steroids, or photosensitizing medications; skin pigmentation disorders, Hailey-Hailey disease, Darier's disease, ichthyosis, vitiligo, or inverse psoriasis; pregnancy or lactation; tattoos, chronic daylight exposure, or tanning; and laser hair removal treatment of the forearm or lower leg within the previous 12 weeks.
Study procedure
At the first visit, hair counts per cm² were performed on both forearms and lower legs to establish baseline values before treatment. All areas were then treated in four sessions, with a gap of one month between sessions. In each patient, a 2 × 2 cm area was marked at baseline for counting hairs, and the ratio between the number of hairs before and after laser treatment was calculated. An 808-nm diode laser system (HR808 prototype diode laser™; Wontech, Daejeon, Korea) was used in this study. A pre-cooled, water-based lubricating gel was applied before treatment as a heat sink for epidermal protection. The treatment areas were routinely cleansed, and hairs were trimmed to a uniform length of 2-3 mm with safety scissors. Patients were treated with a low-fluence mode (12 J/cm²) on the right forearm or lower leg and a high-fluence mode (14 J/cm²) on the left forearm or lower leg. Laser pulses (30 milliseconds) were delivered in five to ten passes, emitted at a fixed rate of 7 Hz; the mean interval between successive pulses over any given area was 1-2 seconds. The clinical endpoint of treatment was the appearance of slight perifollicular erythema.
After each treatment session, digital photographs (Nikon D750; Nikon Corp., Tokyo, Japan) of the subject's forearms and lower legs were taken in a photographic room prior to the next laser treatment, maintaining identical positioning, camera settings, and ambient lighting. Hair density and thickness were assessed visually and with Folliscope equipment (Fig. 1). Results were evaluated three months after the final session.
The intensity of pain was subjectively reported by each patient after each treatment on a visual analogue scale (VAS) ranging from 0 (no pain) to 10 (unbearable pain). Therapeutic outcome was assessed by patient satisfaction with the treatment results at the seven-month follow-up visit (Visit #6), rated on a 4-point scale: excellent (3), very good (2), good (1), and fair (0). Independent clinical assessments of the treatment areas were conducted by three plastic surgeons, blinded to the study subjects, based on comparative photographs using a 5-point grading scale: 0 = no change; 1 = slight improvement (1-25%); 2 = moderate improvement (26-50%); 3 = significant improvement (51-75%); and 4 = excellent improvement (>75%). The median of the three scores for each patient was taken as the final physicians' assessment score.
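The two rating scales described above can be sketched in code. This is an illustrative Python helper (the function names are hypothetical, not from the study):

```python
import statistics

def physician_score(improvement_pct):
    """Map percentage improvement to the 5-point blinded-physician grading scale."""
    if improvement_pct <= 0:
        return 0      # no change
    if improvement_pct <= 25:
        return 1      # slight improvement (1-25%)
    if improvement_pct <= 50:
        return 2      # moderate improvement (26-50%)
    if improvement_pct <= 75:
        return 3      # significant improvement (51-75%)
    return 4          # excellent improvement (>75%)

def final_physician_score(scores):
    """The median of the three blinded assessors' scores is the final score."""
    return statistics.median(scores)
```

Using the median rather than the mean keeps a single outlying assessor from shifting the final score.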
The study was designed in compliance with the ethical principles of the Declaration of Helsinki. The trial was approved by the institutional ethics committee.
Statistical analysis
SPSS 14.0 (SPSS Inc., Chicago, Illinois, USA) was used for statistical analysis. The paired t-test or Wilcoxon's signed-rank test (95% confidence interval) was used to measure variation between baseline and one month after treatment. McNemar's test (McNemar-Bowker's test for tables larger than 2 × 2) was performed to compare changes in categorical variables before and after treatment. Statistical significance was set at p < 0.05.
RESULTS
The study population comprised 26 female subjects who presented for forearm or lower leg hair removal, all of whom completed the study: 13 in the forearm group (average age 29.7 years, range 24-38) and 13 in the lower leg group (average age 35.2 years, range 25-51). At the end of treatment, patients in both groups were satisfied and noted long-term hair reduction.
In the forearm group, based on hair density at baseline (prior to treatment) and at Visit #6, the mean reduction in hair follicle thickness was 50% on the low-fluence side and 65.6% on the high-fluence side, and the mean reduction in hair follicle density was 14.4% and 19.0%, respectively (Tables 1, 2). The mean VAS pain score with high fluence (2.1) was slightly higher than with low fluence (1.91). The plastic surgeons' assessment score was 3.15 on the high-fluence side and 2.84 on the low-fluence side.
In the lower leg group, the mean reduction in hair follicle thickness was 57.7% on the low-fluence side and 60.7% on the high-fluence side, and the mean reduction in hair follicle density was 45.9% and 63.7%, respectively (Tables 3, 4). The mean VAS pain score with high fluence (2.4) was higher than with low fluence (2). The plastic surgeons' assessment score was 3.54 on the high-fluence side and 3.00 on the low-fluence side. Patient satisfaction survey results are shown in Fig. 2; the percentage of 'Excellent' ratings was higher in the lower leg group, and at both sites the percentage of 'Excellent' ratings was higher on the high-fluence side.
The difference between the high-fluence and low-fluence sides did not reach statistical significance (p > 0.05). The treatments were well tolerated without anesthesia. Most patients reported a slight heating sensation during treatment but described it as bearable. Treatment was followed by perifollicular erythema, the clinical endpoint of treatment, which was reported to disappear within 10 hours. No long-term side effects such as hyperpigmentation, hypopigmentation, or skin atrophy were reported.
DISCUSSION
In the modern world, the pursuit of an ideal appearance means that unwanted body hair is regarded as not merely uncomfortable but completely undesirable. To remove it, patients use a variety of methods, such as depilation, shaving, waxing, and depilatory creams.
Common methods of hair removal have only a temporary effect, forcing patients to repeatedly spend money and time on the problem. Laser hair removal has long held a leading position among modern noninvasive procedures: it is practically painless, long-lasting, and effective. Alavi et al. 3 reported evidence that laser hair removal is safe and produces permanent hair reduction, as well as significant positive effects on the skin, including increased dermal density and decreased transepidermal water loss, which might be exploited for skin rejuvenation.
Laser hair removal works by delivering a beam of light that penetrates about 4 mm into the skin and is absorbed by the pigment melanin. The light energy is converted into heat, heating the follicle and hair cells in order to destroy them completely.
Various clinical studies of laser hair removal have been described in the literature. 4-7 Royo et al. 8 provided a clinical assessment of a 755-nm diode laser for hair removal and concluded that it may be highly efficacious because of the high energy that can be safely applied in difficult cases. Brown, 9 Pai et al., 10 and Koo et al. 11 described a dramatic decrease in therapy-related pain and good efficacy using low fluences at high average power with multiple in-motion passes of 810-nm diode lasers.
The 808-nm diode laser is a fast, safe, and relatively painless platform whose pulse repetition rate can be raised up to 20 Hz and whose fluence can be manipulated over a wide range. 12 Many studies have examined the relationship between fluence, repetition rate, and treatment effectiveness. Li et al. 13 compared the safety and efficacy of different fluences and repetition rates of an 810-nm diode laser for axillary hair removal and concluded that a low-fluence (10 J/cm²), high-repetition-rate (10 Hz) mode removes hair more efficiently, with less treatment discomfort, than a high-fluence (34-38 J/cm²), low-repetition-rate mode. Pai et al. 10 compared two 810-nm diode laser machines with different fluences and repetition rates for facial hair removal using similar methods, and likewise concluded that a low-fluence (10 J/cm²), high-repetition-rate (10 Hz) mode is more efficient and less painful than a high-fluence (25-35 J/cm²), low-repetition-rate (2 Hz) mode. In our results, the overall mean reductions in thickness and density in both groups were slightly higher with high fluence (Figs. 3, 4). Regarding patient satisfaction, the majority of patients were satisfied with treatment: roughly 80% responded 'Excellent' or 'Very good', and the percentage of 'Excellent' responses was higher in the high-fluence group than in the low-fluence group (Fig. 2). In the plastic surgeons' visual assessment, the high-fluence score was higher than the low-fluence score, but the difference was not statistically significant, which we attribute to the small number of enrolled patients.
Compared with other studies, a remarkable finding is that our 'high' fluence parameter is much smaller than those used in other comparison studies. Theoretically, lowering the fluence should reduce procedural pain and potential side effects. The pain generated by the procedure may be influenced by many factors, such as the laser parameters (fluence, wavelength, and pulse duration), skin cooling, treatment site, and the quantity and quality of hair. Rogachefsky et al. 14 reported that pain was directly related to longer pulse duration and higher fluence, and that complications were greatest at the highest pulse duration and fluence. In our study, however, the high-fluence group's pain score was similar to the low-fluence groups of other studies, 10,14 and there were no complications. Furthermore, while the low-fluence treatments in other studies used multiple passes, we used a single-pass technique. We therefore believe our higher-fluence parameter offers an advantage in procedural comfort in terms of both pain and time saving: compared with conventional low-fluence treatment, the single pass shortens treatment time and achieves a slightly better treatment effect with similar comfort. Further studies with larger groups will be needed to verify a significant difference.
CONCLUSION
Compared with the low-fluence mode, the high-fluence mode (14 J/cm²) of the HR808 prototype diode laser can efficiently remove unwanted hair, while also improving comfort and reducing treatment time relative to other traditional low- and high-fluence diode lasers.
CONFLICT OF INTEREST
The authors declare that they have no conflicts of interest.
HEVC Fast Intra-Mode Selection Using World War II Technique
High-Efficiency Video Coding (HEVC) applies 35 intra modes to every block of a frame and selects the mode that gives the best prediction. This brute-force nature makes HEVC complex and unfit for real-time applications. A fast intra-mode estimation algorithm is therefore presented here, based on the classic World War II (WW2) technique known as the 'German Tanks Problem' (GTP). This is not only the first article to use GTP for early estimation of the intra mode; it also expedites the GTP estimation process itself. In addition, the various elements of the intra process are efficiently mapped to the elements of GTP estimation, and the two modeled variations of GTP are minimum-variance estimates. Experimental results indicate that the proposed GTP-based fast estimation reduces the compression time of HEVC by 23.88% to 31.44%.
Introduction
The timely availability of multimedia content to a user is as important as the availability of intelligence in a war. Video makes up 80% of digital marketing material [1], and according to Cisco [2], 82% of total Internet traffic is video. Moreover, a single Internet-connected HD television will generate, on average, as much traffic as an entire household. The problem escalates with Ultra-High-Definition (UHD) devices, which require double the HD video bit-rate, and Cisco estimates that by 2023, 66% of televisions will be UHD. An efficient algorithm is therefore required to reduce the size of multimedia content: compression algorithms reduce the size of videos while maintaining the same subjective quality.
For this work, we will use the High-Efficiency Video Coding (HEVC) [3] compression standard. This standard, also known as H.265, reduces the bit-rate requirement by 50% and delivers the same subjective quality [4]. This bit-rate reduction is achieved by using big block sizes to compress smooth regions of the video, and small block sizes to compress textured regions. Moreover, the content of each block is predicted by using neighboring pixels. These neighboring pixels are projected onto the area of the current block in 35 different ways to achieve best prediction. These 35 projections are known as intra modes in HEVC.
HEVC can efficiently compress UHD content, which would solve the issues above, but it is not applicable to real-time applications: its complexity has increased compared with the previous standard, H.264, because it tries every block size and intra mode to find the best tradeoff between rate and distortion, known as the RD cost. The HEVC test model software, HM [5], first partitions the image into 64 × 64 blocks known as Coding Units (CUs) and selects the best intra mode for each by applying all 35 intra modes. Each 64 × 64 CU is then partitioned into 4 equal-sized CUs, and the same intra-mode procedure is repeated on them. This CU partitioning and intra-mode selection is repeated until the CU size reaches 8, the smallest CU in HEVC. This brute-force characteristic makes HEVC unfit for real-time applications.
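The exhaustive search described above can be sketched as a recursive quadtree walk over CU sizes. This is a minimal illustration of the search structure, not HM code; `rd_cost` is a hypothetical placeholder for the real rate-distortion measurement:

```python
# Every CU from 64x64 down to 8x8 is tested with all 35 intra modes, and the
# split/no-split decision keeps whichever alternative has the lower RD cost.

def rd_cost(x, y, size, mode):
    """Placeholder cost; a real encoder measures rate + lambda * distortion."""
    return ((x * 7 + y * 13 + size * mode) % 97) + 1

def best_cu(x, y, size):
    """Minimum RD cost for the CU at (x, y) of the given size."""
    # Cost of coding this CU whole: try all 35 intra modes.
    no_split = min(rd_cost(x, y, size, m) for m in range(35))
    if size == 8:                       # 8x8 is the smallest CU in HEVC
        return no_split
    half = size // 2                    # otherwise also try the 4-way split
    split = sum(best_cu(x + dx, y + dy, half)
                for dx in (0, half) for dy in (0, half))
    return min(no_split, split)

cost = best_cu(0, 0, 64)
```

Even with a trivial cost function, the recursion makes the combinatorial burden visible: one 64 × 64 CU triggers mode evaluations at every node of a four-level quadtree.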
To reduce the complexity of HEVC, this article addresses fast intra-mode decision-making [6]. There are 35 intra modes, one of which is best, so a mechanism is needed that enables an early intra-mode decision. Of these 35 modes, 33 are angular modes that project neighboring pixels onto the current CU in 33 different directions; the remaining two modes predict the smooth regions of the image. Figure 1 [7] shows the directions of the 33 angular modes. The RMD module of the HM software shortlists up to 8 intra modes for the current block using the Hadamard cost; the number of shortlisted modes depends on the block size, namely {8, 8, 3, 3, 3} modes for block sizes {4 × 4, 8 × 8, 16 × 16, 32 × 32, 64 × 64}, respectively. The RDO module of HEVC then computes the RD cost for the modes shortlisted by RMD and selects one intra mode. The proposed work is inspired by Optimal Stopping Theory (OST) and Classical Secretary Problem (CSP)-based models, and applies the German Tanks Problem (GTP) [8] algorithm to make a fast intra-mode decision. GTP is a statistical model like OST and CSP, but it is based on a true story: it was created during World War II (WW2) to predict the number of tanks the Germans had, and it not only predicted that number with great accuracy but also overturned the claims made by Allied intelligence. This victory over British and American intelligence gives GTP unique importance and significance, and its formulation is very simple. This article therefore applies GTP to predict the intra mode for the current CU, and the proposed GTP-based algorithm outperformed existing algorithms. Its main contributions are that it: (i) is the first algorithm to use GTP for early intra-mode decisions; (ii) has not only a strong foundation but also computational efficiency; and (iii) provides a satisfactory tradeoff between rate and distortion, performing better than many existing algorithms.
This article is structured as follows: related works, motivation, the proposed model, and results are presented in Sections 2-5, respectively. After that, the article is concluded in Section 6.
Related Works
The literature is full of fast algorithms. One reason for the popularity of intra-mode research is that it is a very challenging area: one mode must be selected from 35, so the probability of picking the correct mode at random is only about 2.9% (1/35).
In [9], 30% of the encoding time is saved by a classical secretary problem-based algorithm. This work evaluates a minimum of two modes to compute the stopping point. In [10], the pixel values of the left and above block are used to decide the planar mode. This results in a 14% time reduction in the encoding process. Zhu et al. in [11] saved 16.1% of the encoding time. In this work, the Hadamard transform is used instead of DCT.
The author of [12] saved 28% of the time with a model that performs intra-mode selection iteratively, selecting a few intra modes from the available pool (i.e., 35 modes) in each iteration. In [13], a 60% time-saving was achieved by trying only a subset of modes. Ying in [14] saved 38% of the time with a model that uses the RD cost as a stopping point. Zhang in [15] saved 38% of the time with a three-phase model. In [16], a model was proposed that uses the complexity of the block to form three groups. Yeh in [17] computes an RD cost using co-located information and then uses it to compute the stopping point.
In [18], the RDOQ module was customized. This work used the coefficients of the transform to predict RD cost. Around 63% of the time was saved in this work. Zhang in [19] performed intra prediction using the gradient of the block. In [20], Hu saved 55% of the time by proposing a regression-based algorithm. Tariq in [21] saved 35% of the time by proposing a model based on stopping theory.
In [22], Kuanar saved 45% of the time with a CNN-based model. Huang in [23] performed optimization of intra modes; according to Huang, this model can also be used for early CU and PU decisions, with an average time-saving of around 66%. The author of [24] saved 58% of the time with a random-forest-based model. In [25], 52% of the time was saved by selecting an angular mode for the block using the output of the planar mode applied to the same block.
Tian in [26] saved 20.45% of the time by using the deviation among the pixels of a block to find its intra mode. In [27], Gwon saved 31.54% of the time with a classifier that uses the Hadamard cost as a feature. In [28], an uncertainty-based model was proposed for fast intra-mode decisions; its main contribution is that it incorporates real-world uncertainty into the algorithm, so it dynamically adjusts itself to various situations. In [29], Munagala enhanced the holoentropy of HEVC, improving PSNR compared to the original HEVC; since PSNR improvement is directly related to subjective quality, this algorithm gives the user a more realistic experience. Liu, in [30], proposed a method that accurately predicts features from video sequences, helping to overcome the stability and quantity issues of feature-matching techniques. More features mean more information and hence more accurate prediction, but also more computation, so only features that efficiently represent the original data should be used. Bahce, in [31], proposed a 3D-SPECK method for encoding geometry videos. Jridi in [32,33] proposed a Discrete Cosine Transform (DCT) architecture to reduce the complexity of HEVC, approximating the DCT better than existing architectures. Tariq, in [34][35][36][37], used statistical models to make early intra-mode decisions.
The proposed work, in comparison to the state-of-the-art works presented above, computes the probability of each intra mode. These probabilities are then passed to the GTP algorithm to estimate the stopping point. Moreover, GTP is modified to perform early estimation by using only k options such that k ≤ N, where N is the total number of options.
Motivation
First, we present the 'German Tanks Problem' (GTP) algorithm here; otherwise, it will be difficult to know the importance of GTP and why it is being selected to perform early intra-mode decision. Statistically, GTP has great importance because it is based on a true story. In World War II (WW2), the Allies wanted to know how many tanks the Germans had. Such information was of great importance because it could affect the outcome of the war. Therefore, the British and the Americans asked statistical intelligence to estimate the number of tanks the German had. The statisticians had one key piece of information, which was the serial numbers on the captured tanks. This was enough for statisticians to make an estimate of the total number of tanks that had been produced up to any given moment.
A point estimate in sample statistics is used to estimate a population. In this case, the statisticians were trying to estimate the maximum tank serial number based on the sample of tanks. Suppose we have a sample of serial numbers of tanks S as follows: (1) One can compute many types of sample statistics for this S. For example, one can find the mean, mean + 1SD (SD: Standard Deviation), mean + 2SD, max + min, etc. To find out which of these statistics is best, we must perform some simulations and the best of them will be centered at 500. Now we look at some statistics and plot them on the graph to see how they perform. Figure 2 presents graphs of well-known statistics. The blue line in these graphs represents the population obtained from the statistics and the gray bars are the samples (S) that we have. The first statistic is the mean, as shown in Figure 2a. Figure 2a shows that this statistic estimates half the population. In Figure 2b, the maximum is used and it predicts close to maximum only. Similarly, Figure 2c,d show the graphs of '2 times the mean + 1 standard deviation (SD)' and '1 mean + 2 SD', respectively. The correct number of tanks the German had was 500, and now we see how statisticians estimated this number.
The statisticians used (2) in WW2 to estimate the population, where m is the maximum serial number and k is the total number of tanks captured. This estimate takes the population maximum and multiplies it by a number that is slightly greater than 1, i.e., (Sample Size +1)/Sample Size. The rest of the story of WW2 regarding GTP is that the statisticians at that time reported that the Germans had 529 tanks, but the standard intelligence estimate was 1400 tanks. After the war, the Allies captured the German production records, which showed that the true number of tanks produced was 545, almost exactly what the statisticians had calculated. The most famous form of (2) is obtained by simplifying it as follows: This is also known as minimum-variance unbiased estimator. The most important feature of this estimator is that it predicts a single value for a set of values, N. Now, we solve the same problem by applying the optimal stopping theory concept. The main task is to estimate the value as soon as possible, where the serial numbers are shown one at a time. Moreover, these serial numbers are sorted in ascending order. We slightly modified (3) into (4) and its output is given in Table 1. In (4), k is the number of elements seen so far, m is the highest value (serial number) among the seen elements, p i is the sum of all the serial numbers of the k elements seen so far. The working of (4) is explained with the help of an example; suppose a person has 3 cars in total and they have serial numbers on them. Equation (4) is applied to this situation and the result is presented in Table 1. Let us compute the first row of Table 1. The first row of Table 1 shows that the decision maker has seen one car, so k = 1, m = 1 because the serial number on this car is the highest serial number observed so far, and p i = 1, as the sum of serial number of seen element(s) is one. Therefore, P k = 2 ( 1 + 1/1). 
Then we see another car (the 2nd row of Table 1) with serial number 2; hence k = 2, m = 2, p_i = 3, and P_k = 3.5 (= 2 + 3/2). It is interesting to see that the number 3 is estimated after seeing only two elements.
Let us see another example of the proposed formulation by applying it to the data given in (1), and its results are shown in Table 2. In Table 2, k is the number of elements, S is the serial number of the tank that is currently captured, Sum(S) is the summation of serial numbers up to k, and P k is the output of the proposed fast estimation. It is interesting to note that the value 461 is achieved by the 5th element. It is not accurate, but it is the best possible estimate made by looking at the minimum number of elements.
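The sequential estimator (4) can be sketched as follows, using the three-car example of Table 1 as input:

```python
def fast_estimates(serials):
    """Sequential early estimate P_k = m + p_i / k after each new element,
    where m is the running maximum and p_i the running sum of serials seen."""
    out, total, m = [], 0, 0
    for k, s in enumerate(serials, start=1):
        total += s
        m = max(m, s)
        out.append(m + total / k)
    return out

# Reproduces the worked rows of Table 1 for the three-car example.
estimates = fast_estimates([1, 2, 3])
```

The first two values, 2.0 and 3.5, match the rows computed in the text, showing how the estimate approaches the true maximum after only a couple of observations.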
Proposed Algorithm
In this section, we propose a fast intra-mode estimation technique using GTP. The previous section showed that an early estimate is possible; the GTP concept is now extended to estimate the intra mode early for the current CU.
Hadamard Cost vs. Probability
The RMD module of HEVC shortlists N modes for the current CU using the Hadamard cost, and the RDO module then computes the RD (Rate-Distortion) cost of each of these N modes to find the optimal intra mode. The RDO module must evaluate all N modes because the RD cost of the first intra mode can be large while that of the Nth can be small. We therefore use probabilities for the intra modes shortlisted by the RMD module. These probabilities are pre-computed and saved in a 35 × 35 2D matrix, shown in Figure 3, since there are 35 modes. To obtain this matrix, we initialize it with zeros and then encode a video; whenever a mode j is selected for the current block and the neighboring mode is m, we increment the value at row j, column m. This gives a count matrix, which is converted to probabilities by dividing each row by its sum. Then, if intra mode j is shortlisted by the RMD module for the current block, its probability can be read from this 2D matrix (Figure 3). Small spikes in Figure 3 indicate modes with a low probability of being selected; large spikes indicate modes with a high probability of selection. In this work, the matrix was obtained from the BasketballDrill video sequence because it contains fast and medium motion. To illustrate the estimator's behavior, three sets of shortlisted-mode probabilities ('high', 'medium' and 'low') are considered; Equation (4) is applied to these sets, and its visual output is shown in Figure 4. The values in Figure 4 decrease because the 'high', 'medium' and 'low' sets are arranged in descending order. In the 'high' case, Figure 4 gives high values for the early elements and very small values for the last elements, indicating that the last elements have very little chance of being the optimal element.
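The construction of the 35 × 35 probability matrix can be sketched as follows. The (current mode, neighbor mode) pairs below are illustrative, not taken from a real encode:

```python
N_MODES = 35  # HEVC has 35 intra modes

def build_probability_matrix(pairs):
    """Count how often neighbor mode m co-occurs with chosen mode j,
    then row-normalize the counts into probabilities."""
    counts = [[0] * N_MODES for _ in range(N_MODES)]
    for j, m in pairs:                 # j: current-block mode, m: neighbor mode
        counts[j][m] += 1
    probs = []
    for row in counts:
        s = sum(row)
        probs.append([c / s if s else 0.0 for c in row])
    return probs

# Hypothetical observations: mode 10 chosen three times, mode 26 once.
P = build_probability_matrix([(10, 10), (10, 11), (10, 10), (26, 0)])
```

Each populated row sums to 1, so `P[j][m]` can be read directly as the probability used by the estimator when mode j is shortlisted.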
In the 'low' case, P_k assigns values to the last elements that are larger than in the 'high' case. This is a very interesting feature of the proposed model: it assigns large values to the last elements of the 'low' case even though the values in the 'low' set are smaller than those in the 'high' set. The 'low' values do not descend as quickly as in the 'high' case because the probabilities are low, indicating that there is no clear information about the best element. Similarly, the 'medium' case in Figure 4 represents the intermediate situation, with P_k values lower than in the 'high' case but greater than in the 'low' case.
Trend of Proposed Early Estimator
The trend of P_k for 'high', 'medium' and 'low' values is shown in Figure 5. Figure 5 shows that these values follow a natural-log (ln) trend. The correlation coefficient (R) is also shown in Figure 5; it was obtained using Microsoft Excel. Moreover, according to Wikipedia, GTP can also be written using a Bayesian formulation. Therefore, we can say that the proposed early estimator (4) is aligned with GTP, because it follows a natural-log (ln) trend.
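The log trendline and correlation coefficient R, obtained in the paper with Microsoft Excel, can be reproduced with an ordinary least-squares fit. The P_k values below are hypothetical placeholders, since the actual Figure 5 data are not in the text.

```python
import math

def fit_log_trend(y):
    """Least-squares fit of y ≈ a*ln(k) + b over k = 1..len(y), plus the
    correlation coefficient R between ln(k) and y (mirroring the Excel
    trendline and R value reported in the paper)."""
    n = len(y)
    x = [math.log(k) for k in range(1, n + 1)]
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx           # slope of the ln trend
    b = my - a * mx         # intercept
    r = sxy / math.sqrt(sxx * syy)  # Pearson correlation
    return a, b, r

# Hypothetical, decreasing P_k values for a 'high'-style case.
a, b, r = fit_log_trend([0.9, 0.55, 0.38, 0.27, 0.2])
print(a < 0, abs(r) > 0.99)
```

A strongly negative slope with |R| close to 1 is what Figure 5 illustrates: the estimator's values decay along a natural-log curve.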
Early Mode Decision Using Early Estimator
In HEVC, the first element given by the RMD module is selected 60% of the time, but there is no clue as to which cases those are. That is why the probabilities of these modes are used to extract extra information that can help make an early decision. We therefore need a stopping mechanism that performs early termination when these probabilities are high and delays early termination when they are low. Figure 5 clearly illustrates that the values are smooth and can be modeled using the natural log. Moreover, the drop in these values also depends on the probabilities of the elements (see Figure 4), which makes the early estimation both dynamic and flexible. Therefore, an efficient early intra-mode decision can be performed using the model given in (6). This model evaluates elements from a sequential list one at a time and checks the early-decision condition of (6). At any element k, if P_k becomes greater than the value for the remaining elements, i.e., P_{N−k}, early termination is performed. The outcome depends on the position of k in different cases: for the 'high' case, k will be among the early elements of the list, and for the 'low' case, k will be among the later elements.
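The stopping mechanism can be sketched as below. The paper's exact condition (6) is not reproduced in this excerpt, so `demo_estimator` is a hypothetical stand-in: it simply compares the probability mass of the first k elements against that of the remaining N−k elements, in the spirit of the P_k-versus-P_{N−k} comparison described in the text.

```python
def early_decision(p, estimate):
    """Walk the RMD-ordered candidate list one element at a time and
    stop at the first k where the estimate for the first k elements
    exceeds the estimate for the remaining N-k elements."""
    n = len(p)
    for k in range(1, n):
        if estimate(p[:k]) > estimate(p[k:]):
            return k  # terminate: evaluate only the first k modes in RDO
    return n  # no early termination; fall back to evaluating all modes

# Hypothetical estimator: total probability mass of a sub-list.
def demo_estimator(sub):
    return sum(sub)

high = [0.6, 0.2, 0.1, 0.06, 0.04]   # confident case -> stop early
low = [0.22, 0.21, 0.2, 0.19, 0.18]  # flat case -> stop later
print(early_decision(high, demo_estimator), early_decision(low, demo_estimator))
```

This reproduces the qualitative behavior of Figure 6: a peaked ('high') distribution terminates at the first element, while a flat ('low') distribution delays termination to a later element.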
The visual outputs of the proposed model (6) for the three cases ('high', 'medium' and 'low') are shown in Figure 6. The movement of k is clearly visible across cases: for the 'high' case, k falls before the 2nd element, and for the 'low' case, after the 3rd element, making the model dynamic and flexible. These examples show that the proposed algorithm handles various situations efficiently and dynamically, adjusting itself according to the elements. Moreover, the formulation of this early-termination model is very simple and therefore requires little computation.
The flowchart of the proposed algorithm is shown in Figure 7. The changes made to the RDO module are shown in the RDO box. For each CU, the RMD module evaluates 35 intra modes and shortlists up to N modes. Then, the proposed algorithm evaluates these intra modes given by the RMD module one by one until it finds the termination point, i.e., the point found using (6).
Experimental Results
Experimentation and comparison are covered in this section. A slight change to the proposed model, made to further improve its time-saving, is also evaluated and presented in a separate table.
The proposed model is implemented in a recent HM version of HEVC, i.e., 16.9; the HM software can be downloaded from [5]. The HEVC dataset [38] contains several classes comprising various videos. All these videos are coded using the "All Intra Main" configuration, which is the usual choice for intra-mode decision algorithms; in this configuration, all CUs are intra-coded. Moreover, videos are encoded using 4 QPs (22, 27, 32, 37) for comparison with existing algorithms. The performance of the proposed algorithm is evaluated using BD-PSNR, time-saving, and BD-BR, as recommended in [39]. Time-saving is computed as the percentage reduction in encoding time relative to unmodified HM. Experiments are conducted on a machine with 8 GB RAM, a 2.30 GHz Intel i5 processor, and a 64-bit operating system. First, HM16.9 is executed to encode each test sequence five times and the mean encoding time is noted. Then, the proposed model is executed five times on the same sequences, and the mean encoding time is noted for comparison.
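The time-saving measurement can be sketched as follows. The paper's exact equation is not reproduced in this excerpt, so the conventional definition used in fast-encoding papers (percentage reduction relative to the HM anchor, averaged over the five runs per sequence mentioned above) is assumed here.

```python
def time_saving(t_anchor_runs, t_proposed_runs):
    """Percentage encoding-time reduction of the proposed encoder
    relative to unmodified HM. Each encoder is timed over repeated runs
    (the paper uses five per sequence) and the means are compared.
    NOTE: this is the conventional formula, assumed because the paper's
    own equation is not present in this excerpt."""
    t_anchor = sum(t_anchor_runs) / len(t_anchor_runs)
    t_prop = sum(t_proposed_runs) / len(t_proposed_runs)
    return 100.0 * (t_anchor - t_prop) / t_anchor

# Hypothetical per-run encoding times in seconds.
print(round(time_saving([100, 102, 98, 101, 99], [75, 76, 74, 75, 75]), 2))
```

Averaging over several runs, as the paper does, reduces the influence of operating-system scheduling noise on the reported ∆T.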
Encoding Results of Proposed Model
The compression results of the proposed model are presented in Table 3, where ∆P, ∆R and ∆T stand for BD-PSNR, BD-BR, and time-saving, respectively. Table 3 shows that the proposed model saves 24% of the encoding time at a cost of 1.79% in bit-rate (BD-BR). The ∆T column in Table 3 shows some uniformity among the test video sequences: the maximum time-saving of the proposed model is 26.25% and the minimum is 21.45%. Where BD-BR is concerned, the maximum cost is 2.88% for the 'Kimono' test video and the minimum is 0.99% for the 'SlideShow' test video sequence. To further improve the time-saving of the model given in (6), its m term is removed to reduce the complexity of the model, resulting in the model given in (8).
The encoding results of (8) are given in Table 4, which shows that time-saving increases to 26.88% while BD-BR increases to 1.97%. This is the maximum time-saving achieved with this approach across the two GTP variants we tried.
The comparison of the two variants of the proposed model is given in Figure 8. Figure 8 shows that if time-saving is the priority, the model given in (8) should be preferred; otherwise, model (6) can be used to reduce the bit-rate overhead.
The subjective quality of the proposed algorithms is shown in Figure 9. By subjective quality, we mean the reconstruction quality of the video after compression. The reconstructions of the proposed models with and without the m_k term are shown in Figure 9a,b, respectively. Figure 9 shows that there is little subjective difference between the two variations.
Even zooming in on the text present in the image reveals little difference. Therefore, we can say that the overall quality of both proposed models is satisfactory.
Proposed Model vs Existing Algorithms
To make a fair comparison with existing algorithms, the proposed model is placed in Table 5 along with three recent state-of-the-art fast intra-mode decision algorithms. The term Proposed in Table 5 represents the proposed algorithm, and "-" in Table 5 means that the authors of that algorithm did not report a result for that test video sequence. Note that some authors have reported very large time-savings, but those do not qualify as fast intra-mode decision because they combine fast CU, fast PU, and fast intra-mode decisions in a single algorithm. Moreover, the maximum time-saving achievable using intra-mode decision alone is 40% [40]. A fast intra-mode decision cannot be compared with a fast CU-size decision because the problems are very different: the prior probability of selecting the right intra mode is only about 0.029 (1/35), which makes the intra-mode decision particularly difficult. Table 5 summarizes three pure intra-mode decision algorithms [41-43], with their years of publication in parentheses. The algorithm presented in [41] saves 15% of the time, [42] saves 22%, and [43] saves 26.75%. The proposed algorithm saves 26.88% of the time, which is very close to [43], but the methodology of the proposed work is unique, as this is the first article to employ GTP for fast intra-mode estimation. Moreover, the proposed algorithm is implemented in the latest HM version, i.e., 16.9, and each new version includes some optimizations. Therefore, we conclude that the proposed algorithm gives satisfactory results.
Analysis of Proposed Model
The proposed algorithm makes early terminations at different elements (intra modes) for different CUs: sometimes it selects an early element and sometimes a later one. The model makes the early-termination decision using (8), and if the condition is not met, normal HEVC processing continues. Therefore, the time consumed in decision-making is the time consumed by (8), which is 0.0010 seconds on average for a full search.
Finally, we conducted the experiment presented in Table 6, which shows that around 52.11% of the modes are skipped by the proposed algorithm. This percentage is obtained with the help of two variables: one (e.g., T) records the number of modes shortlisted for the current block, and the other (e.g., S) records how many modes remained when early termination took place. The percentage is then the ratio S/T expressed as a percentage. Table 6 presents this percentage separately for each QP.
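The two-counter bookkeeping described above can be sketched as follows; the per-block records are hypothetical.

```python
from collections import defaultdict

def skip_percentages(records):
    """records: iterable of (qp, T, S) tuples per block, where T is the
    number of modes shortlisted by RMD for that block and S is the
    number of modes remaining when early termination occurred.
    Returns the skipped-mode percentage (100 * S / T, aggregated)
    separately for each QP, as in Table 6."""
    totals = defaultdict(lambda: [0, 0])  # qp -> [sum of S, sum of T]
    for qp, t, s in records:
        totals[qp][0] += s
        totals[qp][1] += t
    return {qp: 100.0 * s_sum / t_sum for qp, (s_sum, t_sum) in totals.items()}

# Hypothetical per-block measurements: (QP, shortlisted T, remaining S).
records = [(22, 8, 4), (22, 8, 5), (37, 3, 1), (37, 3, 2)]
print(skip_percentages(records))
```

Aggregating S and T before dividing weights every block by its shortlist size, so blocks with longer candidate lists contribute proportionally more to the per-QP percentage.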
Conclusions
The intra mode for the current block is estimated using a classic technique from World War II (WW2). The German Tank Problem (GTP) technique was used in WW2 to estimate the number of tanks the Germans had, and it is applied here to HEVC for intra-mode estimation because of its efficiency and accuracy. The proposed GTP-based fast intra-mode decision algorithm reduces the encoding time of HEVC by 23.88% (FourPeople) to 31.44% (BQSquare), with an average of 26.88%. Moreover, GTP is not only applied to a new area, i.e., estimation of the intra mode, but is also converted into a fast estimator. The proposed estimation achieves the same goal with a unique methodology. It is worth mentioning that the proposed GTP-based algorithm outperforms the latest techniques [41-43].
"year": 2021,
"sha1": "fd0a95d45fb4ce7366da8d7c691fa9cd347d212f",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2079-9292/10/9/985/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "1b4fd0ff8e9fc38116218d6426408829437b2122",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
The Asexual Yeast Candida glabrata Maintains Distinct a and α Haploid Mating Types
ABSTRACT The genome of the type strain of Candida glabrata (CBS138, ATCC 2001) contains homologs of most of the genes involved in mating in Saccharomyces cerevisiae, starting with the mating pheromone and receptor genes. Only haploid cells are ever isolated, but C. glabrata strains of both mating types are commonly found, the type strain being MATα and most other strains, such as BG2, being MATa. No sexual cycle has been documented for this species. In order to understand which steps of the mating pathway are defective, we have analyzed the expression of homologs of some of the key genes involved as well as the production of mating pheromones and the organism's sensitivity to artificial pheromones. We show that cells of opposite mating types express both pheromone receptor genes and are insensitive to pheromones. Nonetheless, cells maintain specificity through regulation of the α1 and α2 genes and, more surprisingly, through differential splicing of the a1 transcript.
and pRA (for the construction of HMR and sst2 deletants of ura3 strains), in which appropriate PCR fragments upstream and downstream of the targeted gene were cloned into the BamHI/KpnI sites. Constructions were verified by Southern blot analysis (data not shown).
Mating assays. Cells from strains of opposite mating types were grown as patches on complete medium plates, collected, and mixed in a patch on plates with various media. After 4 days at 30°C, cells were collected, streaked on WO medium plates, and incubated at 30°C. Plates were examined regularly for 1 week, and potential diploid cells were streaked a second time on WO medium plates.
RNA extraction. RNA from C. glabrata and S. cerevisiae was prepared as described previously (3), by using hot phenol extraction after glass bead cell lysis of mid-log-phase cultures.
RT-PCR. Four micrograms of total RNA was used per reaction. To eliminate any DNA remaining in the preparation, RNAs were first treated with DNase I (RQ1 RNase-free DNase; Promega) and extracted with phenol-chloroform before being subjected to reverse transcription. Reverse transcriptase (RT) Superscript II (catalog no. 18064-014; Invitrogen) was used according to the manufacturer's recommendations. RNasin (Promega) was added to all reaction mixtures to avoid RNA degradation.
qRT-PCR. Quantitative RT-PCR (qRT-PCR) experiments were performed using an Abgene ABsolute MAX 2-Step qRT-PCR SYBR Green kit. The first step of DNase I treatment and phenol-chloroform extraction was performed as described for the RT-PCR experiments.
DNase-free total RNA (0.8 µg) was used for reverse transcription to obtain cDNA. qPCR was then performed in triplicate on 10-fold dilutions of the cDNA solution. Standard curves were obtained by PCR with serial dilutions of DNA of known concentrations. In each well, 12.5 µl of SYBR Green mix was added to 5 µl (~10 ng) of cDNA or DNA and 8 pmol of the two specific primers (Table 2) in a final volume of 25 µl. Specific primers for each gene were designed using the Beacon Designer software, v. 4.
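The standard-curve quantification described above can be sketched as an ordinary least-squares fit of Ct against log10 of the input quantity, followed by interpolation of unknowns. The dilution series and Ct values below are hypothetical examples, not data from the paper.

```python
import math

def fit_standard_curve(quantities_ng, ct_values):
    """Least-squares fit of Ct = slope * log10(quantity) + intercept
    from serial dilutions of DNA of known concentration."""
    x = [math.log10(q) for q in quantities_ng]
    n = len(x)
    mx, my = sum(x) / n, sum(ct_values) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, ct_values))
    slope = sxy / sxx
    return slope, my - slope * mx

def quantify(ct, slope, intercept):
    """Interpolate an unknown sample's starting quantity from its Ct."""
    return 10 ** ((ct - intercept) / slope)

# Hypothetical dilution series at ~100% PCR efficiency (about -3.32
# cycles per 10-fold dilution).
slope, intercept = fit_standard_curve([10.0, 1.0, 0.1], [15.0, 18.32, 21.64])
print(round(quantify(20.0, slope, intercept), 3))
```

Quantities read off the curve for each gene can then be normalized to the actin signal, as done in the Results section, to express transcripts per cell.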
The PCR program was 14 min at 95°C for initial denaturation and enzyme activation followed by 40 cycles of denaturation (30 s at 95°C) and of hybridization/elongation (30 s at 55°C), and a final step of 1 min at 95°C. The melting curve started at 55°C with 0.5°C increments every 10 s for 80 cycles. qPCRs were run on an iQ5 real-time PCR detection system (Bio-Rad) and analyzed with the iCycler software.
Sensitivity to pheromones and pheromone expression. Synthetic pheromones (Eurogentec SA, Seraing, Belgium, and NeoMPS SA, Strasbourg, France) of C. glabrata were synthesized as explained below and in the legend to Fig. 4. For sensitivity assays, 50 µl of a 200-µg/ml solution of C. glabrata a-factor or α-factor or of S. cerevisiae α-factor (Sigma-Aldrich Inc., St. Louis, MO) was spotted on plates with SC medium without tryptophan (SC-W) or SC medium without adenine (SC-ade) and, when dry, covered with a cell lawn containing 5 · 10^4 cells.
For pheromone expression experiments, cells from C. glabrata or S. cerevisiae were grown in YPD overnight at 30°C until the end of log phase. Cultures were centrifuged, and supernatants were filtered on 0.22-µm nitrocellulose. Fifty microliters of filtered medium was then spotted directly on SC-W or SC-ade plates. After the spot was allowed to dry, 5 · 10^4 cells of the tester strain were spread on the plate.
Pictures were taken after 2 days of growth at 30°C.
RESULTS
There is no report of the mating of C. glabrata in the literature. We have, ourselves, tried mating MATa strains descending from BG2 with MATα strains descending from CBS138 with different combinations of auxotrophic markers in order to select prototrophic diploid cells, as shown in Table 3. Mating experiments were performed as indicated in Materials and Methods on different media: SPO, V8 at two different pHs, Gorodkowa, McClary's acetate, SLAD, malt, Bacto potato dextrose agar, and WO. Subsequent streaking on minimal media never allowed the isolation of prototrophic cells. Since no difference was observed between results with different media and since C. glabrata can in fact be considered part of the Saccharomyces species complex (6), we decided to examine the functionality of genes involved in mating under the standard conditions used for S. cerevisiae.
Expression of mating type-related genes. In order to address the question of mating type expression by haploid C. glabrata cells, we examined the expression of CAGL0E00341g, CAGL0B01243g, and CAGL0B01265g, homologs of the key regulator genes a1, α1, and α2, respectively, at the MAT locus. We have not examined the a2 gene, given the absence of any known role for a2 in either haploid or diploid cells in S. cerevisiae and the fact that the gene has no start codon in C. glabrata. It has been speculated that a2 could be a pseudogene of an ancestral gene common to the S. cerevisiae-C. glabrata branch (10). We also examined the homologs of the mating pheromone receptor-encoding genes that are involved in the initial steps of cellular fusion during mating: CAGL0K12430g, a homolog of STE2 (YFL026W) encoding the α-factor receptor, and CAGL0M08184g, a homolog of STE3 (YKL178C) encoding the a-factor receptor (7). Finally, we examined the expression of the homolog of HO (CAGL0G05423g). In S. cerevisiae, HO is necessary for the completion of the sexual cycle in clonal populations by inducing the formation of cells of opposite mating types within a clone. Since C. glabrata infections are usually monoclonal (1), sexual reproduction may depend on the ability of some cells to switch mating types and fuse with related cells.
We performed RT-PCR on the a1 and α1 genes in three strains of C. glabrata of the α mating type, CBS138 (sequenced) and M4 and M5 (two isolates from patients), and in three strains of the a mating type, BG2 (a commonly used strain in laboratories) and M2 and M3 (two isolates from patients). For the a1 genes, which contain two introns in S. cerevisiae and C. glabrata (20, 26), primers amplified a fragment containing the first intron in S. cerevisiae and both introns in C. glabrata. In the three independent C. glabrata strains of each mating type examined by RT-PCR (Fig. 1A), the MATα gene α1 is expressed in all MATα strains and not in MATa strains. In contrast, the a1 gene is expressed in both MATa and MATα strains. In the latter, this transcript must arise from the expression of the HMRa locus, since the MAT and HML loci contain α-type information. We conclude that HMRa on chromosome V is not silenced in C. glabrata. Thus, the type strain expresses genes from both mating types. Reasoning that this might contribute to its apparent infertility, we constructed a CBS138 MATα strain with the HMRa locus deleted, strain HM102a. Further analyses include this strain. Control experiments with S. cerevisiae show that, as expected, there is clear-cut mating type-specific expression of the a1 and α1 genes (Fig. 1B) (13). The size of the major PCR band from the a1 cDNA reveals the presence or absence of splicing of its intron (5). In S. cerevisiae, the major band is smaller than the PCR band from genomic DNA, demonstrating splicing of the intron. In C. glabrata, despite the presence of multiple bands, the major PCR bands from a1 cDNAs have the same size as the PCR band from genomic DNA, suggesting a splicing defect. More explicit experiments on a1 splicing are described below.
In the qRT-PCR experiments (Fig. 2), we examined the expression of a1, α1, and α2 and of STE2, STE3, and HO in S. cerevisiae and of their homologs in C. glabrata. As a standard, we used the actin transcript, estimated to occur at around 10 copies per cell in S. cerevisiae (21). Standard S. cerevisiae strains are ho mutants, so we included as a control the J5 strain, which contains a wild-type HO gene and α-type information at MAT, HML, and HMR.
In C. glabrata, a1 is expressed at similar levels in both the MATa strain and the MATα strain, confirming the results of the RT-PCR experiments described above. As expected, deletion of the HMR cassette in strain HM102a results in the absence of the a1 transcript (Fig. 2A). Neither the STE2 nor the STE3 homolog displays mating type specificity, as they are expressed at similar levels in MATa and MATα strains. In contrast, the expression of α1 and α2 is mating type specific in C. glabrata; they are expressed, respectively, 500-fold and 1,200-fold more in MATα cells than in MATa cells. Deletion of the HMR cassette does not interfere with the expression of these two genes. Control experiments show that a1, α1, α2, STE2, and STE3 exhibit mating type-specific expression in S. cerevisiae (Fig. 2B). The HO gene is transcribed at similar levels in all strains of C. glabrata examined and at levels 20-fold lower than in haploid S. cerevisiae cells, where around one transcript per 10 cells is observed.
Splicing of a1 introns. Previous experiments led us to hypothesize that a1 may be nonfunctional in C. glabrata; we observed concomitant expression of a1, α1, and α2 in MATα cells, a situation that could not occur in S. cerevisiae because the a1-α2 heterodimer represses the expression of α1, and the deletion of HMRa in MATα cells has no effect on the expression of α1, α2, STE2, and STE3. The fact that we observed unspliced forms of a1 prompted us to examine this in more detail.

FIG. 1. Mating type is shown at the top, and the gene studied is shown above the gels; for C. glabrata, CgACT1 is CAGL0K12694g, Cga1 is CAGL0E00341g, and Cgα1 is CAGL0B01243g. The presence or absence of DNase and RT is indicated by + or −, respectively. Strain names are at the left of the gels. Molecular size markers are shown on the right.

852 MULLER ET AL. EUKARYOT. CELL

Indeed, in S. cerevisiae, when the a1 transcript is not spliced, the a1 protein is not produced (22). The a1 gene from C. glabrata contains two introns, as in S. cerevisiae, and four primer pairs were designed for each species in order to analyze the splicing of both introns (Fig. 3A). As shown in Fig. 3B, in S. cerevisiae MATa cells (FY23), the size of the only cDNA band amplified around intron 1 corresponds to the spliced transcript; i.e., splicing is total, whereas for intron 2, some unspliced forms remain, although most transcripts are spliced. In C. glabrata cells, for both introns, the major cDNA band has the same size as the genomic DNA band. Minor cDNA bands corresponding to the predicted spliced forms of 84 and 70 bp are observed only in MATa cells (BG2, M2, M3). In C. glabrata MATα cells (CBS138, M4, M5), no spliced form of either intron can be detected. Amplification of the fragment of the transcript that contains both introns with the external primers (Fig. 3C) shows that, in BG2 MATa cells, the doubly spliced form of the a1 transcript exists, although unspliced transcripts are more abundant and transcripts with only one intron spliced also exist. In the latter case, we cannot distinguish between transcripts spliced for intron 1 or for intron 2, because there is only a 2-nucleotide difference in size between the two introns and therefore between the two transcripts. The band of larger size than the unspliced a1 transcript is assumed to be a PCR artifact, as its presence is variable and larger bands are also sometimes detected in the S. cerevisiae experiments.
In conclusion, in C. glabrata, the a1 transcript is partially spliced in the MATa strains examined, while it is not spliced at detectable levels in the MATα strains examined. Translation of the unspliced transcripts cannot give rise to a functional protein because there are in-frame stop codons in the first intron.
Response to pheromones. Since C. glabrata MATa and MATα strains express both a- and α-factor receptor genes (Fig. 2), we wondered whether cells were sensitive to mating pheromones. The C. glabrata genome contains two genes predicted to encode pheromone precursors. The predicted translation product of CAGL0C01919g contains a 12-amino-acid-long peptide similar to the a-factor of S. cerevisiae (Fig. 4A) (7, 19). The predicted translation product of CAGL0H03135g contains three 13-amino-acid-long peptides similar to the α-factor of S. cerevisiae (Fig. 4B) (7, 24). Two peptides are identical, but one differs by 2 amino acids. We used the two different sequences as putative α-factors for C. glabrata, form A and form B. For the synthesis of the artificial pheromones, we assumed that the posttranslational modifications that occur in S. cerevisiae also occur in C. glabrata, since the genes involved in these processes are conserved (7). These include precursor proteolysis of both pheromones and farnesylation and methylation at the C terminus of the a-factor (4).
We tested C. glabrata mating pheromones on S. cerevisiae cells, and we included sst1 and sst2 deletion mutants of S. cerevisiae because they exhibit greater sensitivity to pheromones (2). The SST1 gene, whose standard name is BAR1, encodes an aspartyl protease secreted into the periplasmic space of MATa cells that inactivates α-factor; thus, its action is mating type specific, as shown below. SST2 encodes a G-protein regulator that is required to prevent receptor-independent signaling of the mating pathway. We constructed sst2 mutants (deletion of CAGL0H00374g) in C. glabrata cells of both mating types to check for increased sensitivity.
Null sst2 mutants of S. cerevisiae shmoo constitutively, even in the absence of cells of the opposite mating type, and this results in a longer generation time than that of the wild type. We observed no constitutive shmooing of sst2 mutants of C. glabrata grown in YPD or SC medium.
The effect of synthetic pheromones on C. glabrata and S. cerevisiae cells is shown in Fig. 5. Drops of synthetic pheromones were put on minimal-medium agar plates, and cells were spread after absorption. As shown, form A of the C. glabrata α-factor is active on wild-type, sst1, and sst2 MATa S. cerevisiae cells. Form B is active only on the hypersensitive S. cerevisiae sst2 cells. Neither form is active on either wild-type or sst2 C. glabrata MATa cells. The synthetic a-factor deduced from the genome of C. glabrata has no activity on wild-type S. cerevisiae MATα cells or on sst1 cells but has a very strong effect on sst2 cells. It has no observable activity on either wild-type or sst2 C. glabrata MATα cells. The strain of C. glabrata with HMRa deleted does not respond to pheromones any more than the original CBS138 strain. Thus, C. glabrata cells are potentially able to produce active pheromones to which S. cerevisiae cells are sensitive but are themselves insensitive to them. Interspecific sensitivity to α-factor between S. cerevisiae, Saccharomyces kluyveri, and Saccharomyces exiguus has been described previously (14, 18), but the peptide sequences of the α-factors from C. glabrata are more diverged from S. cerevisiae than are the α-factors from S. kluyveri and S. exiguus (5 to 6 amino acids conserved out of 13, versus 7 to 9 amino acids conserved out of 13 for S. kluyveri and S. exiguus). Despite this divergence, the sensitivity of S. cerevisiae to α-factor from C. glabrata is mediated by the Ste2 receptor, as ste2 mutants of S. cerevisiae do not respond to it or to α-factor from S. cerevisiae (not shown). We then asked whether C. glabrata cells actually produce pheromones, using S. cerevisiae as a tester.

FIG. 2. qRT-PCR experiments with several genes related to mating. Histograms of transcript quantity per cell relative to the quantity of the actin transcript (fixed at 10 transcripts per cell [21]) in C. glabrata (A) and S. cerevisiae (B). Names of genes are indicated below the histograms; for C. glabrata, CgACT1 is CAGL0K12694g, CgSTE2 is CAGL0K12430g, CgSTE3 is CAGL0M08184g, CgHO is CAGL0G05423g, Cga1 is CAGL0E00341g, Cgα1 is CAGL0B01243g, and Cgα2 is CAGL0B01265g. The y axis is in logarithmic scale. In each case, three different strains were analyzed as indicated in the boxes. ND, not detectable (a qRT-PCR experiment was performed, but since there is no corresponding sequence in the genome, no product was detected, as expected).
Pheromone production by C. glabrata. To test pheromone production, C. glabrata and S. cerevisiae cell lawns were plated on top of drops of filtered supernatants from late-log-phase cultures of C. glabrata and S. cerevisiae strains, as described above. The results (Fig. 6A) show that S. cerevisiae sst2 mutants are sensitive to supernatants of cultures from S. cerevisiae cells of the opposite mating type. Owing to the limited concentration of pheromones in the supernatant, no effect is observed on wild-type cells. In contrast, no supernatant from C. glabrata has a mating type-specific inhibitory effect on wild-type or sst2 mutants of S. cerevisiae. However, this assay reveals an inhibitory effect of the CBS138 supernatant on all cell types (see below). Figure 6B shows that C. glabrata cells are insensitive to the culture supernatants of both S. cerevisiae MATa and MATα strains and of C. glabrata MATa strains (BG2). The inhibition of the growth of MATa cells by the C. glabrata MATα CBS138 supernatant cannot be taken as a mating type-specific effect, as this supernatant inhibits the growth of C. glabrata cells of both mating types (not shown) (but not of CBS138 itself or derived strains) and of S. cerevisiae cells of both mating types, as explained above. Thus, strain CBS138 produces an inhibitor of the growth of other strains, like the "killer" strains of other yeast species (17). This effect is observed with supernatants of rich-medium cultures but not of minimal-medium cultures (not shown).
We then extended this growth-inhibition assay for detecting pheromone production to a collection of C. glabrata clinical isolates that we characterized independently, of which 80% are MATa (C. Hennequin, H. Muller, B. Dujon, and C. Fairhead, unpublished data). This collection contains 182 isolates that were grown in 96-well plates. Supernatants were filtered and spotted before sst2 S. cerevisiae tester cells were plated. No isolate was found to produce pheromones specifically inhibiting the MATa or MATα sst2 strains of S. cerevisiae, but three unrelated strains (two MATa strains and one MATα strain) were found to express the same general inhibitor as CBS138 (not shown).
DISCUSSION
The reason why sexual reproduction is so common in living species remains debated, and many species in which clonal propagation is possible get by without it. This is particularly true in the fungal kingdom, where lack of an observed sexual stage is often associated with pathogenic interactions with humans or plants or with obligatory symbiosis with plant species. In this work we have examined the reasons why no teleomorph has been observed in the hemiascomycete C. glabrata despite the presence of genes homologous to those known to be involved in mating (7,26,31). We have observed several differences in the mating pathway from that of S. cerevisiae.
First, our results show that α1 and α2 are expressed in a mating type-specific manner, in contrast to a1, which is expressed in both mating types because of a lack of silencing at HMRa, interestingly situated on a different chromosome than MAT and HMLα. Sir1p is responsible for silencing the HML/HMR loci in S. cerevisiae; thus, perhaps the absence of a SIR1 homolog in C. glabrata (7) explains this phenomenon. Nonetheless, HMLα is silenced in MATa strains, in which we do not observe expression of the α genes.
Even though the expression of the a1 gene is not subject to mating type specificity, our experiments suggest that a1 is not functional in MATα haploid cells. Moreover, the expression of a functional a1 gene in MATα cells would logically lead them to behave as diploid cells, with the risk of their attempting meiosis, a possibly fatal event for a haploid cell. Even though we do not know which conditions could induce meiosis in C. glabrata, it is likely that they would be met at some point. This hypothesis is in accordance with the fact that only haploid cells are ever isolated, so that if diploid cells are formed, they must sporulate readily. In S. cerevisiae diploid cells, the coexpression of a1 and α2 leads to the repression of α1 in addition to other haploid-specific genes. The simultaneous expression of the three transcription factors a1, α1, and α2 in C. glabrata MATα cells thus leads to the hypothesis that a1 is not functional. Additionally, the observation that the HMRaΔ MATα strain still expresses α1, STE2, and STE3, like the wild-type MATα strain, strengthens this hypothesis. The fact that splicing of the a1 transcript is detectable only in MATa cells can explain these observations. In S. cerevisiae, it has been shown that the unspliced form of a1 is not functional in diploid cells (22). In haploid cells, there is no known role for a1, the MATa phenotype being the default one. Thus, a partial failure to splice the a1 transcript in MATa cells is not expected to have any effect. In MATα cells, the lack of splicing of the a1 transcript may be necessary to compensate for the leakage of expression from HMR by functionally inactivating the transcript of the rogue gene. Alternatively, complete silencing may not be necessary because a1 expression is regulated in some other way, such as by splicing.
Since there is no sequence difference between MATa1 and HMRa1 that would explain differences in splicing efficacy (6,26), the differential splicing must originate either from a general splicing defect in MATα strains or from a mating type-specific mechanism, with both alternatives leading to the inactivation of a1.
The pheromone receptor genes STE2 and STE3 (CAGL0K12430g and CAGL0M08184g) are expressed in both cell types in C. glabrata, while in S. cerevisiae, STE3 is highly regulated and STE2 less tightly regulated. In S. cerevisiae MATα cells, STE3 is activated by α1. In C. glabrata, the basal level observed in MATa cells is higher than in S. cerevisiae (100-fold, if we assume that the actin transcript is expressed at similar levels in both species) and there is no activation by α1. In S. cerevisiae MATα cells, STE2 is repressed by the binding of Mcm1 and α2 on the a-specific gene operator. Putative binding sites for Mcm1 and α2 are found upstream of the homolog of STE2 in C. glabrata, but their spacing is different from S. cerevisiae's. This suggests that the expression of this gene could be mating type specific under some as-yet-undefined conditions (28; B. Tuch, personal communication). The expression of both receptors in C. glabrata does not make the cells sensitive to both pheromones, as our tests with artificial pheromones show. Furthermore, there is no pheromone production detectable in standard laboratory culture in our primitive test. Nonetheless, pheromones synthesized using the genome sequences of C. glabrata are active on S. cerevisiae cells. The genes encoding factors that modify the pheromones are also found in the genome of C. glabrata. Thus, C. glabrata has retained its ability to encode active pheromones but does not respond to them, possibly because the signal cascade leading to the G1-S cell cycle arrest in S. cerevisiae does not properly operate in C. glabrata. This is consistent with the absence of the shmooing of sst2 mutants of C. glabrata. Alternative explanations of the absence of a pheromone response in C. glabrata are that pheromones are expressed under some unknown conditions and that some additional regulation of the signal cascade exists in C. glabrata, allowing for a mating type-specific response to the opposite mating pheromone.
We also show that the HO gene is transcribed in C. glabrata. In S. cerevisiae, the HO endonuclease drives mating type switching by initiating a double-strand break at the MAT locus. This mechanism could also operate in C. glabrata, since sites (7) that match a previously published consensus sequence for the endonuclease from S. cerevisiae (TNNNYGCG/ANC/AANT/G) can be identified (23). Indeed, HO from S. cerevisiae cuts the MATa site but not the MATα site of C. glabrata (our unpublished results). In vivo mating type switching events have been reported to occur in C. glabrata (1,16), always from MATa to MATα. In Kluyveromyces lactis, where the loci are organized as in C. glabrata (7), switching is also more likely from a to α than the other way around (11). These observations lead to two hypotheses. The first is that HO from C. glabrata is active and has the same specificity as the one from S. cerevisiae, so that the endonuclease is able to recognize and cut only MATa sites, not MATα sites. In this case, activity must be infrequent to account for the three- to fourfold predominance of MATa strains (26; C. Hennequin, H. Muller, B. Dujon, and C. Fairhead, unpublished data). This predominance could also be explained by a better fitness of MATa strains than that of MATα strains, a possibility that needs to be tested. The second hypothesis is that HO is inactive in C. glabrata, and switching from MATa to MATα is an uncontrolled event that is more likely than MATα-to-MATa switching because of the chromosomal configuration of the cassettes. In the latter case, there would be no active control of switching and therefore potential mating, but with the first hypothesis, the possibility of a yet-unproved active control of switching is open, perhaps in the human body, as suggested previously (25).
In conclusion, although our experiments to mate C. glabrata have failed, the facts that (i) so many genes of the mating pathway remain in the genome, (ii) the modes of splicing of the a1 transcript differ between mating types (this study), and (iii) such differential splicing would have no cause to arise unless some mating/meiosis pathways were at least partially active indicate that it is possible that C. glabrata cells mate under some still-to-be-discovered conditions, such as in the human body. This could be followed by a diploid phase that may be transient, allowing for meiotic or pseudo-meiotic recombination to occur, as in C. albicans (25).
"year": 2008,
"sha1": "bc49c8f2c01f0ba6372cbb33d6abeaaf44a40291",
"oa_license": null,
"oa_url": "https://doi.org/10.1128/ec.00456-07",
"oa_status": "GOLD",
"pdf_src": "Highwire",
"pdf_hash": "c12cb0adfe0f1d22cced3f1cb85d55347dd6a90d",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Developed steady-state response for a new hybrid damper mounted on structures
Coulomb friction is a mechanical means of diminishing structural responses during excitations. In the case of severe oscillations, however, supplementary mechanisms are employed alongside friction to mitigate the destructive effects of vibrations in structures. The main goal of this research is therefore to develop a new Hybrid System (HS), a parallel combination of Viscous Damping (VD) and Coulomb friction, for structures subjected to dynamic loads. To achieve this goal, the effect of the viscous damper is embedded in the equation of motion proposed by Den Hartog for a Single-Degree-of-Freedom (SDOF) Coulomb system, which has been extensively used for the past few decades. In the numerical example considered in this study, implementing the proposed HDM decreased the maximum displacement by 1% to 98% for different amounts of force amplitude and viscous damping ratio. Applying the proposed HDM also increased the time lag by up to about 24% for frequency ratios greater than 1. The hybridized system developed in this study can be utilised as a new generation of Tuned Mass Damper (TMD) to improve energy-dissipating efficiency under severe excitations.
Introduction
Various attempts have been made to understand the friction mechanism of sliding objects, yet most of them were unsuccessful [1]. The friction force acts as an energy dissipator in any type of industrial machinery or civil structure where relative motion between parts appears [2,3]. In structures, energy dissipation can occur through the structural material or component friction, which is classified as inherent damping. In the case of severe excitations, however, the inherent damping is not capable of reducing the structural response to an acceptable range. Therefore, supplementary damping must be utilised besides the inherent damping to overcome this shortcoming. Supplementary dampers are artificial energy dissipators that increase the total structural damping, reduce the structural response to external vibrations and contribute to the structure's resistance against severe translations. Various damping mechanisms such as viscous dampers [4], yielding dampers, magnetic dampers, tuned mass dampers, friction dampers [5] and tuned liquid dampers [6][7][8] are frequently applied in newly built structures. In addition, base isolators are a recently introduced technique to separate the main structure from the base and reduce the negative effect of base motion on the superstructure [9][10][11]. Whatever the choice of damping system, friction performs the main role in diminishing structural responses through energy dissipation, and its effect on dynamic systems therefore needs to be studied carefully.
The Coulomb system is a conventional mechanical model in which a mass is connected to a support by a massless spring, and kinetic friction is applied to the mass while it moves on a rough surface. The principal issue in the oscillatory movement of systems with friction is avoiding the sticking phase. To overcome this problem, it is crucial to derive a new steady-state solution of the oscillatory system and set the force amplitude ratio such that the system has a zero-duration sticking phase. However, owing to the non-linearity of the equation of motion and the bilateral behaviour of the Coulomb friction force, finding the steady-state response of the system is rather complicated. In 1931, Den Hartog [12] carried out early research to find a response for Coulomb systems. Under a few assumptions, he succeeded in obtaining a useful steady-state response for an SDOF vibratory system under an external harmonic load and a constant frictional force.
The simplicity of his proposed method is due not only to its basic assumptions, which approximate reality well, but also to its simple calculation steps. In Den Hartog's study, the formulation of the Maximum Displacement (MD) was derived, and the design procedure was continued with the aid of static methods, which are popular amongst civil engineers.
In addition, complementary studies have been carried out to find the response of dynamic systems by applying a variety of mathematical techniques; for instance, time-domain numerical integration methods [13] and the phase-plane method [12,14,15] were used in several studies to investigate the behaviour of dynamic models equipped with energy dissipators and supplementary dampers. In this regard, H.-K. Hong et al. (2000) [5] calculated the Maximum Velocity (MV) besides the Maximum Displacement (MD) in a Coulomb system subjected to lateral loading. It is noticeable that H.-K. Hong et al. also proposed a direct solution for the steady-state response of SDOF systems in the presence of Coulomb friction [16]; however, because of its complicated nature, it is not frequently used by structural designers. In more recent research, D. J. Riddoch et al. (2020) [17] found the structural parameters of an SDOF oscillatory system subjected to friction and base motion based on Den Hartog's formulations [12]. Furthermore, the multi-mass system in the presence of friction and subjected to harmonic loads was investigated by L. Marino et al. (2021) [18], and its steady-state response was calculated as well. Finally, M. Ziaee et al. (2022) carried out a study proposing a new steady-state solution for hybrid systems [19]; however, that method is replaced by simpler calculations in the present research and can be handled more easily. The present research also amends Den Hartog's formulation, since in real structures there is always some inherent damping, which is ignored by Den Hartog's method.
As discussed above, prior investigations concerned Coulomb systems and the necessity of finding an acceptable steady-state response for them. In the present work, however, the main focus is on developing Den Hartog's method into a dynamic hybridized system in the presence of both Coulomb friction and a viscous damper. As the previously investigated damping systems were ineffective in mitigating the response of structures under lateral loading, this research gives structural designers the opportunity to design structures equipped with two energy-dissipation mechanisms [20], which can ameliorate their performance under severe excitations. The system proposed in this work is also comparable to TMDs, as it shows the same behaviour and can be utilised as a new generation of TMD in real structures, in comparison with other studies carried out [21,22]. TMDs and their effectiveness are widely investigated by researchers [23,24], as in the case of severe excitations they can be optimized to suppress structural vibrations [25][26][27]. Moreover, the HDM demonstrates more resistance against mistuning than conventional TMDs [28].
Therefore, Den Hartog's method is extended to the proposed hybridized dynamic system, as it contains fewer calculation steps. By embedding the effect of the Viscous Damper (VD) in the main equation of motion, the MD and its corresponding time lag can be found easily. Besides finding the system parameters, additional effort is made to calculate the steady-state response of the developed hybrid system. Finally, to avoid the sticking phase in the calculations, a boundary limit for α is determined, which can be considered a handy design criterion for structural designers.
Developing a new hybrid damping mechanism
Nowadays, structures are frequently subjected to dynamic loads, which in the long term have harmful effects on structural stability and greatly reduce the operational life of structures. Consequently, to protect a structure against an applied excitation, it is necessary to dissipate the effect of the vibration. The inherent damping therefore performs a key role in exhausting the excess energy introduced into structures by external loads. However, in the case of severe excitations, the inherent damping is not sufficient to diminish the structural oscillations, so the efficiency of vibration control must be improved using a supplementary damping system such as dampers.
Therefore, as a solution, an HDM is developed in this research by implementing and combining supplementary dampers and Coulomb friction in the structure. The proposed HDM is capable of increasing the amount of energy dissipation and consequently reduces the structural response significantly.
A Single-Degree-of-Freedom (SDOF) system is considered, owing to its simplicity, to demonstrate the efficiency of the proposed HDM for structures and to develop the formulations for the structural response under the applied excitations.
To commence the formulation of the SDOF system's parameters, the schematic model of the proposed hybrid system is sketched in Fig 1, where P_0 and ω_f denote the amplitude and angular frequency of the externally applied load, respectively, and t denotes the elapsed time. The other parameters are m, c and k, representing the mass of the system, the damping coefficient and the stiffness of the massless spring, respectively. Assuming that the SDOF structure oscillates on a horizontal surface, the vertical load component applied to the hybrid system, perpendicular to the direction of motion (N), equals the product of the mass and the gravitational acceleration. Multiplying N by the kinetic friction coefficient μ_k gives f_d, known as the kinetic friction force or Coulomb friction. In this case, the governing equation of motion for the proposed SDOF system with the Hybrid Damping Mechanism (HDM) takes the form

m ẍ + c ẋ + k x + f_d sgn(ẋ) = P_0 sin(ω_f t)

in which ẋ and ẍ designate the velocity and acceleration, i.e. the first and second derivatives of the displacement with respect to time. To formulate the steady-state response of the proposed SDOF system, a zero-duration sticking cycle of motion is considered, and the phase-plane curve (x, ẋ) is presumed to have a symmetrical form, as illustrated in Fig 2. Therefore, it is sufficient to consider only one half of the curve, for instance the lower branch, similarly to Den Hartog's assumption [2].
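The effect of the Coulomb term in this governing equation can be explored numerically. The sketch below is an illustration only, not the authors' code: m, f_d and ω_f are taken from the paper's numerical example, while the stiffness k is an assumed value chosen so that ω_n = ω_f (resonance) and P_0 = 30,000 N is an assumed amplitude giving α = P_0/f_d = 2. It integrates the equation with a classical fourth-order Runge-Kutta scheme and compares the steady amplitude with and without the Coulomb term:

```python
import math

def hybrid_sdof_amplitude(m, c, k, f_d, P0, w_f, t_end=20.0, dt=1e-3):
    """Integrate m*x'' + c*x' + k*x + f_d*sgn(x') = P0*sin(w_f*t) by RK4 and
    return the largest |x| over the second half of the run, a rough estimate
    of the steady-state amplitude once the transient has decayed."""
    def acc(t, x, v):
        # Coulomb friction opposes the instantaneous velocity (zero at rest)
        fric = f_d * math.copysign(1.0, v) if v != 0.0 else 0.0
        return (P0 * math.sin(w_f * t) - c * v - k * x - fric) / m

    x, v, t, xs = 0.0, 0.0, 0.0, []
    while t < t_end:
        k1x, k1v = v, acc(t, x, v)
        k2x, k2v = v + 0.5 * dt * k1v, acc(t + 0.5 * dt, x + 0.5 * dt * k1x, v + 0.5 * dt * k1v)
        k3x, k3v = v + 0.5 * dt * k2v, acc(t + 0.5 * dt, x + 0.5 * dt * k2x, v + 0.5 * dt * k2v)
        k4x, k4v = v + dt * k3v, acc(t + dt, x + dt * k3x, v + dt * k3v)
        x += dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6.0
        v += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0
        t += dt
        xs.append(x)
    return max(abs(u) for u in xs[len(xs) // 2:])

m, w_f = 50_000.0, 2.0 * math.pi          # from the paper's numerical example
k = m * w_f ** 2                          # assumed: tuned so that w_n = w_f
c = 2.0 * 0.05 * math.sqrt(k * m)         # assumed 5% viscous damping ratio
amp_hybrid = hybrid_sdof_amplitude(m, c, k, 15_000.0, 30_000.0, w_f)
amp_viscous = hybrid_sdof_amplitude(m, c, k, 0.0, 30_000.0, w_f)
```

With these assumed values the friction term substantially reduces the resonant amplitude relative to the purely viscous case, in line with the displacement reductions the paper reports for the HDM.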
To begin the formulation, the steady-state solution of the equation of motion is supposed to be similar to one presented by Den Hartog in 1931 [2]. However, to develop the results for the proposed hybrid system the effect of VD is embedded in the principal equation of the motion of the SDOF system So, after the differential equation of the motion is solved for the proposed SDOF hybrid system and the steady-state response is found. The new form of steady-state response passes In which the time steps are defined using the below formula: By applying the boundary conditions to the derived steady-state solution the finalised form of the steady-state response of the hybridized SDOF system can be obtained. This derived solution is similar to the result achieved by Den Hartog in 1931 [2]. However, there are some differences between Den Hartog's solution and the proposed solution in this research which are attributed to the presence of the viscous damper in the SDOF system.
The developed damping mechanism can also be utilised in structures as a Tuned Mass Damper (TMD) and is capable of resolving the main drawback of TMDs, namely the large amplitude of motion of the tuned auxiliary mass.
Shaping the equation of motion of the hybrid system for the lower branch
In which

A = (P_0/k) / √[(1 − β²)² + (2ξβ)²];  B = A √(…)

where B can be expressed as the static displacement of the friction force, β is the frequency ratio, ω_n is the frequency of the main structure, ω_d is the frequency of the HDS, ξ represents the damping ratio (the total damping, i.e. inherent damping plus supplementary VD), t is time, ϕ is the phase shift, and a and b are the constant parameters. The first derivative of the displacement with respect to time is then taken to find the velocity. By applying the initial conditions to the velocity equation and making some new assumptions, four equations with four unknowns are obtained; solving these four equations simultaneously leads to the final results. Knowing that normally ξ ≪ 1, it follows that ξ² ≈ 0. Introducing the foregoing boundary conditions into Eqs (4) and (7), taking π₁ = π/β, and substituting t₄ = t₂ + π/ω_f in Eq (8) yields the governing set of equations. Considering these equations and solving them together results in the basic response parameters of the proposed hybrid SDOF system.
Finding the maximum displacement and its time lag for the structures equipped with HDM
To proceed with the design for dynamic loads, the MD of the single-degree-of-freedom system equipped with the new hybrid damper must be formulated. As mentioned earlier, the equations are established for the lower branch of Fig 2. Rearranging Eqs (10) and (11) results in Eqs (14) and (15), and confronting Eqs (14) and (15) using trigonometric rules results in an important equation, Eq (16), that contributes substantially to the final results; Eq (17) is an equivalent form. Following the same steps for Eqs (12) and (13) makes the equation below reachable:

A β sin(ω_f t₂ + ϕ) = e^(−ξω_n t₂)(a cos ω_n t₂ − b sin ω_n t₂) − ξ e^(−ξω_n t₂)(a sin ω_n t₂ + b cos ω_n t₂)  (18)

Simplifying Eq (18) leads to Eq (19). Eq (13) is then recalled; changing the sides of this equation yields Eq (20), and reshaping Eq (20) transforms it into Eq (21). Employing Eqs (19) and (21) simultaneously yields Eq (22). Referring to Eqs (14) and (15) and summing their sides reveals a further relation, which can also be stated in a simplified form. Introducing Eqs (17) and (22) into Eq (25) and performing mathematical simplifications yields the fundamental equation, given in two equivalent forms, Eqs (26) and (27), from which the subsequent formulas for the hybrid SDOF system are derived. On the other hand, adding Eq (19) to Eq (21) gives Eq (28), and using Eq (23) and substituting it into Eq (28) yields Eq (29).
To further the process, some trigonometric rules are required. Substituting Eqs (29) and (30) into the basic trigonometric identity, Eq (31), makes Eq (32) achievable. New formulas, Eqs (33) and (34), then ensue by replacing Eq (32) in Eqs (29) and (30); for instance,

sin(ω_n t₂ + π₁/2) = (a − ξb) / √(…)  (33)

Eqs (32), (33) and (34) can then convert Eq (16) into Eq (35). Using Eq (35) and replacing it into Eq (32), a new formula, Eq (36), involving cos(π₁/2 − ϕ), is obtained. The recently derived equation is an important benchmark for calculating the maximum displacement of the proposed hybrid mechanism (Δ₀) and its corresponding time (t₂). To calculate the maximum displacement of the hybridized SDOF system (Δ₀), it is necessary to square both sides of Eqs (27) and (35).
Summing the squared equations then gives the maximum displacement of the developed hybrid system, Eq (37), which can be rearranged into the alternative form of Eq (38). Dividing Eq (38) by B and by A (the displacement due to the Coulomb friction force and the dynamic displacement of the hybrid SDOF system, respectively) and using Eq (5), two new dimensionless terms are introduced, Eqs (39) and (40). Also, from Eqs (27) and (35), the time corresponding to the maximum displacement (Δ₀) can be described in two equivalent forms. As seen above, the maximum displacement of the hybrid SDOF system described in Eq (38) is transformed into two dimensionless forms, referred to as the Dynamic Amplification Factor (DAF) for an SDOF system equipped with the hybrid damper under external loading, as expressed in Eqs (39) and (40). From these equations, it is seen that the ratios of the maximum displacement of the proposed hybridized system (Δ₀) to the displacement caused by the Coulomb friction and to the dynamic displacement of the SDOF system with only the VD change with variation in the dimensionless parameters α, β and ξ.
The main aim was to focus on the parameters that affect the dynamic amplification factor. In a conventional TMD system, the damping ratio and the frequency ratio are the most effective ones. In the developed HDM, however, besides the aforementioned parameters, the Coulomb friction plays an important role in the fluctuation of the dynamic amplification factor. The effectiveness of the Coulomb friction varies with the mass: any change in the mass of the system results in new amounts of the friction (resistant) force and, subsequently, of the force amplitude ratio. Thus, the damping ratio, the frequency ratio and the mass of the structure (needed to find the friction force and the force amplitude ratio, respectively) are the parameters considered to evaluate the behaviour of the newly developed damping system. Accordingly, as an illustration of the change in the displacement amplification factor (Δ₀/A) with respect to α, β and ξ, a particular numerical example is presented. Taking the mass of the SDOF structure (m) as 50,000 kg, the Coulomb friction force f_d = 15,000 N and the external force frequency ω_f = 2π rad/s, the DAF graphs for the SDOF system fortified with the HDM for various amounts of α, β and ξ are presented in Fig 3. As observed in the graphs, the variation of the DAF for the SDOF system with the new hybrid damping mechanism is classified into three main ranges of the frequency ratio (β), as explained below.

i. Frequency ratio 0 < β < 0.5

As perceived from Fig 3, in this β range the combined effect of ξ and α on the reduction of the Displacement Amplification Factor (DAF) is not tangible; this effect is more observable and rises after β = 0.5. On the other hand, in this range of frequency ratios, the curves related to different hybrid damping ratios have a horizontal intersection with the zero-DAF line.
PLOS ONE

It is interpreted that the SDOF dynamic system equipped with the HDM confronts consecutive sticking phases during some particular frequency-ratio intervals. It is evident from the graphs that, as the force amplitude ratio (α) decreases, the length of the sticking intervals increases; this means that by reducing the force amplitude ratio the Coulomb friction becomes more effective in comparison with the external lateral load and tends to control the system. However, the sticking phase is not favourable for structural designers, and therefore this range of frequency ratios is not considered in dynamic designs.
ii. Frequency ratio 0.5 < β < inflection point
By inflection point, it is meant that the graphs change their trends. Normally, for the hybridized SDOF system, this turning point occurs after β = 1. In this particular example, for α equal to 2 the inflection happens at a frequency ratio of 1.3; for α values of 3.5 and 5, this point is shifted back to frequency ratios of 1.15 and 1.05, respectively. It is therefore concluded that α plays a significant role in changing the position of the inflection point.
The tangible range for evaluating the effect of the force amplitude ratio is 2 to 5. For values less than 2, the friction force approaches the amplitude of the external load, and there is a possibility that the system confronts the sticking phase. For ratios greater than 5, the effect of the friction force is negligible and the developed HDM behaves as a non-frictional system. Therefore, for better illustration of the plots, the range of 2 to 5 is chosen.
Considering frequency ratios between 0.5 and the inflection point, it is evident that both ξ and α have a noticeable effect on the reduction of the Displacement Amplification Factor (DAF). Increasing ξ and α results in higher displacement responses due to the external loading, and consequently, for a constant Coulomb friction force, the DAF increases as well. Thus, in this case, the behaviour of the SDOF system subjected to harmonic loads with Coulomb friction resembles that of SDOF systems without friction, and the DAF is almost 1 in this range of β.
However, by magnifying the effect of the friction force, the effect of the proposed hybrid system becomes more perceivable, and the Displacement Amplification Factor (DAF) decreases accordingly, falling below 1. As the frequency ratio approaches 1, the frequency of the external load becomes close to the natural frequency of the proposed SDOF system and, technically, resonance occurs. In this case, the minimum ξ needed to control the structural response depends on α since, as mentioned earlier, reducing α engenders a reduction in the DAF.
For instance, in this particular example, a 5% damping ratio is required to keep the DAF at 0.6 for an α ratio of 2; to obtain the same DAF, it is necessary to increase the damping ratio to 0.1 and 0.15 for force amplitude ratios of 3.5 and 5, respectively. The use of the proposed HDM is therefore meaningful, as the inherent damping ratio is about 2% for steel and 5% for concrete structures and is consequently not sufficient to curb the structural displacements during severe excitations.
If the SDOF oscillatory system is not equipped with a supplementary damping system such as the one proposed in this research, then, as the frequency ratio stands near 1 (the resonance case), the structure experiences uncontrollable drifts, which contribute to fatigue in structural components, followed by damage and collapse of the structure. Applying the developed HDM to the SDOF system decreases the maximum displacement by between 1% and 98% for different α values and VD damping ratios.
iii. Frequency ratio β > inflection point
Also from Fig 3, it is observed that for frequency ratios (β) greater than the inflection point, increasing the damping ratio of the hybrid system (ξ) neutralizes the positive effect of the increment in the force amplitude ratio. In other words, increasing the damping ratio beyond the inflection point leads to a larger DAF. The inflection point is therefore called the economic design point, and during the design procedure the natural frequency of the main structure can be set such that the frequency ratio becomes equal to the inflection-point ratio. Indeed, the DAF at the inflection point is still greater than the DAF at resonance, but since the real displacement in the structure is smaller than the displacement in the resonance case, it is referred to as the economic design point. It is instructive to recall that the DAF (Δ₀/A) shows the amount of reduction in the displacement of the hybrid SDOF system in comparison with the displacement of non-hybrid systems. The time lag of the maximum displacement is dealt with next. From Eq (42) it is observable that the time lag (ω_f t₂) is a function of ϕ, α, β and ξ; however, by ignoring the effect of the phase shift (ϕ), it becomes a function of only three dimensionless parameters (α, β and ξ). Multiplying Eq (42) by ω_f and performing some simplification yields a statement in which

f(α, β, ξ) = arccos(−√(…))

Assuming the numerical example introduced earlier, the variation of the time lag for the hybrid SDOF system for various amounts of the frequency ratio (β), damping ratio (ξ) and force amplitude ratio (α) is illustrated in Fig 4, from which the following findings can be derived:

1. Utilizing the Hybrid Damping Mechanism (HDM) contributes to decreasing the time lag of the Maximum Displacement (MD) for frequency ratios in the range of 0.5 to 1. This can be interpreted as follows: owing to the increase in the Coulomb friction force, the velocity of the hybridized SDOF system decreases. The maximum displacement decreases as well, yet at a lesser rate than the velocity; therefore, the time lag shows a descending trend. In this particular example, installing the proposed HDM on an SDOF system resulted in a decrease in the time lag in the range of 0 to 36%.
2. Implementing the proposed HDM on the SDOF dynamic system increases the time lag for frequency ratios greater than 1. Although applying the HDM decreases the maximum displacement, the velocity decreases at a lesser rate, and consequently the time lag rises. In the previous example, the amount of the increment is between 0 and 24%.
3. A higher damping ratio (ξ) results in a lower or greater time lag depending on the frequency ratio; however, the effect of the damping ratio is more sensible for β between 0.85 and 1.15.
4. Without any change in the amplitude of the external force, as the Coulomb friction rises, the force amplitude ratio decreases and the friction therefore controls the hybrid SDOF system. As a consequence, the length of the sticking phases in the graphs (the horizontal intersection of the graphs with the frequency-ratio axis) increases with the increase in the friction force.

5. For all amounts of α, the time lag lies between 0 and 4.64. This implies that the total range of the time lag is the same for the various α values; however, by decreasing α, the SDOF system confronts more sticking intervals.
6. The minimum time lag occurs at frequency ratios close to resonance (β = 1). At resonance the mass attains its maximum energy and hence its peak velocity, so the time lag corresponding to the maximum displacement is at its minimum.
7. In the sticking phase the hybrid SDOF system stops moving, so neither the maximum displacement nor the time lag can be defined, and no values are plotted for these intervals.
8. For all damping ratios, sudden sharp changes are observed at β = 0.2. Because the colour of the ξ = 0.1 curve dominates these graphs, it appears to happen only for ξ = 0.1. In fact, for frequency ratios between 0 and 0.4 the amplification factor fluctuates because the system frequently encounters stick and release conditions due to the presence of Coulomb friction. This sudden increment is likewise a result of passing through the sticking phase and the subsequent release state.
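Returning to the inflection-point behaviour noted for Fig 3, the economic design point can be viewed numerically as the frequency ratio at which the DAF curve for a higher damping ratio crosses above the curve for a lower one. The sketch below is a hypothetical helper for locating that crossover from sampled curves; the DAF arrays are placeholder data, not the paper's closed-form solution.

```python
# Hypothetical sketch: locate the frequency ratio at which two sampled
# DAF curves (for a low and a high damping ratio) cross. Beyond this
# point the higher-damping curve lies above the lower one, which is the
# behaviour described in the text for β greater than the inflection point.

def crossover_beta(betas, daf_low_xi, daf_high_xi):
    """Return the first beta where the high-damping DAF exceeds the
    low-damping DAF (linear interpolation between samples)."""
    for i in range(1, len(betas)):
        d_prev = daf_high_xi[i - 1] - daf_low_xi[i - 1]
        d_curr = daf_high_xi[i] - daf_low_xi[i]
        if d_prev <= 0 < d_curr:            # sign change: curves crossed
            # linear interpolation of the zero of (high - low)
            t = -d_prev / (d_curr - d_prev)
            return betas[i - 1] + t * (betas[i] - betas[i - 1])
    return None  # no crossover in the sampled range

# toy data: the curves cross at beta = 1.3
betas = [1.0, 1.1, 1.2, 1.3, 1.4, 1.5]
daf_lo = [3.0, 2.5, 2.0, 1.5, 1.0, 0.5]
daf_hi = [2.0, 1.8, 1.7, 1.5, 1.3, 1.2]
print(crossover_beta(betas, daf_lo, daf_hi))  # 1.3
```

With the closed-form DAF of the paper, the same scan over a fine β grid would recover the economic design frequency ratio directly.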
Obtaining the coefficients of the steady-state response of the hybrid system (a and b)
In this section, the two coefficients of the steady-state response of the hybrid SDOF system are derived; with these coefficients, a complete steady-state solution for the SDOF system equipped with the HDM can be presented. Substituting Eq (27) into Eqs (14) and (15) results in Eqs (45) and (46) with two unknowns, which are then solved for a and b:

e^(−ξω_n t_2) (a sin(ω_n t_2) + b cos(ω_n t_2)) = ξB tan(…)  (45)

e^(−ξω_n t_2) (a sin(ω_n t_2 + p_1) + b cos(ω_n t_2 + p_1)) = −ξB tan(…)  (46)

From Eq (45), b is calculated and expressed in terms of a:

b = [e^(+ξω_n t_2) (ξB tan(p_1/2) − B − …) − a sin(ω_n t_2)] / cos(ω_n t_2)  (47)

and introducing Eq (47) into Eq (46) reveals the value of a:

a = e^(+ξω_n t_2) (−B sin(ω_n t_2 + p_1) + …)  (48)

From Eq (48) it is evident that a is a function of the parameters listed in Eq (49). Repeating the same steps for b leads to

b = e^(+ξω_n t_2) (−B cos(ω_n t_2 + p_1) + …)  (50)

so b is likewise a function of the five parameters (Eq 51). To examine the effect of ξ, α and β on the coefficients of the steady-state response of the hybrid system, it is assumed that ϕ = 0 and that B is a constant value.
Under the aforesaid assumptions, Eqs (49) and (51) reduce to functions of ξ, α and β alone. In what follows, the variation of the response coefficients (a and b) with the damping ratio, force amplitude ratio and frequency ratio (ξ, α and β) is illustrated.
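Eqs (45) and (46) are linear in the unknowns a and b, so they form a 2×2 linear system. The sketch below solves such a system by Cramer's rule; the coefficient values are placeholders for illustration, not the paper's actual boundary-condition terms.

```python
# Sketch only: Eqs (45)-(46) are linear in the coefficients a, b, i.e.
# they can be written as  c11*a + c12*b = r1,  c21*a + c22*b = r2.
# Cramer's rule gives the solution directly; the numerical values below
# are invented placeholders, not the paper's boundary-condition terms.
import math

def solve_2x2(c11, c12, r1, c21, c22, r2):
    det = c11 * c22 - c12 * c21
    if abs(det) < 1e-12:
        raise ValueError("singular system: boundary conditions degenerate")
    a = (r1 * c22 - r2 * c12) / det
    b = (c11 * r2 - c21 * r1) / det
    return a, b

# illustrative coefficients of the exp(-xi*wn*t2)*sin/cos form
xi, wn, t2 = 0.1, 2.0, 0.8
e = math.exp(-xi * wn * t2)
a, b = solve_2x2(e * math.sin(wn * t2), e * math.cos(wn * t2), 0.5,
                 e * math.cos(wn * t2), -e * math.sin(wn * t2), 0.2)
```

Substituting the solved (a, b) back into both equations reproduces the right-hand sides, which is a convenient check on any closed-form version of Eqs (47)-(51).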
As seen in Fig 5, the following conclusions regarding the coefficient graphs can be drawn:

a. As the force amplitude ratio (α) increases, the range of β over which the steady-state response coefficients (a and b) take values widens as well. In this particular example, the narrowest range belongs to α = 2 and lies between β = 0.5 and 1.4.

b. No values can be allocated to a and b for frequency ratios smaller than 0.5. This is attributed to the sticking phase that occurs in this frequency range: the hybrid SDOF system stops oscillating and no steady-state response is definable for the system.

c. Increasing the force amplitude ratio, owing to the corresponding decrease in Coulomb friction, results in a lower magnitude of the coefficient a; meanwhile, the coefficient b has an ascending trend and reaches 36 and 37 for force amplitude ratios of 3.5 and 5, respectively.

d. The effect of the damping ratio (ξ) on the variation of the response coefficients (a and b) is more tangible for frequency ratios in the range of 0.85 to 1.15.
Determining the minimum required force amplitude ratio to avoid sticking in SDOF systems equipped with HDM
During the derivation of the formulas for the steady-state response of the hybrid SDOF systems, the basic assumption is that the oscillatory motion has zero-duration stick phases, i.e. the back-and-forth motion will not become trapped in the sticking phase. To fulfil this condition, the amplitude of the external load must be set so that the mass constantly oscillates in the sliding phase; the corresponding inequality is given in Eq (54). From the oscillation trend, it is known that the most critical conditions occur at the instants t_2 and t_4, at which the system velocity becomes zero; if the externally applied force is then less than the restoring force exerted by the spring, the stick phase will occur. Recasting Eq (54) for time t_2 leads to Eq (55). Knowing that x(t_2) = Δ_0, and introducing Eqs (38) and (42) into Eq (55), yields Eq (56). Simplifying Eq (56) gives the boundary line for α. This boundary line assists structural engineers in avoiding sticking in their calculations and serves as a handy guideline in the dynamic analysis of SDOF systems reinforced with the HDM.
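The zero-velocity check described above can be sketched as a simple predicate: at an instant where the velocity vanishes, motion resumes only if the net driving force exceeds the Coulomb friction force. The symbols below are illustrative stand-ins, not the paper's exact Eq (54) terms.

```python
# Hypothetical sketch of the non-sticking check: at a zero-velocity
# instant the mass re-enters the sliding phase only if the net driving
# force (applied force minus spring restoring force) exceeds the
# Coulomb friction force. Variable names are illustrative assumptions.

def sticks(f_external, k, x, f_friction):
    """True if the system gets trapped in the stick phase at a
    zero-velocity instant with displacement x."""
    net_driving = abs(f_external - k * x)   # force available to restart motion
    return net_driving <= f_friction

print(sticks(f_external=10.0, k=4.0, x=2.0, f_friction=3.0))  # True: |10-8| <= 3
print(sticks(f_external=10.0, k=1.0, x=2.0, f_friction=3.0))  # False: |10-2| > 3
```

In the paper's setting this check is applied at t_2 and t_4, the instants of maximum displacement, which is what Eq (55) formalizes.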
Assuming the particular case ϕ = 0, a simpler form of Eq (56) is extracted; replacing A and B from Eq (5) then defines the force amplitude ratio boundary, Eq (59). Eq (59) is solved for different frequency and damping ratios (β and ξ) and the results are illustrated in Fig 6. As seen in Fig 6, ignoring the fluctuations of the curves in the frequency range 0 to 0.5 and setting aside the rapid changes that occur at resonance (β = 1), the overall trend of α with β is linear.
Knowing that the force amplitude ratios at frequency ratios of 0.5 and 2 are 0.6 and 8.4 respectively, the linear relation can be described as α = 5.2β − 2, so Eq (59) is converted to the simple form represented in Eq (60). By choosing the frequency ratio, and consequently a force amplitude ratio (α), greater than the value obtained from Eq (60), the design of the structure is arranged such that the vibrating system will not experience any sticking phase. For force amplitude ratios smaller than the values of Fig 6(a), the hybridized SDOF system faces sticking intervals. Even at resonance, if some viscous damping is added to the system, the external loading may no longer be sufficient to overcome the Coulomb friction; the system then stops oscillating and the HDM is no longer effective. Adding the HDM to Coulomb systems must therefore be carried out with full attention to avoid negative consequences.
For 1/α, Eq (61) can likewise be derived; in some cases using 1/α is easier, as its range lies between 0 and 0.9 in this particular example. Taking the reciprocal of both sides of Eq (60) gives 1/α ≤ 1/(5.2β − 2). Eqs (60) and (61) are the borderlines of the force amplitude ratio and its inverse, respectively, and can be used as guidelines for structural designers during the design process. The illustrated curves will aid designers in selecting the frequency ratio such that minimum displacement and the non-sticking condition are satisfied simultaneously. It is worth repeating that applying the Hybrid Damping Mechanism (HDM) to SDOF systems must be done carefully to achieve the best efficiency with the least sticking intervals.
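The approximate boundary is fixed by the two sampled points quoted in the text, (β, α) = (0.5, 0.6) and (2, 8.4), which give the line α_min(β) = 5.2β − 2. A minimal sketch of using it as a design check:

```python
# Sketch of the approximate non-sticking boundary described by the text:
# the line through the two quoted points (0.5, 0.6) and (2, 8.4) has
# slope (8.4 - 0.6)/(2 - 0.5) = 5.2 and intercept -2.

def alpha_min(beta):
    """Minimum force amplitude ratio that avoids sticking (linear fit)."""
    return 5.2 * beta - 2.0

def is_non_sticking(alpha, beta):
    """True if the chosen force amplitude ratio stays above the boundary."""
    return alpha >= alpha_min(beta)

print(round(alpha_min(0.5), 3))    # 0.6  (matches the quoted value)
print(round(alpha_min(2.0), 3))    # 8.4  (matches the quoted value)
print(is_non_sticking(5.0, 1.0))   # True: 5.0 >= 3.2
```

A designer would pick β, evaluate α_min(β), and then verify that the available loading keeps α above that line throughout the operating range.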
Conclusion
In this research, a hybrid damping mechanism for structures is introduced. To evaluate the dynamic behaviour of the proposed Hybrid Damping Mechanism (HDM) under external harmonic loading, the effect of the VDS was implemented in the principal equation of motion, and formulations for the structural response were developed for the SDOF system as a basic and simple structure.
By solving the equation of motion with the boundary conditions, the main response parameters, i.e. the Maximum Displacement (MD) and its corresponding time lag (the phase angle, in radians, elapsed by the structural mass in reaching the maximum displacement), were formulated. In addition, the coefficients of the steady-state response were calculated. A numerical example was applied to the derived formulations of the MD and its time lag. The analysis indicated that employing the HDM in the SDOF system decreases the MD by 1% to 98% for the various force amplitude and hybrid damping ratios.
For β between 0 and 0.5, the effects of ξ and α on reducing the MD or the Displacement Amplification Factor (DAF) are not tangible; these effects become more observable at β ≥ 0.5. In this frequency range the SDOF system also faces several non-zero-duration sticking phases, so the application of the HDM must be done carefully.
For frequency ratios between 0.5 and the inflection point (the point at which the DAF graphs change their trend), both α and ξ have a significant impact on diminishing the DAF. Moreover, in the pre-illustrated numerical example, increasing α shifts the position of the inflection point toward β = 1.
For frequency ratios greater than the inflection point, increasing the damping has the reverse influence on the DAF and increases it. Thus, the inflection point is the optimal design point and is referred to as the economic design frequency ratio.
Changing ξ results in a lower or greater time lag depending on the tuning frequency ratio. The impact of ξ on the time lag and on the steady-state response coefficients (a and b) is more observable for frequency ratios between 0.85 and 1.15.
Also, the boundary line of the force amplitude ratio for avoiding sticking was calculated. The exact formulation is somewhat complicated; however, it can be approximated by a linear polynomial. The derived formulas and the corresponding graphs can be utilised as guidelines for structural designers during the dynamic design procedure.
The hybrid SDOF system developed in this research can be considered a more efficient alternative to the conventional Tuned Mass Damper (TMD) system, with a high capability to decrease the amplitude of motion of the auxiliary mass, which is the main drawback of TMDs.
Applying the damping mechanism developed in this study contributes to a greater reduction in the amplitude of motion: the viscous damper dissipates vibration while the Coulomb friction generates a resistant force against the movement. Since the performance of the viscous damper is enhanced by the applied friction, the action of the proposed hybrid mechanism is especially noticeable in high-velocity excitations, where the conventional system may experience excessive motions.
Finally, the proposed HDM mechanism and the derived formulations can be expanded to Multi-Degree-of-Freedom (MDOF) systems for wider structural applications. In extending the SDOF system to MDOF, the mass corresponding to each floor is connected to the other floors through spring and damping components, so the equation of motion is expanded to the MDOF form by introducing new terms. Accordingly, the mass of each floor has its own maximum displacement and corresponding time lag. More unknown parameters therefore appear in the equations of motion, and solving the resulting set of equations for the required parameters is somewhat more complicated than for the SDOF system.
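As a hedged illustration of the MDOF extension described above, the sketch below assembles mass and stiffness matrices for a simple shear-building idealization, in which each floor mass couples to its neighbours through the storey stiffnesses so that the scalar equation of motion becomes the matrix form M x¨ + C x˙ + K x = F(t). The floor masses and storey stiffnesses are invented values; the HDM friction terms are not modelled here.

```python
# Hedged sketch of the MDOF extension: assemble mass and stiffness
# matrices for an n-storey shear-building model. Values are illustrative
# only; the paper's hybrid friction/viscous terms are not included.

def shear_building_matrices(masses, stiffnesses):
    """Assemble mass and stiffness matrices for a shear building.
    masses[i] is the i-th floor mass; stiffnesses[i] is the i-th storey
    stiffness (ground to floor 0, floor 0 to floor 1, ...)."""
    n = len(masses)
    M = [[masses[i] if i == j else 0.0 for j in range(n)] for i in range(n)]
    K = [[0.0] * n for _ in range(n)]
    for i in range(n):
        K[i][i] += stiffnesses[i]
        if i + 1 < n:
            K[i][i] += stiffnesses[i + 1]      # coupling to the floor above
            K[i][i + 1] -= stiffnesses[i + 1]
            K[i + 1][i] -= stiffnesses[i + 1]
    return M, K

# two floors: masses 2.0 and 1.0, storey stiffnesses 100.0 and 80.0
M, K = shear_building_matrices([2.0, 1.0], [100.0, 80.0])
print(K)  # [[180.0, -80.0], [-80.0, 80.0]]
```

The off-diagonal terms of K are exactly the "new terms" mentioned in the text: they couple the floors, so the equations must be solved simultaneously rather than one at a time as in the SDOF case.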
"year": 2023,
"sha1": "5d1a3ad4272381eb5fae90bbe758bf8cbbc4c100",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "5d1a3ad4272381eb5fae90bbe758bf8cbbc4c100",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Nutrient resorption tightens plant nitrogen and phosphorus coupling and decreases with sulfur deposition as mediated by interannual precipitation in a meadow
Purpose Sulfur (S) deposition as a global change issue causes worldwide soil acidification, nutrient mobilization, and marked changes in plant nutrition. Here, we investigated how S deposition affects leaf nutrient resorption and how this effect varies with yearly fluctuations in precipitation. Methods In a semiarid meadow exposed to S addition, we measured nitrogen (N), phosphorus (P) and S concentrations in green and senescent leaves of a grass and a sedge and calculated nutrient resorption efficiencies (NuRE) across two years with contrasting precipitation (13% higher and 27% lower than the long-term mean annual precipitation). Two-way ANOVAs were used to explore the main and interactive effects of S addition and sampling year on leaf nutrient concentrations and NuRE of the two dominant species, with block as a random factor. Regression models were used to quantify relationships between S addition rates and leaf nutrient characteristics, with the best curve-fitting results chosen based on the coefficient of correlation. Pearson correlation analysis was used to examine the relationships between soil nutrient availability and leaf nutrient concentrations and NuRE. All these analyses were performed using SPSS.
Introduction
Nutrient resorption is a key physiological process for senescing plants to conserve their nutrients, particularly in nutrient-poor habitats (Aerts 1996; Brant and Chen 2015; Drenovsky et al. 2019; Lü et al. 2012). Based on global estimates, about half of foliar N and P are resorbed during leaf senescence (Aerts 1996; Killingbeck 1996). The resorbed nutrients are readily available for subsequent plant growth, which makes a species less dependent on soil nutrient supply and thus weakens nutrient competition among plant species (Killingbeck 1996; van Heerwaarden et al. 2003). Moreover, this nutrient resorption process can improve plant nutrient-use efficiency, reduce nutrient loss with litterfall decomposition, and eventually increase plant fitness in nutrient-poor environments (Vergutz et al. 2012). Therefore, nutrient resorption is an essential trait that provides an alternative strategy for plants to adapt to drier and warmer climates that substantially impoverish global ecosystems (Berdugo et al. 2020; Ren et al. 2018).
Atmospheric sulfur (S) deposition is a main driver of soil acidification, which has been regarded as a global environmental issue (Sullivan and Gadd 2019; Vet et al. 2014). Despite large reductions in S deposition during the last few decades across the world (Dentener et al. 2006), chronic deposition continues to acidify terrestrial surface soils, and its legacy effects substantially influence ecosystem nutrient cycling (Xiao et al. 2020; Yang et al. 2012). Sulfur-induced soil acidification can inhibit soil nitrification and reduce the ratio of soil NO₃⁻ to NH₄⁺ by favouring NH₄⁺ accumulation, thus unbalancing the soil mineral N pool (Chen et al. 2013; Kemmitt et al. 2005; Pan et al. 2020). Furthermore, soil acidification can increase soil P availability by exchanging PO₄³⁻ from soil minerals and promoting the activity of acid phosphatase (Jaggi et al. 2005). However, direct S supply promotes soil S availability to a greater degree than that of N and P. As a result, chronic but continuous S deposition translates into asynchronous increases in soil N, P, and S availability that differentially promote plant nutrient uptake (Brown et al. 2000; Jaggi et al. 2005; Stutter et al. 2004; Wang et al. 2002).
Previous studies simulating S deposition with manipulative S addition found that S addition could enhance N concentrations in both green leaves and plant litterfall (Wang et al. 2002) because of synergistic interactions between N and S during plant assimilation (Li et al. 2019). Similarly, leaf P concentration increased with S addition as a response to soil P mobilization under acidification (Sherman et al. 2006; Singh et al. 2012). Therefore, leaf nutrients (such as N, P, and S) are tightly coupled in natural ecosystems (Ågren et al. 2012; Nazar et al. 2011; Sardans et al. 2012; Tallec et al. 2009). In stressful environmental conditions, however, plants may excessively store some of these nutrients (Chapin et al. 1990), somewhat decoupling plant nutrients (He et al. 2008; Yuan and Chen 2015). The N and P resorption process, though, can drive these nutrients to be re-coupled in plants (Lü et al. 2016). As such, stoichiometric N:P ratios vary strongly with leaf physiological status and environmental stresses, and may not be universally reliable as an indicator of plant N and P limitation (Yan et al. 2017). However, the role of nutrient resorption in driving nutrient coupling under scenarios of S deposition remains elusive.
As soil nutrients become more available with atmospheric S deposition, plants may reduce their dependence on nutrient resorption, concurrent with a decrease in nutrient resorption efficiency (NuRE; Lü et al. 2020; Wright and Westoby 2003; Yuan et al. 2015). This negative link between NuRE and soil nutrient availability has been found as a paradigm in many ecosystems (Lü et al. 2013; Ren et al. 2018; Zong et al. 2018). While these studies on plant NuRE mainly focused on the key growth-limiting nutrients N and P (Su et al. 2021), the resorption efficiency of another macronutrient, S (SRE), is rarely studied, and whether SRE fits this 'plant NuRE-soil nutrient availability' paradigm under S-deposition scenarios remains largely unknown.
Plant NuRE integrates the effects of various factors, such as climatic conditions, plant physiological status, and soil resources (Suseela et al. 2015; Yuan et al. 2005). For instance, the negative effects of soil nutrient availability on N and P resorption efficiencies were shown only in wet years, not in dry years (Ren et al. 2018). This is because drought may slow internal nutrient transportation within plants or decouple plant-soil interactions, thus cutting off the above-mentioned 'plant NuRE-soil nutrient availability' feedback loop. Moreover, drought can reduce plant nutrient-acquisition intensity by shortening plant lifespan under water stress and may therefore show no impact on NuRE (Drenovsky et al. 2019). Because of these complex interactions, studies on the temporal variability of nutrient resorption with interannual precipitation are needed to achieve a mechanistic understanding of how plant NuRE responds to S deposition.
The meadow steppe in northern China is sensitive to global change (Yang et al. 2012). In the past two decades, the atmospheric S deposition rate has doubled in the northeastern areas of China due to economic development and energy consumption, even though the average rate across the country has decreased (Yu et al. 2017). Sulfur deposition has contributed to soil acidification across grassland ecosystems in northern China, with the largest decrease of 0.80 pH units in the meadow steppe (Yang et al. 2012). The influence of S deposition on leaf nutrient resorption has rarely been considered in this grassland area, but such information is critical to developing a more comprehensive understanding of the factors regulating nutrient conservation strategies in these grasslands. Therefore, our main aim was to investigate the responses of leaf nutrient concentrations and resorption efficiency of two dominant species to S addition in two contrasting wet and dry years in a meadow steppe. We hypothesized that (1) S addition would increase N, P and S concentrations in both green and senesced leaves, with stronger coupling relationships among nutrients in senesced leaves than in green leaves as driven by nutrient resorption; and (2) leaf NuRE would decrease with S addition due to increased soil nutrient availability, but mainly in the wet year rather than the dry year.
Materials And Methods
Site description and experimental design

The S addition experiment is located at the Erguna Forest-Steppe Ecotone Research Station, Inner Mongolia, China (50°10′ N, 119°22′ E; elevation 550-600 m). Mean annual precipitation (MAP) is 363 mm, with about 70% falling between May and September. Mean annual temperature is -2.45°C. The grassland is dominated by Leymus chinensis (perennial grass), Carex duriuscula (sedge), and Stipa baicalensis (perennial bunchgrass), which together account for almost 75% of total aboveground biomass. The soil is classified as haplic chernozem according to the IUSS Working Group WRB (2015). The pH of the topsoil was 6.8-7.0. No grazing or fertilization occurred prior to this experiment.
The experiment was established in a homogeneous and flat field following a randomized block design, with eight S addition rates (0, 1, 2, 5, 10, 15, 20 and 50 g S m⁻² year⁻¹) randomly assigned within each block. Each treatment had five replicates. Elemental S has been added once a year in mid-May since 2017. Purified S powder fertilizer (elemental S > 99%) was weighed, mixed with 200 g of soil, and then spread evenly on the soil surface of each plot. Atmospheric S deposition at the site is approximately below 3 g S m⁻² yr⁻¹ (Yu et al. 2017) but is expected to increase with industrial and transportation development (Yu et al. 2017). The high doses of S addition were much higher than the actual local S deposition level, the aim being to simulate the long-term, accumulative effects of ecosystem S enrichment caused by anthropogenic activities.
Field sampling and laboratory measurements

Plant and soil samples were collected in the second (410 mm, 13% higher than the MAP) and third year (266 mm, 27% lower than the MAP; Fig. S1) of S treatments. At least 20 healthy individuals of L. chinensis and C. duriuscula with mature and fully extended green leaves were randomly selected in each plot and homogenized into one composite sample in early August. Senesced leaves were sampled in the same way in early October in both sampling years. All collected leaves were oven-dried at 65°C for 48 h to constant weight and then ground with a ball mill for chemical analyses (Retsch M400, Retsch GmbH, Haan, Germany). Leaf N concentration was analyzed with an automatic element analyzer (Vario MACRO cube, Elementar Analysensysteme GmbH, Germany). For total P and S concentrations, 300 mg leaf samples were digested with HNO₃-HClO₄ solution and then measured with an inductively coupled plasma optical emission spectrometer (5100 ICP-OES, Perkin Elmer, USA).
Calculations and statistical analyses
Nutrient resorption efficiency (NuRE) was quantified as the percentage change of a nutrient in senesced leaves relative to green leaves using the following equation:

NuRE = (1 − (Nu_s × MLCF) / Nu_g) × 100%

where Nu_g and Nu_s are the N, P or S concentrations in green and senesced leaves, respectively; MLCF is the mass loss correction factor, with values of 0.64 and 0.713 for forbs and graminoids, respectively, as reported by Vergutz et al. (2012).
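A minimal sketch of this standard mass-loss-corrected resorption calculation (Vergutz et al. 2012); the leaf concentrations below are invented for illustration and are not measurements from this study.

```python
# NuRE: the percentage of a nutrient withdrawn during leaf senescence,
# corrected for leaf mass loss via the MLCF. Illustrative values only.

def nure(nu_green, nu_senesced, mlcf):
    """Nutrient resorption efficiency (%) with mass-loss correction."""
    return (1.0 - (nu_senesced * mlcf) / nu_green) * 100.0

# graminoid MLCF = 0.713 (Vergutz et al. 2012); hypothetical N concentrations
n_green, n_senesced = 20.0, 8.0   # mg g^-1, invented
print(round(nure(n_green, n_senesced, mlcf=0.713), 1))  # 71.5
```

Without the correction (MLCF = 1) the same inputs would give 60%, so the mass-loss factor prevents underestimating resorption when senesced leaves have lost carbon mass.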
Data were tested for normality using the Kolmogorov-Smirnov test before performing ANOVA. Two-way ANOVAs were used to explore the main and interactive effects of S addition and sampling year on nutrient concentrations in green and senesced leaves and on NuRE of the two dominant species, with block included as a random factor. Regression models were used to quantify relationships between S addition rates and leaf nutrient characteristics, with the best curve-fitting results chosen based on the coefficient of correlation. Pearson correlation analysis was used to examine the relationships between soil nutrient availability and leaf nutrient concentrations and NuRE. All analyses were performed using SPSS 16.0 (SPSS Inc., Chicago, USA).
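As a minimal illustration of the correlation step, the sketch below computes Pearson's r in pure Python; the soil-S and SRE data pairs are invented for illustration only, not values from this study.

```python
# Minimal pure-Python Pearson correlation, of the kind used here to
# relate soil nutrient availability to leaf nutrients and NuRE.
# The data pairs below are invented for illustration only.
import math

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

soil_s = [1.0, 2.0, 3.0, 4.0, 5.0]       # hypothetical available S
sre    = [80.0, 74.0, 69.0, 61.0, 55.0]  # hypothetical S resorption efficiency
print(round(pearson_r(soil_s, sre), 3))  # strongly negative, close to -1
```

A strongly negative r, as in this toy example, corresponds to the pattern reported below in which SRE declines as plant-available S rises.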
Results
Responses of plant-available soil nutrients and leaf nutrient concentrations to S addition

Sulfur addition significantly decreased soil pH in the two sampling years (Table S1). Soil NH₄⁺ was the dominant inorganic N form compared with NO₃⁻, especially at the higher S-addition rates. The effects of S addition on plant-available nutrients were similar in the two years: soil NH₄⁺, available P, and available S increased, but NO₃⁻ decreased with S addition (Table S1).
For L. chinensis, green leaf N concentration was unaltered in the wet year (i.e., 2018), while it significantly decreased with S addition in the dry year (i.e., 2019) (Fig. 1a). Green leaf P concentration was unaltered in the two years (Fig. 1b). Green leaf S concentration increased significantly with S treatments only in the wet year (Fig. 1c). For C. duriuscula, green leaf N concentration significantly increased with S addition only in the wet year (Fig. 1d). Green leaf P and S concentrations of C. duriuscula increased with S addition in both years (Fig. 1e,f).
Senesced leaf N, P, and S concentrations for both L. chinensis and C. duriuscula increased nonlinearly with S addition across the two sampling years, except for the senesced leaf N concentration of C. duriuscula, which showed a linear increase in the dry year (Fig. 1g-l).
Coupling relationships between nutrients in green and senesced leaves
The coupling relationships between leaf N and P concentrations in senesced leaves of the two species were stronger than those in green leaves when the two years were pooled (Fig. 2a,b vs. Fig. 2c,d). In the wet year, the relationships between leaf N and P were tighter in senesced leaves than in green leaves (Fig. 2). In the dry year, however, the coupling relationship of N and P in senesced leaves was weaker than that in green leaves (Fig. 2).
Responses of nutrient resorption efficiency to S addition

For L. chinensis, the reductions in leaf NRE, PRE, and SRE in response to S addition were noticeably similar in the dry and wet years (Fig. 3a-c; Table S2). For C. duriuscula, leaf NRE and PRE first increased and then decreased with S addition in the wet year (Fig. 3d,e). Leaf NRE in the dry year and SRE in both years decreased linearly with S addition (Fig. 3d,f; Table S2). For both species, NuRE was significantly higher in the wet year than in the dry year (Fig. 3; Table S2).
Correlation analyses
For L. chinensis only, green leaf N concentration was positively correlated with soil NO₃⁻ in the dry year 2019. For C. duriuscula only, green leaf P concentration was positively correlated with plant-available P in the wet year 2018. However, green leaf S was positively related to plant-available S for L. chinensis in the wet year and for C. duriuscula in both years (Table S3).
For both species, leaf NRE and PRE were mostly independent of plant-available N and P, except for NRE of C. duriuscula in 2018 (Table S4). However, leaf SRE and plant-available S concentration were negatively correlated for both species in the two sampling years (Table S4).
Discussion
Nutrient resorption can decrease plant reliance on the soil nutrient pool and substantially influence plant growth, survival and reproduction (Yuan and Chen 2015). Given that atmospheric S deposition and its related legacy effects (e.g., soil acidification) are still an important environmental issue in the grassland ecosystems of northern China (Yu et al. 2017), information on how S deposition impacts plant nutrient resorption would be helpful in understanding plant community assembly and plant species adaptation to global change factors. However, to our knowledge, no studies have yet investigated S-deposition effects on leaf NuRE, especially SRE, in meadow grasslands as mediated by interannual precipitation. This research is therefore the first to address the coupling of N, P, and S concentrations in green versus senesced leaves and their resorption efficiency as affected by S addition and natural precipitation. As expected, we found that N and P concentrations were tightly coupled in senesced leaves in the wet year, driven mainly by higher NuRE. However, NuRE decreased more sharply along the S-addition gradient in the dry year than in the wet year (Fig. 4), which was contrary to our hypothesis.
Leaf nutrient concentrations in response to S addition with stronger coupling of N and P in senescent versus green leaves
Partially consistent with our first hypothesis, nutrient concentrations mostly increased with S addition in both green and senesced leaves, with the exception of the green leaves of L. chinensis (Fig. 1). Indeed, L. chinensis has been suggested to remain more homoeostatic than other species (such as Carex korshinskyi) in response to exogenous nutrient addition (Yu et al. 2010). Green leaf N concentration even decreased with S addition for L. chinensis in the dry year (2019; Fig. 1), which was mainly driven by soil NO₃⁻ concentration, as evidenced by the positive correlation between the two parameters (Table S3). In contrast to L. chinensis, green leaf N concentration of C. duriuscula showed an increasing trend with S treatment, suggesting that the two species possibly have contrasting N acquisition strategies (Legay et al. 2014).
An increase in soil available P under S-addition-induced acidification (Xiao et al. 2020; Table S1) might have accounted for the higher leaf P concentration herein. This was further evidenced by a positive correlation between green leaf P concentration and plant-available P concentration for C. duriuscula in 2018 (Table S3). As expected, leaf S concentration (except for the green leaf S concentration of L. chinensis in 2019) increased with increasing S addition and was positively correlated with plant-available S (Table S3) owing to plant luxury absorption of S (Wang et al. 2002). Furthermore, we found that leaf S concentration in the wet year was significantly lower than that in the dry year (Fig. 1i,l), possibly because of stronger dilution effects on plant nutrient concentrations or soil SO₄²⁻ leaching in the wet year (Blake-Kalff et al. 2000; Li et al. 2019). Moreover, N, P and S in senesced leaves of the two species consistently increased with S addition, which would result in higher litter quality and subsequently increase litter decomposition rates.
Consistent with our first hypothesis, the positive relationship between leaf N and P was much stronger in senesced leaves than in green leaves across the two sampling years when the data were pooled (Figs. 2 and 4). However, this stronger relationship in senesced relative to green leaves was found only in the wet year, not the dry year (Fig. 2a,b vs. Fig. 2c,d). Possibly, variations in interannual environmental conditions, e.g. water availability, exert remarkable influences on the coupling of N and P in leaves (You et al. 2018; Yuan and Chen 2015). This suggests that the nutrient-resorption process can tighten the relationships of leaf N and P upon the alleviation of water limitation in this semi-arid ecosystem (Fig. 4; Lü et al. 2016; You et al. 2018). In the dry year, the coupling pattern was inverted, as shown by a stronger relationship between N and P in green leaves than in senesced ones (Fig. 2). This might result from stronger effects of biochemical and metabolic processes coupling N and P in green leaves than of the nutrient-resorption process in senesced leaves under plant water stress (Duarte 1992; Rentería et al. 2011). Previous studies also found that drought triggered leaf senescence as a physiological response (Munné-Bosch and Alegre 2004) and resulted in nutrient imbalance and decoupling. Overall, our results provide new evidence for the role of hydrologic conditions in mediating the coupling relationships between N and P in leaves of different physiological status under S addition.
Decreases in NuRE with S addition vary with interannual precipitation
Consistent with our second hypothesis, leaf N, P, and S resorption efficiency decreased with S addition (Figs. 3 and 4). Evidence suggests that soil nutrient availability modulates plant nutrient conservation strategies, such that plants in nutrient-rich environments tend to have lower resorption efficiency (Yuan and Chen 2015). Along the elemental S gradient, soil available N (mainly NH₄⁺) increased through inhibition of nitrification (Chen et al. 2013; De Boer and Kowalchuk 2001; Xiao et al. 2020), and available P increased through sulphate replacing phosphate ions on the colloidal surface and/or via organic P mineralization (Jaggi et al. 2005). For N and P, non-significant correlations between nutrient resorption efficiency and soil nutrient availability for both species (Table S4) suggest that leaf NuRE may also be controlled by other soil characteristics, such as soil moisture and soil pH (Yuan et al. 2005). Similarly, inconsistent relationships between plant nutrient resorption and soil nutrient availability have been found at the global scale (Vergutz et al. 2012). This inconsistency could be partly caused by the divergent response of N and P resorption efficiencies to nutrient addition in nutrient-rich versus nutrient-poor habitats (Wright and Westoby 2003). Compared to NRE and PRE, SRE was more clearly correlated with plant available S in the two species (Table S4), a direct consequence of the exogenous supply of plant available S (Kobe et al. 2005). Therefore, our results suggest that plants tend to rely less on internal nutrient cycling when soil nutrient availability increases (Wright and Westoby 2003), but the degree of this decreased reliance varies among nutrients. Furthermore, S-induced decreases in nutrient resorption would substantially alter the chemical composition of plant tissues and eventually litter decomposition processes and the coupling of above- and below-ground nutrient cycling (Suseela and Tharayil 2018).
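Nutrient resorption efficiency is not defined explicitly in this excerpt; as a reference point, the conventional concentration-based calculation (here shown without the mass-loss correction factor that some studies, e.g., Vergutz et al. 2012, apply) is:

```latex
\mathrm{NuRE} \;=\; \frac{[\mathrm{Nu}]_{\mathrm{green}} - [\mathrm{Nu}]_{\mathrm{senescent}}}{[\mathrm{Nu}]_{\mathrm{green}}} \times 100\%
```

where $[\mathrm{Nu}]$ is the concentration of the nutrient (N, P, or S) in green or senescent leaves; higher values indicate stronger reliance on internal nutrient cycling.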
We also found that the response of NuRE to S addition varied with interannual precipitation, with higher NuRE in the wet year but sharper decreases with increasing S rates in the dry year (Figs. 3 and 4). This may be due to the longer reproductive stage and stand age under lower water stress, when plants must invest more nutrients and energy in both vegetative growth and reproductive effort (Brant and Chen 2015). Additionally, enhanced leaching of nutrients from leaves and soils could result in higher NuRE (Lu et al. 2019). Consistently, Ren et al. (2018) reported that N and P resorption efficiency decreased with nutrient addition in a wet year, but that these effects flattened in a dry year in a desert grassland. However, we cannot rule out the possibility that other factors, e.g., plant species characteristics, soil nutrient pool size (Lü et al. 2012), the type and amount of S additions (Lü et al. 2013), and habitat type (Kobe et al. 2005), influence the response of NuRE to S addition. Therefore, the single-dimensional 'plant NuRE-soil nutrient availability' paradigm should be reconsidered as a multifaceted feedback network in which nutrient availability interacts with other factors (e.g., precipitation in this study) to influence plant NuRE (Fig. 4).
Nevertheless, this flexibility of NuRE in response to temporal variation in precipitation and nutrient availability can shed light on the spatially resource-dependent strategies of plant nutrient use under microscale soil fertility patchiness (Lü et al. 2012). The flexible and species-specific NuRE can therefore help explain heterogeneity in species distribution and species turnover accompanying exotic species colonization and native species extinction within plant communities. To our knowledge, the linkages of NuRE with plant species turnover and the consequent community assembly have rarely been tested (Lü et al. 2019). Our study calls for future work investigating the influences of temporal and spatial variation in resources on plant NuRE at both the species and community levels in order to understand the role of NuRE in plant community dynamics.
Conclusion
This study found that two dominant species from the semiarid meadow grassland tended to increase leaf nutrient (N, P, and S) concentrations but to reduce nutrient resorption efficiency with S addition during two consecutive years. Leaf N and P were more tightly coupled in senescent leaves than in green leaves when averaged across the two years, which could be driven mainly by nutrient resorption. The decrease of leaf NuRE with S addition was possibly due to mobilization of plant available nutrients in soil under acidification. Nutrient resorption was also regulated by interannual precipitation, as evidenced by the higher NuRE in the wet year and by the fact that only leaf SRE was closely correlated with plant available S. The higher NuRE in the wet year suggests enhanced plant nutrient requirements under less water-limited conditions, likely deriving from the longer plant reproductive stage and stand age in such conditions. To our knowledge, this study is the first to reveal distinct coupling relationships between N and P concentrations in green versus senescent leaves and to show that interannual precipitation substantially modulates the response of nutrient resorption to S deposition. These findings highlight the important role of nutrient resorption, as a plant-nutrition integrator of multifaceted ecological processes, in affecting plant species competition, plant community assembly, and the associations between above- and below-ground nutrient cycling. Further work is clearly required to establish a linkage between nutrient resorption and plant community dynamics under global change scenarios and to verify the multidimensional feedback network of NuRE responses to global change factors proposed in the current study.
Declarations
Author contribution YJ designed the study. RW, TL, and HL set up the field experiment and applied fertilizer every year. XF, TL, and JC performed field and laboratory work and statistical analyses. RW and XL contributed to the interpretation and discussion of the results. XF and RW drafted the manuscript with suggestions from all the co-authors.
Figure 4
A multidimensional framework of the effects of sulfur (S) deposition on the coupling of nitrogen (N) and phosphorus (P) and on nutrient (i.e., N, P, and S) resorption efficiency (NuRE) as mediated by precipitation. Positive coupling relationships between N and P were tightened by the nutrient-resorption process (blue dots with a plus symbol '+' denote positive relationships). NuRE decreased with increasing S-addition rates (red dots with a minus symbol '−' denote negative relationships). NuRE was higher in the wet year but showed a sharper decrease with S-addition rates in the dry year.
Social Experience Interacts with Serotonin to Affect Functional Connectivity in the Social Behavior Network following Playback of Social Vocalizations in Mice
Abstract Past social experience affects the circuitry responsible for producing and interpreting current behaviors. The social behavior network (SBN) is a candidate neural ensemble to investigate the consequences of early-life social isolation. The SBN interprets and produces social behaviors, such as vocalizations, through coordinated patterns of activity (functional connectivity) between its multiple nuclei. However, the SBN is relatively unexplored with respect to murine vocal processing. The serotonergic system is sensitive to past experience and innervates many nodes of the SBN; therefore, we tested whether serotonin signaling interacts with social experience to affect patterns of immediate early gene (IEG; cFos) induction in the male SBN following playback of social vocalizations. Male mice were separated into either social housing of three mice per cage or into isolated housing at 18–24 d postnatal. After 28–30 d in housing treatment, mice were parsed into one of three drug treatment groups: control, fenfluramine (FEN; increases available serotonin), or pCPA (depletes available serotonin) and exposed to a 60-min playback of female broadband vocalizations (BBVs). FEN generally increased the number of cFos-immunoreactive (-ir) neurons within the SBN, but effects were more pronounced in socially isolated mice. Despite a generalized increase in cFos immunoreactivity, isolated mice had reduced functional connectivity, clustering, and modularity compared with socially reared mice. These results are analogous to observations of functional dysconnectivity in persons with psychopathologies and suggests that early-life social isolation modulates serotonergic regulation of social networks.
Introduction
The importance of social rearing has been evident since the 1960s and the now-controversial "Harlow monkey experiments," which demonstrated that early-life social isolation deprives macaques of the experiences necessary to develop into functional adults (Harlow et al., 1965). The ability to amicably behave and communicate with conspecifics is important to social cohesion, which in turn affects individual fitness and psychological wellness (Heim and Nemeroff, 2001;Bailey and Moore, 2018). Neural systems have evolved to support social cohesion (Goodson, 2013;Matthews and Tye, 2019) such that social interactions may carry positive valence (Goodson and Wang, 2006), and be reinforced by the brain's reward circuitry (Dölen et al., 2013). Early-life social immersion or deprivation may shape the circuits responsible for the appropriate expression of social behaviors.
Since the original proposal of the SBN (Newman, 1999), analytical tools have been developed to quantitatively describe functional anatomic networks (Sporns, 2010; Fornito et al., 2016). For example, metrics such as density measure functional connectivity in a given network, whereas the clustering coefficient is an indicator of "small-world" networks, which display increased efficacy of communication among regions (Watts and Strogatz, 1998). Community structure/modularity quantifies the degree to which nodes assemble into functionally similar clusters (Newman and Girvan, 2004). Functional networks are disrupted in human psychopathologies (Bullmore et al., 1997; van den Heuvel and Sporns, 2019), emphasizing the importance of investigating network-level features in rodent translational models (Van den Heuvel et al., 2016). Network-based analyses in non-traditional model systems describe changes in SBN functional connectivity following presentation of socially salient vocal stimuli (Hoke et al., 2005; Ghahramani et al., 2018); however, no such studies exist in laboratory mice.
Murine vocalizations are a source of context-dependent information during social interactions (Hanson and Hurley, 2012; Finton et al., 2017; Warren et al., 2018, 2020; Sangiamo et al., 2020). Vocal processing relies on auditory circuitry as well as functionally diverse nuclei such as the SBN. For example, receivers must extract the physical characteristics of vocal signals (e.g., frequency, duration, amplitude) and interpret them in light of their own experiences and current conditions (Petersen and Hurley, 2017). Investigating whether social isolation disrupts vocal processing in circuits such as the SBN will be important in understanding the mechanisms underlying aberrant behavior in mouse models of communicative and affective disorders (Portfors and Perkel, 2014).
Serotonergic signaling is sensitive to social isolation: socially isolated mice downregulate 5-HT receptor expression in hypothalamic nodes of the SBN (Schiller et al., 2003;Bibancos et al., 2007). As anatomically distinct regions of the dorsal raphe nucleus send serotonergic projections to the SBN (Schwarz et al., 2015;Muzerelle et al., 2016;Beier et al., 2019), serotonin may modulate SBN activity in accordance with an animal's internal state and changes in the external environment (Muzerelle et al., 2016;Niederkofler et al., 2016;Ren et al., 2018). Broadly activating serotonergic pathways affects neural activity markers across a distributed suite of nuclei including the SBN (Giorgi et al., 2017;Grandjean et al., 2019); however, it remains unknown whether social experience interacts with serotonin signaling to affect activity-dependent measures and network-level metrics such as functional connectivity.
We use immediate early gene (IEG) mapping to test the hypothesis that serotonin signaling interacts with social experience to affect patterns of cFos-immunoreactive (-ir) neurons in the SBN of male mice following presentation of female broadband vocalizations (BBVs). Increasing available serotonin increased the IEG response in several SBN nodes. This effect was more prominent in socially isolated mice regardless of drug treatment. Despite increases in cFos-ir neurons, network analyses reveal fewer functional relationships within the SBN of socially isolated mice.
Animal information
The Indiana University, Bloomington Institutional Animal Care and Use Committee (protocol #15-021) approved all of the following experiments. Individual cohorts of male CBA/J mice (Mus musculus) from different litters were shipped from The Jackson Laboratory and received at 18-24 d of age (Fig. 1A). Each cohort was assigned to one of three pharmacological treatment groups: saline (SAL; control), fenfluramine (FEN), or pCPA (see pharmacological details below). Upon arrival, mice were separated into either social housing of three mice per cage or into isolated housing (Fig. 1B). Mice remained in social (SOC) or isolated (ISO) conditions on a 14/10 h light/dark cycle with ad libitum access to food and water and weekly cage changes for 28-30 d before vocal playback (Fig. 1C,D). ISO mice were physically separated from conspecifics; however, all experimental animals were housed in the same room within our vivarium. While ISO mice were potentially exposed to olfactory, auditory, and/or visual stimuli from neighboring cages, similar conditions did not attenuate the effects of social isolation in other studies (Keesom et al., 2017a,b; Manouze et al., 2019).
FEN and pCPA were diluted in 0.9% sterile SAL within 3 d of use. pCPA was administered at 200 mg/kg in a volume of 5 ml/kg; FEN was administered at 100 mg/kg in a volume of 5 ml/kg (Hanson and Hurley, 2016). Sterile SAL (vehicle; 10 ml/kg) was used for all control injections. Each mouse in SOC cages received the same pharmacological treatment. Injections were administered intraperitoneally following brief anesthetization with isoflurane, after which mice were returned to their home cage. Beginning 3 d before playback, mice were transferred from housing quarters to the experimental room, where they received injections at roughly 24-h intervals in the morning. Over the course of these 3 d, pCPA mice received pCPA injections while FEN and SAL mice received equivalent injections of sterile SAL (Fig. 1D). Following injections, mice remained in the experimental room for 45 min before being returned to the vivarium. This process was designed to habituate mice to injections and to being moved between rooms, reducing non-specific cFos expression before playback trials. On the day of playback, pCPA and SAL mice were injected with sterile SAL 45 min before trials. FEN mice received FEN injections 45 min before playback (Fig. 1E). For each treatment group n = 9, except for SOC-SAL, where n = 8. Over the course of injections, ISO-pCPA mice lost a significant amount of weight (paired t(8) = 2.82, p < 0.05, mean difference 0.43 ± 0.15 g), but there was no difference between the weights of treatment groups on the day of playback (p = 0.2).
Playback trials
Ninety minutes before playback trials, mice were retrieved from animal quarters and placed in a quiet room. After 45 min, the first of three animals received an injection (as above) and was returned to its home cage (Fig. 1E). Forty-five minutes after injection, focal mice were transferred from their home cage to an identical testing cage (12 × 6 × 6 inches) with fresh bedding within a sound attenuation chamber (Coulbourn Habitest) with an ultrasonic speaker (Ultrasonic Dynamic Speaker Vifa, Avisoft Bioacoustics) powered by an UltraSoundGate Player 116 (Avisoft Bioacoustics). Trials were monitored with a CCD video camera (30 fps) placed above the test cage, using SuperDVR software (Q-See, Digital Peripheral Solutions Inc.) and a Q-See four-channel DVR PCI video capture card. Trials were 60 min, and playback consisted of 14-15 naturalistic bursts of five female BBVs (Fig. 1F); the final number of BBVs (70-75) represented the average number of BBVs emitted during the study from which they were recorded. All mice were played the same BBV sound file; omission of the final burst of five BBVs was counterbalanced across groups. Following trials, spectrograms of the playback were created in Avisoft, and the number of male-emitted ultrasonic vocalizations (USVs) was quantified.
Playback generation
Source BBVs were originally recorded during sociosexual interactions between male and female CBA/J mice. First, spectrograms of sociosexual interactions were generated using Avisoft SASLab Pro software; next, individual female BBVs were located, high-pass filtered to remove any potentially overlapping male USVs, and copied into a new playback audio file. We assembled naturalistic bursts of five individual BBVs and interspersed 270 s of silence between bursts. BBVs were calibrated by matching the rms intensity of the playback (as recorded in the testing arena) to the intensity of the originally recorded vocalizations. The same condenser microphone (CM16/CMPA, Avisoft Bioacoustics) with an UltraSoundGate 116Hb sound card (250-kHz sample rate; Avisoft Bioacoustics) was used to assess the intensities of the originally recorded vocalizations and the playback.
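The rms matching described above amounts to scaling the playback waveform so its root-mean-square amplitude equals that of the source recording. A minimal sketch (the example signals and variable names are illustrative, not the authors' actual calibration code):

```python
import numpy as np

def rms(x):
    """Root-mean-square amplitude of a waveform."""
    return np.sqrt(np.mean(np.square(x)))

# Hypothetical 1-s signals at a 250-kHz sample rate (as in the recording setup)
rng = np.random.default_rng(3)
original = 0.25 * rng.standard_normal(250_000)
playback = 0.10 * rng.standard_normal(250_000)

# Scale the playback so its rms matches the original recording
gain = rms(original) / rms(playback)
calibrated = playback * gain
print(np.isclose(rms(calibrated), rms(original)))  # True
```

In practice the gain would be derived from playback re-recorded in the testing arena, so that speaker and room response are included in the measurement.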
In naturalistic social interactions, female BBVs correlate with male-directed aggression (i.e., rejection-like behaviors); as they are also emitted during mounting, BBVs are considered to be functionally ambiguous (Finton et al., 2017). Our playback file consisted of BBVs that were acquired during multiple interactions in which mounting either did or did not occur. Thus, any potential structural differences in BBVs emitted during different contexts (i.e., mounting vs rejection) would not shape the overall valence of the playback.
Immunohistochemistry (IHC)
Playback lasted for 60 min, at which point focal animals remained in the sound attenuation chamber for 30 additional minutes to allow for accumulation of the cFos protein (Kovacs, 2008). Ninety minutes following the onset of playback, mice were deeply anesthetized with isoflurane and transcardially perfused with ice-cold Krebs-Henseleit solution (pH 7.2) followed by 50 ml of 4% paraformaldehyde in phosphate buffer (PFA). Brains were extracted and postfixed overnight in PFA, transferred to 30% sucrose in PBS (pH 7.4) for ~48 h, and cut into three series of 50-µm sections in the coronal plane using a freezing microtome. Sections were collected throughout the rostral-to-caudal extent of the inferior colliculus (IC; approximately bregma −5.34 through −4.36 mm) and starting at the appearance of the median eminence (bregma −1.94 mm) through the bifurcation of the anterior commissure (AC; ~bregma +0.38 mm). Sections were stored in cryoprotectant solution at −80°C until IHC. Three separate IHC runs counterbalanced across treatment groups were performed as follows.
Image acquisition and anatomy
All images were collected at 10× with 568-nm resolution using a Leica SP8 scanning confocal microscope. cFos-ir neurons were visualized using a 680-nm laser line; DAPI and NT were visualized using 405- and 490-nm laser lines, respectively. The intensity of each laser line was identical for all images, and tissue was scanned at 12 separate z planes spaced 2.41 µm apart. When more than one confocal image was needed to capture the expanse of a nucleus (LS, mPOA, VMH, PAG) or adjacent nuclei (i.e., PVN and AH), images were automatically merged using Leica Application Suite X software (Leica Microsystems). In instances where tissue was damaged, the next available section was used.
SBN regions were identified based on cytoarchitecture at approximate rostral-caudal levels relative to bregma as per the mouse brain atlas (Paxinos and Franklin, 2004). The LS (Fig. 1G) was collected beginning at the bifurcation of the AC (bregma 0.38 mm) and continued for three consecutive sections. We captured the medial division of the BNST starting at the level where the bifurcation of the AC begins to close (bregma 0.26 mm). BNST (Fig. 1H) was sampled bilaterally in two consecutive sections within boundaries established by the lateral ventricle and the stria terminalis (lateral), the fornix (medial), and ventrally by the AC. Sampling for mPOA (Fig. 1I) began at the closure of the AC (bregma 0.14 mm) and continued for two consecutive sections. PVN was imaged in three consecutive sections beginning at the appearance of a triangular cluster of neurons immediately adjacent to the dorsal aspect of the third ventricle (bregma −0.70 mm). We collected AH beginning at the second level of PVN (bregma −0.82 mm; Fig. 1J) and continued for three consecutive sections. The final level of AH overlapped with the first section where sampling for VMH began (bregma −1.22 mm) and continued for two to three sections (Fig. 1K). PAG (Fig. 1L) was collected for three consecutive sections beginning approximately at bregma −4.24 mm.
Z stacks for laser lines 405 (DAPI), 490 (NT), and 680 (cFos-ir) were projected using the maximal intensity function in Fiji (National Institutes of Health; Schindelin et al., 2012), and saved as .tif files before analysis. A single observer blind to animal identity and treatment group performed all image analyses and microscopy.
Cell counting
Regions of interest (ROIs) were drawn around the boundaries of SBN nodes based on cytoarchitecture (Fig. 1G-L) and saved using the Fiji ROI manager. cFos-ir neurons were quantified using custom macros in ImageJ as follows. First, background was subtracted from cFos images using the rolling ball function with a radius of 50 pixels. ROIs derived from NT images were transferred to the corresponding cFos channel. Using the internal clipboard function, we created a new image containing only the selected ROI. Tsai's moments threshold was applied using Fiji's Auto Threshold plugin v1.17. The thresholded image was then made binary, the watershed function was applied, and the analyze particles function was run, excluding objects with fewer than 75 pixels or a circularity <0.15. Cell counts were normalized by multiplying the total number of cFos-ir neurons in each region by 100 divided by the sum of the areas of the ROI(s) from which they were obtained.
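The area normalization described above is a one-line computation; a minimal sketch (function and argument names are illustrative, not from the authors' ImageJ macros):

```python
def normalized_count(n_cells, roi_areas):
    """Normalize a raw cFos-ir cell count by total ROI area.

    Mirrors the normalization described in the text: the total count
    is multiplied by 100 and divided by the summed ROI area, yielding
    cells per 100 area units.
    """
    total_area = sum(roi_areas)
    return n_cells * 100.0 / total_area

# e.g., 42 labeled neurons counted across two ROIs of a given nucleus
print(normalized_count(42, [150.0, 130.0]))  # 42 * 100 / 280 = 15.0
```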
Statistics
Inferential statistics were performed in JMP Pro version 14 (SAS Institute, Cary, NC) with α = 0.05, or in GraphPad Prism version 8. We used repeated measures multivariate ANOVA (MANOVA) to test the between-subjects effects of housing (SOC vs ISO) and drug treatment (SAL vs FEN vs pCPA) on the within-subjects measure of cFos-ir neurons/100 µm² across seven nodes of the SBN. We found main effects of housing and drug treatment, as well as a significant housing-by-drug interaction (see Results). We followed the MANOVA with a series of linear mixed model analyses with housing and drug treatment as fixed effects to test for group differences in cFos-ir neurons within each SBN node. Our model included IHC run as a random effect to control for potential variation introduced by separate IHC procedures. As SOC mice were housed in groups of three, we also included cage as a random effect to control for potential within-cage influence on cFos expression. Post hoc differences between groups were assessed via independent t tests where applicable.
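The analyses above were run in JMP; as an illustration of the model structure only, a simplified Python sketch using statsmodels with simulated data and a single random intercept for cage (the authors' model also included IHC run as a second random effect, which would be added as a variance component; all variable names and values here are hypothetical):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in data: one row per mouse
rng = np.random.default_rng(0)
n = 60
df = pd.DataFrame({
    "cfos": rng.normal(20, 5, n),                       # normalized cFos-ir count
    "housing": rng.choice(["SOC", "ISO"], n),           # fixed effect 1
    "drug": rng.choice(["SAL", "FEN", "pCPA"], n),      # fixed effect 2
    "cage": rng.choice([f"c{i}" for i in range(10)], n) # random effect
})

# Linear mixed model: housing x drug fixed effects, random intercept per cage
model = smf.mixedlm("cfos ~ housing * drug", df, groups=df["cage"])
result = model.fit()
print(result.summary())
```

With the simulated noise data the fixed effects will of course be near zero; the point is only the fixed-by-random structure of the model.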
Next, we performed pair-wise correlations on the number of cFos-ir neurons to test for functional relationships between SBN nodes within each of our six treatment groups; p values obtained from Pearson coefficients were corrected for multiple comparisons using the two-stage linear step-up procedure of Benjamini, Krieger, and Yekutieli (Benjamini et al., 2006). To test for differences in the distribution of internodal correlations between groups, we performed principal components analysis (PCA) on the covariation matrix derived from these data.
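The correlation-and-correction step can be sketched as follows; statsmodels exposes the two-stage Benjamini-Krieger-Yekutieli procedure as method `fdr_tsbky`. The data here are simulated placeholders (9 animals per group, as in the study design):

```python
from itertools import combinations
import numpy as np
from scipy.stats import pearsonr
from statsmodels.stats.multitest import multipletests

# Toy data: rows = animals in one treatment group, columns = SBN nodes
nodes = ["LS", "BNST", "mPOA", "AH", "PVN", "VMH", "PAG"]
rng = np.random.default_rng(1)
counts = rng.normal(15, 4, size=(9, len(nodes)))

# Pairwise Pearson correlations between all node pairs (21 pairs for 7 nodes)
rs, ps, pairs = [], [], []
for i, j in combinations(range(len(nodes)), 2):
    r, p = pearsonr(counts[:, i], counts[:, j])
    rs.append(r); ps.append(p); pairs.append((nodes[i], nodes[j]))

# Two-stage Benjamini-Krieger-Yekutieli FDR correction
reject, p_adj, _, _ = multipletests(ps, alpha=0.05, method="fdr_tsbky")
for (a, b), r, sig in zip(pairs, rs, reject):
    print(f"{a}-{b}: r={r:+.2f}{' *' if sig else ''}")
```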
Network analyses were performed on within-group correlations using Gephi open source network analysis and visualization software version 0.9.2 (Bastian et al., 2009). Functional relationships were visualized as unweighted, undirected graphs using the ForceAtlas2 algorithm, which spatially distributes nodes based on the overall strengths of each node's correlations (Bastian et al., 2009; Jacomy et al., 2014). Graphs were subsequently filtered so that non-significant edges (i.e., correlations with p > 0.05) were excluded from visualization. The overall relatedness of any given region is indicated not only by its shared edges, but also by its position relative to other nodes. In order to quantitatively describe functional relationships among groups, we performed three separate graph analyses in Gephi. First, we calculated network density, the number of significant internodal correlations as a proportion of the total number of possible correlations, for each treatment group. This analysis, a graph-based supplement to our strength-of-correlation analysis (see Results), quantifies the overall functional connectivity of the SBN in each treatment group. Next, for each treatment group we calculated the average clustering coefficient, the likelihood for any pair of a node's functionally connected neighbors to be connected to each other (Watts and Strogatz, 1998; Fornito et al., 2016). Finally, we performed community analysis, which parses nodes into highly interconnected subgroups indicative of functional commonality (Fornito et al., 2016).
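The three graph metrics were computed in Gephi; for reference, the same quantities can be reproduced with NetworkX on a hypothetical significance-filtered edge list (the edges below are invented for illustration, not taken from the paper's results):

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Unweighted, undirected graph whose edges are the significant
# internodal correlations for one (hypothetical) treatment group
nodes = ["LS", "BNST", "mPOA", "AH", "PVN", "VMH", "PAG"]
sig_edges = [("LS", "BNST"), ("LS", "PVN"), ("BNST", "mPOA"),
             ("mPOA", "PVN"), ("AH", "PVN")]
G = nx.Graph()
G.add_nodes_from(nodes)
G.add_edges_from(sig_edges)

# Density: significant correlations / all possible pairs (here 5/21)
print(nx.density(G))
# Average clustering coefficient (small-world indicator)
print(nx.average_clustering(G))
# Community structure via modularity maximization
communities = list(greedy_modularity_communities(G))
print(len(communities))
```

Isolated nodes (here VMH and PAG) each form their own community, mirroring the paper's point that more communities indicate greater functional segregation.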
Results
Repeated measures MANOVA found significant effects of housing (F(6,36) = 4.45, p = 0.002) and drug treatment (F(12,74) = 5.71, p < 0.0001), as well as a significant housing-by-drug interaction (F(12,74) = 3.17, p = 0.001), on the number of cFos-ir neurons/100 µm² across seven nodes of the SBN. Next, we performed linear mixed model analyses with housing and drug treatment as fixed factors, controlling for random effects of IHC run and cage. Below we report only p values for the linear mixed models; a complete summary can be found in Table 1. Arithmetic means for housing and drug treatments are reported in Table 2. Results from the applicable post hoc comparisons can be found in Extended Data Figure 2-1.
Our model detected effects of housing in the BNST (p = 0.017), mPOA (p = 0.018), PVN (p = 0.005), and PAG (p = 0.021). ISO mice had higher numbers of cFos-ir neurons in each of these regions (Tukey's HSD, p ≤ 0.02). We found significant effects of drug treatment in LS (p < 0.0001; Fig. 2A). In each region except for AH, FEN mice had significantly more cFos-ir neurons than both SAL and pCPA mice (Tukey's HSD, p ≤ 0.0007). In AH, FEN mice had significantly more cFos-ir neurons than pCPA mice (Tukey's HSD, p = 0.005) and showed a trend toward an increase relative to SAL mice (Tukey's HSD, p = 0.054). Interestingly, PAG was the only region where our model did not detect a significant effect of drug treatment (p > 0.4; Fig. 2G). Drug effects were dependent on housing conditions, as indicated by significant interaction terms in LS (p = 0.018) and PVN (p = 0.003). In LS, the interaction is driven by an increase in cFos-ir neurons in ISO-pCPA compared with SOC-pCPA mice. In PVN, the interaction is driven by an almost twofold increase in cFos-ir neurons in ISO-FEN compared with SOC-FEN mice. We found a main effect of housing (F(1,47) = 5.97, p = 0.02) and a significant housing-by-drug interaction (F(2,47) = 6.03, p = 0.005; data not shown) on the number of USVs emitted by focal males during playback. The interaction was driven by a significant increase in USV production in ISO-SAL compared with SOC-SAL mice (Tukey's HSD, p = 0.004). There were no differences in USV production between SOC and ISO males in either the FEN or pCPA groups (p > 0.8). Interestingly, there was no relationship between USV production and cFos-ir within the SBN of any treatment group.
As the SBN comprises a reciprocally connected anatomic network (Hahn et al., 2019), we tested whether correlations of neural activity markers between nodes (i.e., functional connectivity) varied between treatment groups. Figure 3 summarizes these data as heatmap matrices based on the Pearson r values of each pairwise correlation. A detailed summary of pairwise correlations of cFos-ir neurons between SBN nodes within each of our six treatment groups can be found in Extended Data Figure 3-1. Cursory analysis of the heatmaps suggested that SOC-FEN (Fig. 3B) and SOC-pCPA (Fig. 3C) had more relatively strong correlations (i.e., more functional connectivity) than all other treatment groups. To test this, we used two-way ANOVAs on the absolute value of the Pearson r scores of each treatment group (Tanimizu et al., 2017). We found a main effect of housing (F(1,120) = 6.33, p = 0.013; Fig. 3G), where SOC mice had larger Pearson r values than ISO mice (Tukey's HSD, p = 0.013). Drug treatment (F(2,120) = 2.68, p = 0.07) and the housing-by-drug interaction (F(2,120) = 2.45, p = 0.09) approached but did not reach statistical significance.
While we did not find a statistically significant effect of drug treatment on the strength of internode correlations, we hypothesized that pharmacologically increasing or systemically depleting available serotonin would differentially affect the distribution of functional relationships. We thus performed PCA on the covariation matrix generated by the cFos-ir counts. PC1 had an eigenvalue of 543.53, which accounted for over 76% of the variation, and was most strongly loaded by mPOA (0.682) and PVN (0.553; Fig. 3H). We analyzed PC1 scores between groups via two-way ANOVA and found main effects of housing (F(1,41) = 14.11, p < 0.001) and drug treatment (F(2,41) = 39.28, p < 0.0001; Fig. 3I), but no significant interaction (F(2,41) = 2.43, p = 0.10). FEN mice had divergent and significantly different (Tukey's HSD, p < 0.0001) PC1 scores from both SAL and pCPA mice; thus, despite having similar overall Pearson r values, the distribution of correlations among SBN nodes differed between drug treatment groups.
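PCA on a covariance matrix reduces to an eigendecomposition; a minimal NumPy sketch with simulated stand-in data (dimensions chosen to echo the seven SBN nodes; nothing here reproduces the paper's actual values):

```python
import numpy as np

# Toy matrix: rows = animals, columns = SBN nodes
rng = np.random.default_rng(2)
X = rng.normal(15, 4, size=(50, 7))

# Eigendecomposition of the covariance matrix
cov = np.cov(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]            # sort components descending
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Share of variance on PC1, and per-animal PC1 scores
var_explained = eigvals[0] / eigvals.sum()
pc1_scores = (X - X.mean(axis=0)) @ eigvecs[:, 0]
print(f"PC1 explains {var_explained:.1%} of the variance")
```

The entries of `eigvecs[:, 0]` are the PC1 loadings; in the paper these were dominated by mPOA and PVN, and the `pc1_scores` are what the group-level ANOVA was run on.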
We visualized the distributions of correlations using the ForceAtlas2 algorithm in Gephi and filtered the resulting graphs to exclude non-significant edges (i.e., correlations with p > 0.05) from visualization (Bastian et al., 2009; Jacomy et al., 2014). There were visible differences between groups in functional network structure: ISO mice have fewer significant functional relationships than SOC mice, which is indicated by relatively few connections between nodes (Fig. 4A-F). For example, the number of cFos-ir neurons in the PVN of ISO-FEN mice is significantly correlated only with that in AH; thus, PVN shares an edge with only AH (Fig. 4E). Conversely, in SOC-FEN mice the number of cFos-ir neurons in the PVN is significantly correlated with LS, BNST, mPOA, AH, and PAG; thus, PVN shares edges with each of these regions (Fig. 4B). Further, the strength of functional relationships is indicated by the closeness of nodes in space. In SOC-pCPA mice, the significant correlation between cFos-ir in mPOA and BNST is indicated not only by a shared edge, but also by their relative adjacency (closeness) in space (Fig. 4C). In ISO-pCPA mice, there was a relatively weak, non-significant correlation between cFos-ir in mPOA and BNST, which is reflected by a relatively large distance between these nodes (Fig. 4F).
We quantitatively characterized the network organization of the above graphs by performing three separate graph analyses. First, we quantified the number of significant functional relationships as a proportion of total possible functional relationships (i.e., functional density; Fornito et al., 2016). In each of the three SOC treatment groups, there was an over twofold increase in functional density compared with their ISO counterparts (Fig. 4G). While increased density indicates that there is more functional connectivity between regions in SOC mice, this metric tells us little about the nature of these correlations (Bullmore and Sporns, 2009). Importantly, do correlated nodes go on to form (1) additional functional relationships, and/or (2) larger functional modules?
Next, we calculated the clustering coefficient, the average proportion of a node's neighbors that are themselves connected to one another, for each treatment group. SOC mice had higher clustering coefficients than ISO mice in each treatment group (Fig. 4H). Indeed, regardless of drug treatment, ISO mice had a clustering coefficient of zero; thus, even when ISO mice have significant correlations between nodes, those regions do not in turn form additional connections with each other. Our final graph analysis assessed modularity/community structure in SOC and ISO mice. In community structure analysis, the smallest number of communities that can be formed is one, indicating a completely connected group; the maximum number of communities is equal to the number of nodes contained in the analysis and indicates complete functional segregation. We found that ISO mice formed more communities than their socially reared counterparts in each of the drug treatment groups (Fig. 4I). Together, our results support that functional networks are more disconnected in ISO mice.
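The three graph measures above (functional density, clustering coefficient, community count) can be reproduced on toy graphs. This sketch is illustrative: the edge lists are invented to caricature a SOC-like vs ISO-like network, and Louvain is used as one common modularity-community algorithm (the paper does not specify which was used):

```python
import networkx as nx

nodes = ["LS", "BNST", "mPOA", "AH", "PVN", "VMH", "PAG"]

# Toy "SOC-like" graph: many significant edges, including closed triangles
soc = nx.Graph()
soc.add_nodes_from(nodes)
soc.add_edges_from([("PVN", "LS"), ("PVN", "BNST"), ("PVN", "mPOA"),
                    ("PVN", "AH"), ("LS", "BNST"), ("mPOA", "BNST"),
                    ("VMH", "PAG")])

# Toy "ISO-like" graph: a single significant edge, no triangles
iso = nx.Graph()
iso.add_nodes_from(nodes)
iso.add_edge("PVN", "AH")

for name, g in [("SOC", soc), ("ISO", iso)]:
    density = nx.density(g)                # realized edges / possible edges
    clustering = nx.average_clustering(g)  # do a node's neighbors interconnect?
    comms = nx.community.louvain_communities(g, seed=42)  # modularity communities
    print(name, round(density, 2), round(clustering, 2), len(comms))
```

On these toy graphs the ISO-like network has lower density, a clustering coefficient of zero, and more (smaller) communities — the same qualitative pattern the authors report.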
Discussion
Individual experience establishes the backdrop on which current events are interpreted. Early-life social isolation can profoundly affect an animal's behavioral phenotype (Mumtaz et al., 2018), and likely affects how social signals (e.g., vocalizations) are represented in the brain; however, the effects of social isolation on the neural response to rodent vocalizations are relatively unexplored. We tested whether social experience interacts with serotonin signaling to affect IEG expression in the male SBN following playback of female BBVs. FEN robustly increased the number of cFos-ir neurons across all nodes of the SBN except PAG. Housing treatment also affected IEG induction: ISO mice had more cFos-ir neurons in several nodes of the SBN than SOC mice. Despite a generalized increase in cFos-ir, ISO mice had lower functional connectivity among regions than SOC mice. Indeed, functional density, clustering, and community structure remained relatively low in ISO mice despite pharmacological changes in available serotonin. Importantly, drug treatment had little effect on graph analyses in ISO mice and facilitated network measures in SOC mice.
Social experience interacts with serotonin signaling to affect neural responses in the SBN
IEG mapping has established that early-life stressors alter neural activity markers at the level of individual SBN nodes. Chronic social subjugation decreased cFos expression in the LS, PVN, and PAG following open-arm exposure in male mice (Singewald et al., 2009). In rats, postweaning social isolation increased both aggression and cFos-ir neurons in the BNST and PVN (but not LS or PAG) following a resident-intruder paradigm (Toth et al., 2012). Despite structural interconnectedness, we found that changes in the cFos response in the SBN of ISO mice were not global: BNST, mPOA, PVN, VMH, and PAG had increased cFos-ir neurons, whereas LS and AH did not. Interestingly, the direction (i.e., an increase) of the cFos response was similar in each region affected in ISO mice.
The effects of housing on cFos-ir neurons were modulated by drug treatment. Similar to previous studies (Li and Rowland, 1993), we found that FEN increased cFos-ir neurons in all nodes of the SBN except for PAG. However, the effects of FEN were not homogeneous across housing treatments: cFos-ir neurons were increased to a greater extent in the PVN and mPOA of ISO-FEN compared with SOC-FEN mice (Fig. 3C,D). Conversely, we found no difference in cFos-ir in the PVN of pCPA mice regardless of social experience, an observation consistent with previous studies in rats (Harbuz et al., 1993; Conde et al., 1998). Serotonin affects neural activity through a combination of excitatory and inhibitory serotonin receptor (5-HTR) subtypes (Barnes and Sharp, 1999), and 5-HTR expression patterns are sensitive to social isolation (Schiller et al., 2003; Bibancos et al., 2007); thus, the net effect of serotonergic manipulations is likely driven by complex excitatory and inhibitory interactions within and between each node of the SBN. Importantly, the increase in cFos-ir neurons was not observed exclusively in ISO-FEN mice: ISO-pCPA mice had more cFos-ir neurons in LS, BNST, mPOA, and PAG than SOC-pCPA mice.
Together, we found heterogeneous effects of both pharmacology and housing conditions on the number of cFos-ir neurons within nodes of the SBN. An a priori assumption is that the SBN is a structurally interconnected network within which the patterns of functional connectivity are indicative of behavioral context (Newman, 1999; Goodson, 2005; Goodson and Kabelik, 2009). We therefore tested whether functional network measures differed within the SBN of SOC or ISO mice.
Social experience affects functional connectivity in the SBN
The term functional connectivity, defined as "statistical dependencies among remote neurophysiological events" (Friston, 2011), has been used extensively in human fMRI studies to describe activity patterns in resting and pathologic states. This statistical phenomenon appears to be a crucial component of adaptive social processes across multiple species, including humans (Lynall et al., 2010). IEG mapping in non-traditional vertebrate model systems demonstrates functional connectivity between neuromodulatory systems/circuits (including the SBN) during vocal-acoustic processing in fishes (Petersen et al., 2013; Ghahramani et al., 2018) and frogs (Hoke et al., 2005), as well as prosocial and aggressive behavior in fishes (Weitekamp and Hofmann, 2017; Butler et al., 2018) and lizards (Kabelik et al., 2018). In male prairie voles (Microtus ochrogaster), oxytocin receptor antagonists reduce functional connectivity within the SBN and attenuate partner preference behavior (Johnson et al., 2016). In mice, Tanimizu et al. (2017) demonstrated that functional connectivity between memory-associated regions (including nodes of the SBN) was increased following a social learning task. Finally, different clusters of functional relationships in the SBN are observed in subordinate mice who maintain their beta status compared with those who ascend through the social hierarchy (Williamson et al., 2019).
We found that functional connectivity was decreased in ISO relative to SOC mice following playback of female BBVs. Importantly, these results were not due to a global increase in IEG induction, as ISO mice in fact tended to have more cFos-ir neurons than SOC mice. That functional connectivity is disrupted in the SBN of ISO mice represents an important foundation from which to develop models of how variation in functional network architecture relates to variation in aberrant behavioral (Keesom et al., 2017b; Manouze et al., 2019) and neural phenotypes (Keesom et al., 2017a, 2018) following early-life social stress. We further analyzed patterns of functional connectivity by PCA and found that FEN mice had positive average PC1 scores, whereas SAL and pCPA mice had negative values (Fig. 4I). Thus, the distribution of functional relationships differs after acutely increasing (i.e., with FEN) or systemically depleting (i.e., with pCPA) serotonin. Interestingly, the general direction of these relationships was consistent within drug treatment regardless of housing conditions. Therefore, while overall functional connectivity may be preferentially modulated by social experience, serotonin signaling drives variation in which nodes are functionally coupled. However, the extent to which the effects of social experience and serotonin signaling are independent of each other remains unknown.
Individual nodes disproportionately affect functional connectivity
PCA revealed that individual nodes disproportionately affect variation in functional connectivity. We found that eigenvectors within PC1 were most heavily loaded by mPOA and PVN. As mPOA and PVN underlie different functions, they may also drive variation in circuit-level metrics in different manners. mPOA shows increased IEG induction following playback of social vocalizations in frogs and songbirds (Hoke et al., 2005; Maney et al., 2008). In mice, mPOA is a crucial site for affective-olfactory integration (Dhungel et al., 2011; McHenry et al., 2017), and activity in mPOA coincides with sociosexual investigation and facilitates mounting behavior (Wei et al., 2018). As the behavioral response to vocal signals (i.e., BBVs) is modulated by olfactory stimuli (Grimsley et al., 2013; Seagraves et al., 2016; Ronald et al., 2020), mPOA is in a functional anatomic position to integrate multisensory stimuli and effect circuit-level responses to vocal signals (Kohl et al., 2018). However, to our knowledge no studies have directly investigated mPOA involvement in rodent vocal processing.
We found that the number of cFos-ir neurons in the PVN is not only increased in ISO mice but also contributes a significant amount of variation to functional relationships within the SBN. Within the PVN, dysregulation of corticotropin-releasing factor (CRF) neurons, which modulate the hypothalamic-pituitary-adrenal axis, contributes to cardiovascular disease and impaired immune function in animal models of chronic social isolation as well as in persons with early-life social trauma (Heim and Nemeroff, 2001; McEwen, 2003; Cacioppo et al., 2015). Further, chemically heterogeneous PVN neurons underlie different suites of behaviors: activating CRF neurons in PVN drives conditioned place aversion (Kim et al., 2019), whereas activating oxytocin neurons drives pup retrieval behavior in response to USVs (Marlin et al., 2015). Elucidating the chemical phenotypes and projection profiles of isolation-sensitive neurons in the PVN (and the SBN in general) will be crucial to understanding the mechanisms through which different nodes affect functional relationships within the SBN.
In conclusion, FEN and social isolation each broadly increased cFos-ir within individual nodes of the SBN following playback of social vocalizations. Importantly, by extending our analyses from individual nodes to the network level, we found that functional connectivity, clustering, and community structure within the SBN were highly dependent on social experience, whereas patterns of functional connectivity (i.e., which nodes formed functional relationships) were driven more by pharmacological manipulations. Our findings suggest the hypothesis that functional dysconnectivity may underlie psychopathological phenotypes that arise from social isolation and reinforce the importance of moving beyond functional analyses limited to individual nodes. We highlight how laboratory housing conditions (SOC vs ISO) can affect functional neuroanatomical processes in rodents.
Isoflavones: Vegetable Sources, Biological Activity, and Analytical Methods for Their Assessment
Phytoestrogens are natural compounds found in various plant species that can bind to estrogen receptors, exerting agonist and/or antagonist effects. The main classes of phytoestrogens are isoflavones, lignans, and coumestans. Isoflavones are bioactive, nonsteroidal polyphenolic plant metabolites with antioxidant properties. They are structurally very similar to 17β-estradiol and possess estrogenic/antiestrogenic effects. The main dietary source of isoflavones is soy (Glycine max L.). Other legumes, such as red clover (Trifolium pratense L.), alfalfa (Medicago sativa L.), and Genista species, have an important isoflavone content and are of nutritional or phytotherapeutic interest. In plants, isoflavones are found mainly as inactive glycosides, which after ingestion are converted into the corresponding aglycones (e.g., genistein, daidzein) that have pharmacological activity. Many studies have demonstrated the benefits of dietary isoflavones in menopause and in multiple chronic pathologies, including cardiovascular diseases, osteoporosis, and hormonal cancers. Dietary intake of isoflavones is widespread, mainly due to the consumption of soybean products. Analytical methods applied for the quantification of isoflavones allow both assessment of dietary isoflavone intake and identification of natural sources with phytotherapeutic potential. The health benefits of isoflavones justify the interest in this class of functional foods; therefore, further clinical and epidemiological studies are required.
Introduction
Phytoestrogens are natural nonsteroidal compounds able to bind to estrogen receptors, with both estrogenic and antiestrogenic activities. They are widespread in the plant kingdom and considered ubiquitous. The main classes of phytoestrogens are isoflavones, coumestans, and lignans.
Isoflavones are plant-derived secondary metabolites with a polyphenolic structure and antioxidant properties [1]. They belong to the flavonoid class and are found mostly in plants of the Fabaceae family. Soy (Glycine max L.) is the major natural source of isoflavones, and the benefits associated with a soy diet occur mostly because of these phytochemicals. Other natural sources of isoflavones are red clover (Trifolium pratense L.), alfalfa (Medicago sativa L.), and species of the genus Genista. All of these plants are of phytotherapeutic and nutraceutical significance, and their by-products, herbal teas, and food supplements are often used.
Several epidemiological studies have demonstrated the benefits of dietary isoflavones in menopause and in multiple chronic pathologies, including cardiovascular diseases, osteoporosis, and hormonal cancers. The main mechanisms of action of isoflavones, their benefits to human health, and the factors involved in the modulation of their bioactivity are presented in this chapter. Moreover, the analytical methods used for their quantification in plant and food samples are introduced. These methods are very important for evaluating human exposure to isoflavones and for assessing the optimum intake for human well-being.
The absorption of aglycones is fast and efficient. Plasmatic isoflavone levels increase up to micromolar values after the consumption of soy-based foods, compared to the nanomolar (≤40 nM) levels found in diets without soy [4]. In the first pharmacokinetic study on isolated and purified isoflavones, a single dose of 50 mg of aglycone, or the equivalent dose of the corresponding β-glycoside, was given to healthy adult volunteers. The plasmatic peak values (Cmax) were 341 ± 74 ng/mL for genistein and 194 ± 30.6 ng/mL for daidzein. The times to peak (tmax) were 5.2 and 6.6 hours after direct ingestion of the aglycones, and 9.3 and 9.0 hours after ingestion of the β-glycosides genistin and daidzin, due to the time required for their hydrolysis. The bioavailability of genistein and daidzein (based on the area under the plasma concentration-time curve) was higher after consumption of the β-glycosides [5].
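The AUC-based bioavailability comparison described above can be illustrated with a simple trapezoidal-rule sketch. The concentration-time values below are invented for illustration (they are not the data of the cited study); only the qualitative ordering — a higher AUC for the β-glycoside despite a later, lower-looking early phase — mirrors the finding in the text:

```python
import numpy as np

# Hypothetical plasma concentration-time profiles (ng/mL) for one isoflavone
# given as aglycone vs. as beta-glycoside; values are illustrative only.
t = np.array([0.0, 1, 2, 4, 6, 9, 12, 24])            # hours post-dose
c_aglycone  = np.array([0.0, 80, 180, 300, 340, 250, 150, 30])
c_glycoside = np.array([0.0, 20, 60, 150, 260, 330, 280, 60])

def auc_trapezoid(conc, time):
    """Area under the concentration-time curve by the trapezoidal rule."""
    return float(np.sum((conc[1:] + conc[:-1]) / 2 * np.diff(time)))

auc_a = auc_trapezoid(c_aglycone, t)
auc_g = auc_trapezoid(c_glycoside, t)
print(auc_a, auc_g, round(auc_g / auc_a, 2))  # relative bioavailability = AUC ratio
```

With these made-up profiles the glycoside AUC exceeds the aglycone AUC, which is the direction reported in the study; the actual ratio depends entirely on the real concentration data.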
Formononetin and biochanin A can be transformed into daidzein and genistein, respectively, through 4′-O-demethylation by the gut microflora or in the liver [6]. Aglycones can be further metabolized through several steps: reduction, deoxygenation, hydroxylation, and C-ring cleavage. Daidzein forms S-(−)equol and O-desmethylangolensin (O-DMA) via dihydrodaidzein (Figure 2). Similarly, genistein is metabolized first to dihydrogenistein and then to 5′-hydroxy-equol and p-ethylphenol (Figure 2). Another possible minor pathway is the hydroxylation of the isoflavone rings at different positions, catalyzed by hepatic cytochrome P450 isoenzymes [2]. Metabolites with phenolic or polyphenolic structures are conjugated to O-glucuronides and sulfate esters during and after absorption through the gut barrier, and more intensely in the liver. The conjugated metabolites are excreted in urine or bile and undergo enterohepatic circulation [4,7].
Gut microbiota play a very important role in isoflavone metabolism. The positive effects of a soy-rich diet derive from the existence of gut microorganisms capable of intense metabolization of isoflavones. This is the so-called equol-producer phenotype, responsible for metabolizing daidzein to equol and identified through the equol/daidzein ratio in 24-hour urine. Asian people (Japanese, Korean, or Chinese) and Western adult vegetarians are 50-60% equol producers, whereas only 25-30% of the general Western population are. This phenotype is rather stable and cannot be modulated through prebiotic or probiotic nutritional interventions [8]. However, there are differences between human and animal metabolism, and therefore in vivo animal results are not directly transferable to humans [9]. All tested animals had equol in urine after the ingestion of soy or clover [8]. Notably, in rodents equol constitutes 70-90% of serum isoflavones, compared to humans, where only 30% of the absorbed daidzein is metabolized to equol [4].
Isoflavone content in different sources
Isoflavones can be found in legumes [10][11][12], nuts, some fruits such as currants and raisins [13], coffee [14], and cereals [15], but the most important dietary sources are soybeans and their by-products [10,12]. The content of isoflavones in several plants and foods is presented in Tables 1 and 2. Soy can be ingested as textured soy protein, as soy milk or drink, added to many fortified foods (e.g., energy bars, cereals, baby formula), or consumed as fermented soybean products, such as miso, natto, and tempeh (Table 3) [12]. Also, many food supplements containing soy isoflavones are on the market [16].
Isoflavone content in plants can vary greatly (up to threefold) for the same variety depending on growth conditions, geographical area, year, biotic stress factors (e.g., pests), and abiotic stress factors, such as temperature, nutritional status, or drought [4]. Dietary culture has an especially large influence on isoflavone content in the diet. Asian and vegetarian diets provide 20-50 mg isoflavones/day, in some cases reaching 100 mg/day, while the Western diet contributes only 0.2-1.5 mg isoflavones/day [2]. Based on a recent report of the European Food Safety Authority (EFSA), in Europe the dietary isoflavone intake is usually under 1 mg/day, despite an increase in soy food consumption [17]. The differences between the types of diet concern both the amount of isoflavones in foods and the type of food consumed. In the Western diet, solid processed soy products (such as tofu) and soymilk dominate, and they contain both glycosides (genistin and daidzin, which are stable during processing) and aglycones.
Mechanism of estrogen-like action of isoflavones
According to the xenohormesis theory, plants synthesize phytochemicals to withstand and adapt to stress. Indeed, isoflavone biosynthesis depends on the environmental conditions in which the plant grows and is stimulated by stress. Such stress-induced plant compounds have the ability to upregulate stress-adaptive pathways in animals and humans. In the body, the biological effects of isoflavones are exerted by modulating pathways mediated by estrogen receptors (ERs) or various key enzymes involved in cellular signaling or metabolism, and by their antioxidant potential [4].
The estrogenic/antiestrogenic effects
Isoflavones produce both estrogenic and antiestrogenic effects through several mechanisms. Due to their structural similarity to 17β-estradiol, they have the ability to bind to the nuclear ERs, but their affinity for these receptors is rather weak. Only genistein shows a stronger affinity for ERβ, to which it binds preferentially. Its relative affinity (0.87) is closest to that of the reference hormone, 17β-estradiol. The affinity of daidzein for these receptors is 0.005, but equol, its metabolite, has a 5.7 times stronger affinity, thus increasing its estrogenic potential. The affinity for ERα decreases as follows: genistein > equol > daidzein, with values of 0.04, 0.005, and 0.001, respectively. The affinities of the other isoflavones are below 0.0001 [2,4].
Isoflavones induce agonist or antagonist effects depending on the level of endogenous estrogen. In people with high estrogen levels (premenopausal women, especially in the follicular phase of the menstrual cycle), isoflavones bind to the estrogen receptors. Because of their weak estrogenic potency, they exert an antagonist effect, blocking the action of endogenous estrogens on their receptors. At low concentrations of endogenous estrogens (women in menopause or after ovariectomy, or males), the estrogenic action of isoflavones becomes evident, showing an additive agonist effect [34]. This is the reason why isoflavones can be used as a long-term complementary or alternative hormone therapy [35].
Isoflavones and their active metabolites can bind to membrane ERs and induce rapid non-genomic effects through which they modulate cellular metabolism. Thus, they can change protein kinase and lipid kinase cell signaling pathways [1]. It is believed that the activation of these signaling pathways by isoflavones causes some beneficial effects, in particular in tissues that are not specific targets for estrogens. In the circulatory system, isoflavones induce vasodilation by increasing the production of nitric oxide (NO) after activation of the endothelial NO synthase. In the central nervous system, they improve cognitive function by affecting cell membrane permeability and altering neuronal excitability. In the skeletal system, isoflavones inhibit tyrosine kinase, causing changes in alkaline phosphatase activity. They also induce the apoptosis of osteoclasts, suppress the formation of osteoclasts [34], and prevent bone demineralization [35].
Isoflavones also influence the activity of some of the enzymes involved in the metabolism of the sex steroid hormones. At low concentrations they inhibit 5α-reductase (the enzyme responsible for the conversion of testosterone to 5α-dihydrotestosterone) and aromatase (involved in the conversion of testosterone to estradiol), but at high concentrations they increase aromatase activity. Isoflavones have an affinity for sex hormone-binding globulin (SHBG) and induce its expression. Therefore, they affect the free steroid hormone level in the systemic circulation. However, these outcomes depend on many factors, including species, gender, and hormonal status [35].
Xenoestrogens can modulate aromatase enzyme activity and thereby induce alterations in fat and carbohydrate metabolism through effects on ERα. Decreased endogenous estrogen signaling at ERα, aromatase inhibition, or mutations affecting the enzyme activity have been correlated with visceral or truncal obesity, hyperlipidemia, glucose intolerance and insulin resistance, low physical activity, and reduced energy expenditure. Isoflavones compensate for the estrogen deficit and have the ability to prevent the associated negative effects. Asian diets, rich in isoflavones, are correlated with a low incidence of obesity and metabolic syndrome, a favorable plasma profile, and a reduced body mass index in postmenopausal women [4].
Isoflavones and their effects on diseases
Numerous epidemiological and clinical studies have demonstrated the protective role of dietary isoflavones against the development of specific menopause symptoms [36][37][38] and several chronic diseases, including cardiovascular diseases [39,40], osteoporosis [38], cognitive impairment [37], and hormone-dependent cancers [41][42][43]. Based on the human health benefits of a soy diet, the Food and Drug Administration (FDA) approved the use of the following health claim on labels: "25 grams of soy protein a day, as part of a diet low in saturated fat and cholesterol, may reduce the risk of heart disease" [44].
Isoflavones, like all polyphenols, have a strong antioxidant activity. They can neutralize free radicals and prevent lipid peroxidation by stopping the chain reactions. Isoflavones also induce the antioxidant enzymes (glutathione peroxidase, catalase, and superoxide dismutase) and inhibit the expression of some enzymes, such as xanthine oxidase [1]. The antioxidant protective action of isoflavones from soy or from plant extracts, such as Trifolium pratense L. or Genista tinctoria L., was proven in clinical studies [45,46], as well as in animal models [32,47].
Anticarcinogenic activity of isoflavones
The anticarcinogenic potential of isoflavones is based on multiple actions: binding to estrogen receptors (ERs), changing cell signaling pathways, and inhibiting the key enzymes involved in the metabolism of sex hormones. Isoflavones also exert positive anticarcinogenic effects through ER-independent mechanisms, such as antioxidant activity, reduction in the bioactivation of carcinogens, and stimulation of detoxification [2,48].
Among isoflavones, the anticarcinogenic activity of genistein has been assessed most thoroughly. Genistein initiates apoptosis, alters cell proliferation and angiogenesis, and inhibits metastasis in many types of cancer cells [49]. It is a tyrosine kinase inhibitor; therefore, in breast cancer cells it slows down tumorigenesis, in the circulatory system it prevents tumor vascularization, and in the nervous system it induces neuroprotective effects. In addition, genistein affects tumorigenesis by inhibiting DNA topoisomerases I and II [50], altering epigenetic regulation (both histone methylation and DNA methylation), and activating tumor suppressor genes [51]. As a polyphenol, genistein has antioxidant [1] and anti-inflammatory potential [52]. Another possible pathway for genistein is the competitive inhibition of estrone metabolism through cytochrome P450 isoenzymes by altering the 2-hydroxy-estrone (2-OH-E1)/16α-hydroxy-estrone (16α-OH-E1) ratio, as noticed in vitro [53]. While 2-OH-E1 is a weak estrogen, 16α-OH-E1 has an important role in carcinogenesis, showing a strong estrogenic effect and genotoxic properties [54]. 16α-OH-E1 covalently binds to the estrogen receptors and thus stimulates cell proliferation [55]. The 2-OH-E1/16α-OH-E1 ratio has been proposed and studied as a biomarker of breast cancer risk [55][56][57][58][59], but its significance is now controversial. At high concentrations, genistein decreases the hydroxylation of estrone in position 2 in favor of hydroxylation in position 16α [55]. Other studies show that genistein has no mutagenic or clastogenic activity in vivo; at high concentrations, however, it has clastogenic potential in vitro, explained by its topoisomerase-inhibitory effect, which is known to cause chromosome damage above a certain threshold dose [60].
Anti-proliferative effects of high concentrations of genistein were demonstrated in all breast cancer cells, both ER-positive and ER-negative. However, several studies show that genistein exerts both anti-proliferative and proliferative effects, depending on the concentration, type of tumor, level of endogenous estrogens present in the tissue, or stage of development. At low physiological concentrations, genistein stimulates tumorigenesis and cancels the effects of tamoxifen in ER-positive breast cancer cells [50]. Similar dual effects were observed in the case of tamoxifen and other selective estrogen receptor modulators (SERMs) [16].
In fermented soybean products (e.g., natto, miso, tempeh), aglycones can undergo changes under the effect of enzymes produced by the microorganisms involved in the fermentation process. Thus, ortho-hydroxygenistein (6-OHG, 8-OHG, 3′-OHG) and ortho-hydroxydaidzein (6-OHD, 8-OHD, 3′-OHD) were identified. These compounds are not synthesized by the plants. The hydroxylation in the ortho position gives these molecules a high antioxidant potential and a free radical scavenging activity. Moreover, several other properties have been demonstrated: suppression of cell proliferation, inhibition of tyrosinase (anti-melanogenesis properties), and antimutagenic, anti-inflammatory, and hepatoprotective activities [18].
Equol has a higher estrogenic potential than its precursor daidzein and a preferential affinity for ERβ, as already stated. This detail is of high interest for its beneficial effect in the treatment of prostate cancer, since both isomers, S-(−)equol and R-(+)equol, can bind dihydrotestosterone in vivo without having an affinity for the androgen receptor. Therefore, equol prevents the endogenous hormone from exerting its stimulating effect on prostate growth. In addition, equol possesses the highest antioxidant capacity of all isoflavones tested. It causes blood vessel relaxation, modifies the inflammatory response in activated macrophages, and has beneficial effects in cardiovascular and inflammatory diseases [52].
Effects of isoflavones on hormone-dependent cancers
Clinical studies show contradictory results regarding the efficacy of isoflavones in the treatment of breast cancer. The effects depend on a number of factors such as age, gender, hormonal status, type of isoflavones consumed (soy proteins or isolated isoflavones), dose, diet (type of food), and extent of consumption [2].
A recent meta-analysis of 35 studies shows that soy isoflavones lower the risk of breast cancer in both premenopausal and postmenopausal women. The effect is more evident in Asian women than in those living in Western countries, probably due to differences in the quality (traditionally fermented foods) and quantity of the isoflavone products ingested [41]. In Asian women, a diet rich in soy food lowers breast cancer risk by 30% [61]. A higher prevalence of the equol-producer phenotype in Asian populations can be an essential factor; this phenotype is associated with a substantial reduction in the risk of breast cancer. Several specific biomarkers are favorably modified, such as sex hormone-binding globulin (SHBG) and steroid hormone levels in plasma, a higher urinary 2-hydroxy-estrone/16α-hydroxy-estrone ratio, and a lower mammographic breast density [2]. However, because several studies have provided mixed or contradictory results, the general recommendation for patients diagnosed with estrogen-dependent breast cancer is to avoid consuming high quantities of isoflavone-containing products. Indeed, isoflavones are selective estrogen receptor modulators (SERMs), and their effects depend on multiple factors.
Another meta-analysis of five cohort studies, which included more than 11,000 female patients diagnosed with breast cancer, focused on the post-diagnostic relationship between consumption of soy foods and mortality or cancer recurrence. The study concluded that the ingestion of soy foods reduced mortality and recurrence in all types of breast cancer, especially in ER-negative, ER-positive/PR-positive, and postmenopausal patients [42]. In women diagnosed with breast cancer under tamoxifen treatment, the consumption of plants containing isoflavones did not alter plasma levels of the drug and its metabolites [62]. Moreover, a recent study shows that a moderate intake of soy isoflavones (5-10 g soy protein/day) would have an optimal effect on tamoxifen treatment in these patients [63].
In some studies [64], excessive consumption of soy was associated with a negative impact on male fertility and reproductive hormones and with disruption of thyroid gland function; in other studies, these effects were inconsistent [65].
Isoflavones can modulate the toxicity of other xenoestrogens, but the interactions are complex and difficult to predict relying only on in vitro steroid receptor affinities [66]. Multiple mechanisms, both estrogenic and non-estrogenic (such as oxidative stress), are involved in these interactions [32,47,53]. The European Food Safety Authority (EFSA) has recently conducted a systematic study of the published medical literature, focusing on the correlation between the intake of soy isoflavones and the induced effects on the breast (mammographic density, proliferative marker Ki67 expression), uterus (endometrial thickness, histopathological changes), and thyroid (thyroid hormone levels). The results showed that an intake of 35-150 mg isoflavones/day does not affect these organs in peri- and postmenopausal women [17]. Isoflavones have demonstrated efficacy against prostate cancer in several settings: in vitro on prostate cancer cell lines, in vivo, and in numerous clinical trials [43,67,68]. A recent meta-analysis suggests that phytoestrogen intake, mostly genistein and daidzein, can be correlated with a decreased risk of prostate cancer [69].
Isolation of isoflavones in foods and vegetable materials
In recent years, due to the health benefits provided by isoflavones, greater attention has been paid to the analytical methods that allow identification and quantification of isoflavones in different types of samples: (a) food, for assessing dietary intake [15,70]; (b) food supplements, for standardization of nutraceuticals [5,71]; (c) vegetable products, for phytotherapeutic evaluation [19,20,28]; and (d) human biological samples (plasma, urine) [5]. These analytical methods are commonly used for assessing isoflavone bioavailability and in pharmacokinetic or pharmacological studies.
The methods used to isolate isoflavones from food are selected as a function of the nature of the food, the type of isoflavones analyzed (total aglycones, or aglycones and glycosides), and the instrumental method used for identification and quantification. Several examples are presented below.
Liggins et al. isolated isoflavones from cereals and derivatives after prior sonication in a polar solvent (methanol/water 4:1, v/v), in order to break apart the cellular material, followed by filtration and evaporation of the solvent under nitrogen. In order to determine total aglycones, glycosides were hydrolyzed in an acid medium (0.1 M acetate buffer, pH 5) by overnight incubation at 37 °C in the presence of cellulase (an enzyme used for the hydrolytic removal of the carbohydrate moieties). The aglycones were extracted into ethyl acetate, derivatized, and analyzed using GC-MS [15]. Otieno et al. analyzed isoflavones from fermented and unfermented soy milk. For solubilization of the analytes, the freeze-dried sample was refluxed in methanol for 1 hour and filtered; after adding the internal standard, the solvent was evaporated to dryness under nitrogen. The residue was suspended in a buffer (10 mM ammonium acetate containing 0.1% trifluoroacetic acid) and centrifuged, and the supernatant was filtered and analyzed using high-performance liquid chromatography (HPLC) [74].
Extraction and analysis of isoflavones in soybeans can be realized through maceration of the powdered beans with 70% ethanol at room temperature for 24 hours under constant stirring. After centrifugation and filtering, the supernatant is analyzed directly by HPLC [70]. Likewise, analysis of isoflavones contained in food supplements requires only simple preparation of the samples: fine powdering of tablets, refluxing in 80% methanol for 1 hour, filtering, and injection into the HPLC system [5].
Hydroalcoholic extracts or tinctures can be prepared from either fresh or dried and pulverized vegetable materials. The hydroalcoholic extracts can be made in 70% ethanol or methanol, by refluxing and filtration; by cold maceration, pressing, and filtration [20]; by percolation [28]; or using modern methods, such as ultrasound-assisted extraction in 50% ethanol [19]. The extracts can be analyzed directly by LC-MS/MS, after adequate dilution [20], or they can be subjected to acid hydrolysis [19] in order to release the aglycones. Further, the aglycones can be assessed directly or after liquid-liquid extraction, which concentrates the analytes [19].
In biological samples (e.g., plasma and human urine), isoflavones can be found in different forms: as aglycones (active metabolites), aglycone derivatives (with or without bioactivity), or conjugated metabolites (β-glucuronides and sulfate esters). Isoflavone analysis can focus on individual quantification of aglycones and their metabolites or on quantification of aglycones after hydrolysis of the conjugated forms. Hydrolysis of conjugated metabolites is achieved by incubation at 37 °C with a mixture of β-glucuronidase/sulfatase in the presence of a buffer (0.5 M acetate) at pH 4.5 for several hours or overnight. Isolation of the free forms and/or of those released after hydrolysis can be done by liquid-liquid extraction or solid-phase extraction [5].
Quantification of isoflavones and their derivatives can be achieved in two ways: (a) by determining the free aglycones after prior acid [19,70,72], alkaline [72], or enzymatic [72] hydrolysis of the glycosides in the sample, and (b) by simultaneously analyzing the glycosides and aglycones present in the sample [20,28]. GC-MS methods have been used less frequently of late, because they require an additional step of derivatizing the isoflavones into volatile compounds [5,15]. This additional step increases both the time and the cost of the analysis and represents a potential source of error [28].
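In either approach, quantification ultimately rests on an external calibration relating chromatographic peak area to analyte concentration. A minimal sketch of this step is given below; the standard concentrations and peak areas are hypothetical values invented for illustration, not data from any of the cited studies.

```python
import numpy as np

def quantify(peak_areas_std, conc_std, peak_area_sample):
    """Fit a linear calibration (area = slope*conc + intercept) by least
    squares, then invert it for an unknown sample's peak area."""
    slope, intercept = np.polyfit(conc_std, peak_areas_std, 1)
    return (peak_area_sample - intercept) / slope

# Hypothetical genistein standard series: concentration (ug/mL) vs. peak area
conc = [1.0, 5.0, 10.0, 25.0, 50.0]
areas = [12.1, 60.4, 120.9, 302.2, 604.5]   # made-up, nearly linear response

c_sample = quantify(areas, conc, peak_area_sample=181.0)
print(round(c_sample, 1))  # ~15.0 ug/mL for this synthetic data
```

In practice an internal standard and weighted regression are often used, but the inversion of a fitted calibration line is the common core of both quantification strategies.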
Generally, HPLC-UV is not sensitive enough (Table 4) for the quantification of low levels of isoflavones in plant extracts [19] or human plasma [5]. This method often requires a hydrolysis step to transform the glycosides into aglycones, followed by quantification of the total aglycones in the sample [71].
In order to correctly identify new isoflavones or isoflavone derivatives present in the samples analyzed, liquid chromatography coupled with mass spectrometry (LC-MS) and tandem mass spectrometry (LC-MS/MS) are the preferred methods (Table 4), owing to their speed, selectivity, sensitivity, and robustness. In addition, mass spectrometric detection allows reliable identification of the compounds based on molecular weight and ion charge. For the quantification of isoflavones, the pseudo-molecular ions or the fragment ions resulting from fragmentation are monitored. In LC-MS/MS analysis, compounds can be identified even if their chromatographic separation is incomplete, which is an advantage [74]. A shorter analysis can be realized by ultra-performance liquid chromatography (UPLC) [19,28]. This method uses columns packed with very small particles (1.7 μm) and consequently performs separations with superior resolution in a shorter time and with lower consumption of the mobile phase.
Isoflavones have a polyphenolic structure and can easily lose a proton to form negative pseudo-molecular ions [M-H]− [20]. However, they can also be detected after ionization in positive mode as [M+H]+ [74]. Isoflavones are polar compounds and form ions in solution; for this type of compound, electrospray ionization (ESI) is the most commonly used source to obtain analytical ions. Atmospheric pressure chemical ionization (APCI) is the source preferred for non-polar analytes that ionize in the gas phase, and isoflavones often give a poor response with this ionization source [28]. The fragmentation patterns of isoflavone glycosides (malonyl-glycosides, acetyl-glycosides, glycosides, aglycones) follow a similar trend.
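As a worked example of the m/z values monitored in ESI, the [M-H]− and [M+H]+ ions of an isoflavone can be computed from its elemental formula and the monoisotopic atomic masses. The sketch below uses genistein (C15H10O5) purely for illustration; the atomic and proton masses are standard tabulated values.

```python
# Monoisotopic atomic masses (u); the proton mass accounts for (de)protonation
MASSES = {"C": 12.0, "H": 1.007825, "O": 15.994915}
PROTON = 1.007276

def monoisotopic_mass(formula):
    """Sum monoisotopic atomic masses for a formula given as {element: count}."""
    return sum(MASSES[el] * n for el, n in formula.items())

genistein = {"C": 15, "H": 10, "O": 5}
M = monoisotopic_mass(genistein)   # neutral monoisotopic mass, ~270.0528 u
mz_neg = M - PROTON                # [M-H]- monitored in negative mode
mz_pos = M + PROTON                # [M+H]+ monitored in positive mode
print(round(M, 4), round(mz_neg, 4), round(mz_pos, 4))
```

For quantification, these computed m/z values (or those of characteristic fragment ions) are the masses set in the selected- or multiple-reaction-monitoring windows of the instrument.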
However, each compound has a unique fragmentation pattern that allows its accurate identification (Table 5) [74].
Conclusion
Dietary intake of isoflavones is widespread, mainly due to the high consumption of soybean products. The health benefits of isoflavones justify the interest in this class of bioactive compounds, but the controversial outcomes of some clinical and epidemiological studies require further investigation. In this context, the analytical methods applied for the assessment of isoflavones are very valuable: they allow the evaluation of dietary intake of isoflavones, relate the health benefits to the circumstances in which they are exerted, and highlight natural sources of isoflavones with phytotherapeutic potential.
Table 4. HPLC and UPLC methods applied for analysis of isoflavones in different samples.
Asymmetric-detection time-stretch optical microscopy (ATOM) for ultrafast high-contrast cellular imaging in flow
Accelerating imaging speed in optical microscopy is often realized at the expense of image contrast, image resolution, and detection sensitivity – a common predicament for advancing high-speed and high-throughput cellular imaging. We here demonstrate a new imaging approach, called asymmetric-detection time-stretch optical microscopy (ATOM), which can deliver ultrafast label-free high-contrast flow imaging with well delineated cellular morphological resolution and in-line optical image amplification to overcome the compromised imaging sensitivity at high speed. We show that ATOM can separately reveal the enhanced phase-gradient and absorption contrast in microfluidic live-cell imaging at a flow speed as high as ~10 m/s, corresponding to an imaging throughput of ~100,000 cells/sec. ATOM could thus be the enabling platform to meet the pressing need for intercalating optical microscopy in cellular assay, e.g. imaging flow cytometry – permitting high-throughput access to the morphological information of the individual cells simultaneously with a multitude of parameters obtained in the standard assay.
High-throughput measurement or screening is routinely utilized in clinical diagnostics and basic research in life sciences, such as pathology, drug discovery, aberrant cell screening in stem cell research, rare cancer cell detection, and emulsion droplet/particle synthesis [1][2][3]. Very often, these applications demand enumeration and characterization of a large population of specimens (e.g. >100,000 particles) with a high degree of statistical accuracy. Yet, current approaches have largely been restricted by a trade-off between throughput and accuracy. This is particularly exemplified in the context of cellular assay (or cell-based assay), which is a valuable tool for studying cellular characteristics and dynamics by measuring a multitude of parameters (usually based on fluorescent and scattered light intensity signals) from each cell in a sizable population. In most scenarios, improved measurement accuracy comes with the ability to gain access to the morphological information of the cells, i.e. to capture images of the cells - facilitating better cellular identification/discrimination and thus yielding high-confidence statistical data. However, the acquisition of more spatial information would inevitably lower the measurement throughput, or vice versa. For example, some cell-based assays allow measurement of multiple parameters per cell with a very high throughput, but without spatial information of each cell. Some others, on the other hand, can capture high-resolution cellular images but with low throughput. This is because of the intrinsic trade-off between imaging speed and sensitivity in the image sensors of optical microscope systems 4,5. This explains why the current state-of-the-art imaging flow cytometers can only reach an imaging throughput of ~1000 cells/sec, compared with the throughput of ~100,000 cells/sec of the classical non-imaging flow cytometers 3,6,7.
Realizing image-based cellular assays, which provide access to detailed morphological information of the individual cells and enable delivery of multiparametric cytometry without sacrificing the throughput, is hence of significant value. Such high-throughput combinatorial and complementary measurements would particularly benefit accurate rare cells detection and single-cell analysis within a large single or heterogeneous population of cells [8][9][10] .
In this regard, a new optical imaging modality called time-stretch microscopy, which bypasses the intrinsic limitation of the image sensor, has recently been developed for ultrafast imaging. It has been found particularly pertinent to microparticle imaging in microfluidic flow, with an imaging throughput comparable to conventional non-imaging flow cytometry 11. It is achieved by ultrahigh-speed retrieval of image information (at MHz line-scan rate or frame rate) encoded in the spectrum of a broadband and ultrashort optical pulse (femtoseconds to picoseconds) by converting it into serial temporal data in real time. This technique has been proven able to perform high-throughput image-based cancer cell screening, which is an ideal complementary tool to typical multiparametric flow cytometry 12. However, time-stretch microscopy has so far mostly been operated in bright-field (BF) imaging mode in the longer wavelength range 11,12, and is thus incapable of revealing the high-contrast and detailed morphology of transparent cells - hindering accurate cell recognition and screening. As a result, effective use of time-stretch imaging to date has been limited to microparticle or cell screening in high-speed flow with trivial size and shape differences, especially when the targeted cells are labeled with contrast agents 12. Similar to the classical optical imaging modalities, the time-stretch image quality, which is typically characterized by image resolution and image contrast 13, is still compromised by the higher imaging speed. With the aim of enlarging the scope of time-stretch imaging applications, we here introduce a new time-stretch imaging approach called asymmetric-detection time-stretch optical microscopy (ATOM), for obtaining label-free, high-contrast images of transparent cells at ultrahigh speed, and with sub-cellular resolution.
The central idea is to generate enhanced phase-gradient contrast in the time-stretch image based on a simple asymmetric detection scheme (Fig. 1). The phase-gradient contrast results in a three-dimensional (3D) appearance in the image, resembling the contrast-enhancement effect in Schlieren imaging 14,15. Moreover, by time-multiplexing two ATOM images with opposite phase-gradient contrasts, we further obtain two different contrasts from the same specimen: one with differential (enhanced) phase-gradient contrast and another with absorption contrast, simultaneously. This method decouples the phase-gradient information from absorption, resulting in further enhancement of the image contrast [16][17][18]. Together with operation in the 1-μm wavelength regime, a more favorable spectral window for biophotonic applications as opposed to the telecommunications wavelength band (~1.5 μm) used by most of the previously reported time-stretch imaging systems, the ATOM system presented here is able to achieve higher diffraction-limited resolution and high-contrast cellular time-stretch imaging. Similar to the original time-stretch imaging, the detection sensitivity in ATOM is not compromised by the high-speed operation because of the in-line optical image amplification - a feature rarely implemented in typical optical microscopy. We demonstrate the unique capability of ATOM in visualizing detailed cellular morphology (e.g. normal human blood cells from fresh blood and human leukemic monocytes) without contrast agents in ultrafast microfluidic flow (up to ~10 m/s), which is yet to be demonstrated with the existing time-stretch imaging modality. The achieved flow speed here is equivalent to an imaging throughput of ~100,000 cells/sec.
ATOM thus represents a significant advancement in bringing the essential imaging metrics -high resolution and high contrast -to high-speed time-stretch imaging, making it a genuinely appealing platform for realizing high-throughput image-based cellular assays.
Results
General working principles. In its original configuration, time-stretch imaging is accomplished by a two-step signal mapping process: the spatial information of the specimen is first mapped to the spectrum of a broadband laser pulse by using a diffractive optical element (a diffraction grating in our case, as shown in Fig. 1). The encoded spectrum of the pulse is then mapped (stretched) into a serial temporal data format in real time via group velocity dispersion (GVD) in a dispersive optical fiber. The time-stretched pulse, now encoded with the spatial information of the image, is captured by a high-speed single-pixel photo-detector instead of an image sensor (Fig. 2(a)) 11. In contrast, ATOM further creates phase-gradient contrast in the time-stretch images by asymmetrically detecting the spectrally-encoded pulses prior to the time-stretch process. This is done by off-axis coupling of the encoded pulsed beam into the dispersive fiber core, which essentially acts as the confocal pinhole of the ATOM system (Fig. 1). By introducing an oblique angle θ between the fiber axis and the laser beam propagation axis, the cone of the coupling light becomes asymmetric. In effect, it is equivalent to partially blocking the beam detection path - a common approach taken in Schlieren imaging. Such a detection scheme results in images with phase-gradient contrast, which has a characteristic 3D appearance similar to that in differential interference contrast (DIC) microscopy 16,18. Compared to classical phase-contrast and DIC microscopy, the contrast enhancement mechanism in ATOM does not rely on interference. Instead, it reveals the phase-gradient contrast by detecting the wavefront tilt through a simple asymmetric detection. It also requires no additional polarizing optics to generate the differential phase contrast, as in the case of DIC microscopy.
Therefore, ATOM, operating at an imaging speed orders of magnitude faster than that of ordinary DIC microscopy, represents a simple but robust technique for realizing ultrafast high-contrast optical microscopy, and particularly benefits cellular flow imaging applications.
In ATOM, all the encoded wavelengths are recombined by the diffraction grating and are collected by the dispersive fiber within the same asymmetric coupling light cone. Hence, the same phase-gradient contrast for all wavelengths is preserved (Fig. 1). Note that we choose the asymmetric detection scheme over asymmetric illumination, which can also yield phase-gradient contrast 16,18, because it requires no modification of the original imaging path of the time-stretch microscope, and thus provides more flexibility to optimize the image contrast by simply adjusting the off-axis fiber coupling angle θ. The optical loss due to the off-axis coupling misalignment is not a critical issue in ATOM because it can readily be compensated by the optical gain provided by fiber amplification in-line with the time-stretch process 11.
The general schematic of an ATOM system is shown in Fig. 2(a). It consists of standard spectral shower illumination for time-stretch imaging 11,12,[19][20][21][22], which is operated in a double-pass transmission mode (see Methods). The double-passed spectral shower, which encodes the information of the sample, is then restored back to the original pulsed beam by the same grating and is collected by a dispersive fiber using a fiber collimator lens (see Methods). This pulsed beam is coupled into the fiber with an adjustable off-axis angle θ in order to fine-tune the asymmetric detection condition, and hence the phase-gradient contrast of the final image. Specifically, this is termed single-angle ATOM. The spectrally-encoded pulse is then mapped to time within the dispersive fiber, with a GVD of 0.35 ns/nm. The time-stretch signal is also amplified by an in-line fiber-based semiconductor optical amplifier (SOA), which achieves an on-off gain as high as ~500, in order to compensate the off-axis fiber coupling loss as well as the dispersive loss in the fiber. Without such optical (image) gain, the time-stretch signal is too weak to be detectable (see Supplementary Fig. S1). Finally, the signal is detected by a photo-detector (electrical bandwidth: 8 GHz) and a real-time oscilloscope (sampling rate: 40 GS/s). With the high GVD as well as the high-bandwidth oscilloscope, our current system achieves a diffraction-limited image resolution of ~1.2 μm 21,23.
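The quoted numbers can be tied together with a back-of-envelope calculation: the GVD maps optical bandwidth to a stretched time window, and the digitizer's sampling rate then sets how many samples (and hence spatial points) each line scan contains. In the sketch below, the GVD and sampling rate are from the text, while the illumination bandwidth and spectral-shower width on the sample are assumed values for illustration only.

```python
gvd_ns_per_nm = 0.35     # fiber dispersion (from the text)
fs_GSps = 40.0           # digitizer sampling rate in GS/s (from the text)
bandwidth_nm = 10.0      # assumed illumination bandwidth (not stated here)
fov_um = 50.0            # assumed spectral-shower width on the sample

stretch_ns = gvd_ns_per_nm * bandwidth_nm      # duration of one stretched line
samples_per_line = stretch_ns * fs_GSps        # digitizer samples per line scan
pixel_um = fov_um / samples_per_line           # sampling-limited pixel size

print(round(stretch_ns, 2), round(samples_per_line), round(pixel_um, 3))
# 3.5 ns stretched line, 140 samples, ~0.357 um/pixel: finer than the
# ~1.2 um diffraction limit, so sampling is not the resolution bottleneck
```

This is consistent with the paper's statement that, with this GVD and oscilloscope bandwidth, the final resolution remains diffraction-limited rather than digitizer-limited.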
We stress that a wide range of choices of optical amplification can be used in-line with ATOM operated in the 1-μm wavelength regime. They can be either discrete amplifiers (namely SOA, ytterbium-doped fiber amplifier, or optical parametric amplifier) or distributed amplifiers, such as the distributed fiber Raman amplifier 11,20,24. The optimal choice depends on the required gain bandwidth, total gain, as well as the noise characteristics of the amplifiers 24,25. Note that the GVD achieved is high enough to ensure that the final image resolution is diffraction-limited, and is not limited by the GVD or the detector's bandwidth 21,23. Furthermore, one can access different phase-gradient contrasts simultaneously within each line scan (i.e. each laser repetition period) by time-multiplexing more than one time-stretch signal. An interesting configuration is to generate two time-multiplexed, i.e. time-delayed, replicas which give opposite phase-gradient contrasts, from the same line scan (see Methods and Supplementary Fig. S2). This is simply done by first splitting the spectrally-encoded pulsed beam into two paths - one time-delayed with respect to the other. The two beams are then launched into the same dispersive fiber with opposite coupling orientations with respect to the fiber axis (Fig. 2(a)). Consequently, we can simultaneously obtain two separate time-stretch images of the same specimen, with different contrasts, in a single run of ATOM measurement: one with differential phase-gradient contrast (subtraction of the two signals) and another with absorption contrast (addition of the two signals) (Fig. 2(b)). This method based on two opposite coupling angles, dual-angle ATOM, thus allows the absorption information of the specimen to be decoupled from the phase-gradient information 16,17.
www.nature.com/scientificreports SCIENTIFIC REPORTS | 4 : 3656 | DOI: 10.1038/srep03656
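The sum/difference recombination of the two single-angle images is a simple per-pixel operation. A minimal sketch is given below, assuming the two time-multiplexed signals have already been digitized and aligned into equally shaped arrays; the toy line-scan values are invented for illustration.

```python
import numpy as np

def dual_angle_contrasts(img_a, img_b):
    """Combine two single-angle ATOM images with opposite phase-gradient
    contrasts (beams A and B) into absorption and differential
    phase-gradient images."""
    img_a = np.asarray(img_a, dtype=float)
    img_b = np.asarray(img_b, dtype=float)
    absorption = img_a + img_b        # phase-gradient terms cancel
    phase_gradient = img_a - img_b    # absorption cancels, gradient doubles
    return absorption, phase_gradient

# Toy 1D line scan: a constant absorption dip plus an antisymmetric
# phase-gradient term that flips sign between the two coupling angles
absorb = np.array([0.0, -1.0, -1.0, 0.0])
grad = np.array([0.0, +0.5, -0.5, 0.0])
a, g = dual_angle_contrasts(absorb + grad, absorb - grad)
print(a)  # twice the absorption term
print(g)  # twice the phase-gradient term
```

In a real system the two signals must first be de-interleaved from the time-multiplexed record and registered line by line before this per-pixel arithmetic is applied.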
It should be emphasized that such a time-multiplexing scheme results in no compromise on the imaging speed as long as the total duration of the two replicas does not exceed one line-scan period (i.e. the total duty cycle should be kept <100%). Thus, ultrafast operation can be maintained (i.e. at a line-scan rate in the MHz range), as shown in Supplementary Fig. S2.
Basic performance of ATOM. We first performed ATOM of unstained immortalized normal hepatocyte cells (MIHA) fixed on a glass slide, with two different coupling orientations. In this case, two-dimensional (2D) ATOM images are acquired by scanning the sample stage orthogonally to the spectral shower direction at a single-shot line-scan rate of 1 MHz (see details in Methods). By tuning the fiber coupling angle above and below the fiber axis, we can capture single-angle ATOM images with two opposite phase-gradient contrasts, i.e. the shadow-cast orientation can be switched to the opposite side of the MIHA cell (Fig. 3(a) and (b); see also the corresponding line profile comparison between Fig. 3(c) and (d)).
Computing the sum and difference of these two opposite-contrast single-angle ATOM images allows us to simultaneously obtain two dual-angle ATOM images of the MIHA cell with absorption and differential (enhanced) phase-gradient contrasts, respectively (Fig. 3(e) and (f)). The enhanced phase-gradient contrast revealed by ATOM closely resembles that obtained by white-light DIC microscopy (Fig. 3(g)) but at an ultrafast imaging speed, i.e. each line scan is as short as ~4 ns.
We also performed ATOM of the MIHA cells in ultrahigh-speed flow in a polydimethylsiloxane (PDMS) microfluidic channel to demonstrate the contrast enhancement of ATOM compared to ordinary time-stretch microscopy (BF mode, i.e. on-axis fiber coupling). The channel is designed in a way that the balance between the inertial lift force and the viscous drag force is achieved for manipulating the positions of the individual cells and focusing them in high-speed flow 26,27. This microfluidic technique is essential for ensuring robust imaging by ATOM at the record-high microfluidic flow speed, as high as ~10 m/s, which is limited only by the pressure that the channel can withstand (see Methods and Supplementary Figs. S3 and S4 for detailed design and fabrication steps). Note that this flow speed corresponds to an imaging throughput of up to ~100,000 cells/sec, which is orders of magnitude higher than any existing imaging flow cytometer 3. Here, 2D images are acquired by continuous 1D line-scans at a rate of 26 MHz (governed by the repetition rate of the laser), which is naturally provided by the unidirectional cell flow, without any laser beam or sample stage scanning. The enhanced contrast in the single-angle ATOM images compared to the BF time-stretch images is clearly evident in Fig. 3(h) and (i) (see more examples in Supplementary Fig. S5) - enabling visualization of detailed cellular morphology under such an ultrahigh flow speed.
Figure 1 | Key approach of enabling phase-gradient contrast in ATOM. In a typical configuration of time-stretch optical microscopy, the spatial information of the specimen is first mapped to the spectrum of a broadband laser pulsed beam by using a diffraction grating through bright-field illumination. The spectrally-encoded pulsed beam is then coupled on-axis into the fiber core of the dispersive fiber, which acts as the confocal pinhole of the imaging system. In contrast, in ATOM the spectrally-encoded pulsed beam is coupled off-axis into the fiber core. By introducing an oblique angle θ between the fiber axis and the beam propagation axis, the cone of the coupling light becomes asymmetric (darker gray area of the coupling beam shown in the figure). In effect, it is equivalent to partially blocking the beam detection path, and thus to asymmetrically capturing the light from the sample through the grating - giving rise to the phase-gradient contrast (see the areas enclosed by the dashed lines of both the red and blue components). Since all the encoded wavelengths (e.g. the blue and red components depicted in the figure) are recombined by the same diffraction grating and are collected by the fiber within the same asymmetric coupling light cone, the same degree of phase-gradient contrast for all wavelengths is preserved (the areas enclosed by the dashed lines of both the red and blue components emerging from the specimen should have the same cone angles).
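The throughput figure can be sanity-checked from the stated flow speed and line-scan rate; the average cell-to-cell spacing along the flow used below is an illustrative assumption, not a number from the text.

```python
flow_speed = 10.0       # m/s, flow speed from the text
line_rate = 26e6        # Hz, laser repetition rate = line-scan rate

# Spacing between consecutive line scans along the flow direction
line_pitch_um = flow_speed / line_rate * 1e6
print(round(line_pitch_um, 2))   # ~0.38 um, finer than the ~1.2 um
                                 # optical resolution, so no undersampling

# Assumed average cell-to-cell spacing along the flow (illustrative)
cell_spacing_um = 100.0
cells_per_sec = flow_speed / (cell_spacing_um * 1e-6)
print(int(cells_per_sec))        # ~100,000 cells/sec
```

With these assumptions, one cell passes the interrogation line every 10 μs, reproducing the ~100,000 cells/sec throughput quoted in the text.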
The contrast enhancement can also be further verified by investigating the 2D spatial Fourier transforms (FT) of the ATOM images, which show enhanced amplitudes as well as spectral bandwidths within the diffraction-limited passband of the system, compared with those of the BF time-stretch images (see Supplementary Fig. S6). Dual-angle (differential) ATOM, as shown in Fig. 3 and Supplementary Fig. S6, can further enhance the image contrast. Hence, it demonstrates the obvious advantage of ATOM over BF time-stretch - accessing the phase gradient of the specimen to boost the time-stretch imaging contrast without sacrificing the speed (maintained at tens of MHz). It should be noted that the ability to perform high (phase-gradient) contrast microscopy at an exceptionally high flow speed (up to ~10 m/s) is unique to ATOM and could not be readily accomplished by any state-of-the-art high-speed camera (see the image captured by a high-speed CMOS camera, as shown in Supplementary Fig. S5(b)).
Figure 2 | … (shown in box 1 of (b)) is first spatially dispersed by a diffraction grating to generate a 1D spectral shower. It is then focused by an objective lens for illumination such that different wavelengths are focused on different positions on the sample, i.e. the flowing cells in this example. The spectrally-encoded light double-passes the cell through another objective lens and a mirror. The double-passed spectrally-encoded pulses, which are now encoded with the spatial information of the sample, are then restored back to an undispersed pulsed beam by the grating (see box 2 in (b)). They are further split into two different paths (beams A and B) such that the two beams are coupled into the fiber core with equal angles but opposite orientations. Such off-axis coupling angles of both beams are controlled by the steering mirrors shown in (a). One of the beams is time-delayed so that they are multiplexed in time without temporal overlap. The two time-multiplexed signals reveal opposite phase-gradient contrasts because of the opposite off-axis coupling orientations of beams A and B (see box 3 in (b)). The time-multiplexed signals then undergo the time-stretch process and the in-line optical image amplification to perform wavelength-to-time mapping and to compensate the intrinsic loss in the fiber (see box 4 in (b)). A photodetector and a real-time oscilloscope are used for detecting the signals. Note that the time-stretch signal in each pulse period corresponds to one single-shot line-scan. In each line-scan, two different signals, which have the opposite phase-gradient contrasts (A and B), are obtained simultaneously. The 2D image captured through either beam A or B is named a single-angle ATOM image. By calculating the difference of the two signals (A − B) for each line scan, one can obtain a 2D dual-angle ATOM image with differential (enhanced) phase-gradient contrast. By calculating the sum of the two signals (A + B), one can on the other hand obtain a 2D dual-angle ATOM image with absorption contrast. A homogeneous sphere is depicted in (b) as an example.
High-contrast cellular microscopy in ultrafast flow by ATOM. We further demonstrate ultrafast and high-contrast ATOM by imaging stain-free human hepatocellular carcinoma cells (BEL-7402) and human cervical cancer cells (HeLa), flowing at an ultrahigh speed of 7-8 m/s in the same microfluidic channel. Representative ATOM images are shown in Fig. 4 (more images are shown in Supplementary Figs. S7 and S8). The images were captured by averaging every 4 single-shot line-scans in order to further improve the signal-to-noise ratio, resulting in an effective line-scan rate of ~6.5 MHz. The ATOM images show negligible image blur even at such a high flow speed, thanks to the ultrashort exposure time of ~20 ps, which is governed by the time-bandwidth product of each spectrally-resolvable subpulse of the spectral shower 11,21. Such an exposure time for real-time imaging is essentially unattainable by any existing image sensor. More importantly, ATOM reveals enhanced time-stretch image contrast, particularly apparent in the dual-angle (differential) ATOM images (Figs. 4(b) and (e)), in which the absorption information has been separated out (see Figs. 4(c) and (f)).
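The claim of negligible motion blur follows directly from the effective exposure time: the blur is simply flow speed × exposure. A quick check with the quoted numbers:

```python
flow_speed = 8.0      # m/s, upper end of the 7-8 m/s quoted in the text
exposure = 20e-12     # s, ~20 ps effective exposure per spectral subpulse

blur_nm = flow_speed * exposure * 1e9   # motion blur in nanometers
print(round(blur_nm, 2))   # 0.16 nm, orders of magnitude below the
                           # ~1.2 um optical resolution
```

Even at the 10 m/s maximum flow speed reported elsewhere in the paper, the blur stays well below a nanometer, which is why the single-shot images remain sharp.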
Moreover, high-contrast (differential phase-gradient-contrast) ATOM allows unambiguous imaging of the individual cells in an aggregate, and thus enables clear identification of such aggregate events in a high-speed flow (Figs. 4(b) and (e)). We note that conventional non-imaging flow cytometers are incapable of identifying such aggregated-cell/cluster events, which are often falsely regarded as a single large entity. Consequently, these aggregated cells would easily be missed by standard forward-scatter or side-scatter gating analysis, resulting in erroneous statistical results 2,6. By showing the ability to visualize single cells with high contrast and at high speed, the results presented here demonstrate the unique strength of ATOM for realizing imaging flow cytometry with high throughput (~100,000 cells/s) as well as high accuracy of image-based cell identification and screening.
Finally, we captured ATOM images of acute monocytic leukemia cells (THP-1) as well as normal human blood cells, from fresh blood obtained from a healthy donor, flowing in the microfluidic channel at a speed as high as 10 m/s. It is worth noting that the differential phase-gradient contrast in the dual-angle ATOM images of the THP-1 cells in general enables us to visualize the nuclei (Figs. 5(b) and (d); more images are shown in Supplementary Fig. S9); imaging such a sub-cellular structure without labeling has yet to be demonstrated in ordinary BF time-stretch imaging. In addition, ATOM is also able to identify the biconcave disk shape of the red blood cells (RBCs) (Fig. 5(g)) and can differentiate them from swollen RBCs in the flow, which have either spherical or elliptical shapes (Fig. 5(h)). More images are shown in Supplementary Fig. S10. These different morphologies are confirmed by comparing the ATOM images with ordinary DIC images of the same blood sample (see Supplementary Fig. S10).
Discussion
We have demonstrated a new ultrafast optical imaging technique, in the context of imaging in flow, called ATOM, which delivers high-contrast single-cell imaging in an ultrahigh-speed microfluidic flow up to 10 m/s, orders of magnitude faster than typical optofluidic cellular imaging techniques (~mm/s) 28. Although time-stretch optical microscopy has been developed with high-throughput cell screening applications in mind, its BF imaging mode, which results in low image contrast, hinders accurate image-based cellular identification. Consequently, a low false-positive rate in the statistical measurement cannot be guaranteed, despite its unusually fast imaging speed. Without resorting to an exogenous contrast agent, ATOM exploits the label-free/stain-free phase-gradient contrast of the cells through a simple and robust asymmetric detection scheme. Thanks to its ultrafast line-scan rate (> tens of MHz), ATOM can further access both the differential (enhanced) phase-gradient contrast and the absorption contrast simultaneously by time-multiplexing two spectrally-encoded pulses, without compromising the final imaging speed. Both phase-gradient and absorption contrasts provide additional information about the individual cells and thus can potentially serve as useful parameters for cell identification and screening. We note that the phase-gradient information of the specimen is accessed in ATOM without the need for interferometry. Since the time-multiplexing scheme can also be realized in a fiber-based implementation (e.g. replacing the free-space delay arms with fiber delay-lines and miniaturized relay optics), this compact and flexible asymmetric detection scheme can easily be incorporated into any existing BF time-stretch microscope without significant system modification.
We also stress that the ATOM system reported here operates in the shorter near-infrared window, ~1 µm, in contrast to most of the prior work on time-stretch imaging, which operates in the telecommunications band (~1.5–1.6 µm) primarily because of the wide availability of low-loss dispersive fibers, the key element for the time-stretch process. By exploiting proper dispersive elements (e.g. specialty dispersive fibers and few-mode fibers 21,22), we here demonstrate that high GVD (~0.35 ns/nm), and thus high-resolution time-stretch imaging, is also feasible in the 1 µm window. As opposed to the wavelength range of ~1.5–1.6 µm, operating ATOM and other time-stretch imaging modalities in the 1 µm regime also avoids the complication introduced by the photothermal effect on the fluid (mostly aqueous) in the microfluidic channel, which is non-negligible yet often overlooked. This follows from the fact that the water absorption at ~1.5–1.6 µm (absorption coefficient ~10 cm⁻¹) is significantly higher than that at 1 µm (absorption coefficient ~0.01 cm⁻¹) 33. Even in a microfluidic channel with a typical dimension of 100–200 µm, the absorbed power at 1.5–1.6 µm can be high enough to generate local heating (e.g. >10 °C for a power of >10 mW) 34, which can in turn increase the variability of optofluidic bioassays 35 and also create local fluidic flow for microparticle manipulation (the basis of the photothermal tweezer) 36. In contrast, such a photothermal effect is negligible at 1 µm. Together with the higher achievable diffraction-limited resolution at shorter wavelengths, ATOM and time-stretch imaging at 1 µm are not only favorable for optofluidic imaging, but also open up a wider scope of biophotonic applications, especially for in-vivo imaging.
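To make the photothermal argument concrete, the quoted absorption coefficients can be translated into absorbed power fractions via the Beer–Lambert law. The sketch below is our own illustration (not code from the paper), assuming a single pass through a 150 µm water-filled channel; it shows the roughly thousand-fold difference between the two wavelength windows.

```python
import math

def absorbed_fraction(alpha_per_cm, path_um):
    """Beer-Lambert law: fraction of incident power absorbed
    over a path of length `path_um` (micrometres)."""
    return 1.0 - math.exp(-alpha_per_cm * path_um * 1e-4)  # um -> cm

# Water absorption coefficients quoted in the text, for a 150 um channel
f_telecom = absorbed_fraction(10.0, 150)   # ~1.5-1.6 um band -> ~13.9%
f_one_um  = absorbed_fraction(0.01, 150)   # ~1 um band       -> ~0.015%

print(f"absorbed at 1.5-1.6 um: {f_telecom:.1%}")
print(f"absorbed at 1 um:       {f_one_um:.4%}")
```

At 10 mW of incident power, a ~14% absorbed fraction deposits over a milliwatt of heat into picolitres of fluid, which is consistent with the local heating cited from ref. 34.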
The unique feature of ATOM, i.e. the enhanced phase-gradient image contrast, together with the higher diffraction-limited resolution, permits accurate image-based cellular assay with genuinely high throughput. This is particularly exemplified by its capability of visualizing individual mammalian cells in an aggregate under high-speed flow (Fig. 4), an event that is otherwise easily missed in standard non-imaging flow cytometry, which has no access to the spatial information of the cells, diminishing measurement accuracy and reliability. We have also demonstrated the feasibility of imaging stain-free normal human blood cells as well as leukemic cells. Enabling high-contrast cellular imaging in conventional flow cytometry allows accurate cell population identification and classification without substantially relying on subjective manual partitioning (or gating) of cell events. Such gating is universal in flow cytometry data analysis, but commonly results in misinterpretation of the collected statistical data 29,30. The results of ATOM presented here are of great significance for advancing time-stretch imaging for high-throughput imaging flow cytometry with high statistical precision. It is particularly envisaged for rare-cell screening in early metastasis detection, or post-chemotherapy detection of residual cancer cells, a concept called minimal residual disease (MRD) detection 31. The image processing and reconstruction of ATOM are currently done off-line, and the total data size is limited by the memory capacity of the real-time oscilloscope. Real-time ATOM and automated image analysis could be made possible by integrating the system with parallel digital signal processing based on field-programmable gate arrays 12 or graphics processing units. As a result, it would be a powerful tool for high-throughput image-based cellular assay, particularly complementary to the multi-parametric analysis of existing non-imaging flow cytometry.
Methods
Illumination and imaging optics of ATOM. The pulsed beam of a home-built ytterbium-doped mode-locked laser (repetition rate = 26 MHz; center wavelength = 1064 nm) with a 3-dB bandwidth of ~10 nm and a pulse width of 4 ps is spatially dispersed in 1D by a transmission holographic grating (1200 lines/mm) to generate a spectral shower, which is then focused by an objective lens (numerical aperture (NA) = 0.66) for illumination. Another identical objective lens (NA = 0.66) and a mirror are added behind the sample in order to operate ATOM in a double-pass transmission mode (Fig. 2(a)). The double-passed spectral shower is then collected into a dispersive fiber using a fiber collimator lens (NA = 0.25).
Time multiplexing the time-stretch temporal signals. The double-passed spectrally-encoded light is split into two different paths by a beam splitter (power splitting ratio = 45:55). One of the beams (beam A in Fig. 2(a)) is incident on the fiber collimator lens at an angle of +4°, whereas the other beam is time-delayed by ~3.6 ns with respect to beam A and is incident on the same fiber collimator lens at an angle of −4° (beam B in Fig. 2(a)). The tilt angles of the two beams are independently controlled by the two steering mirrors, as shown in Fig. 2(a). The imaging region of interest is mainly near the center of the microfluidic channel, within a size of ~30 µm (i.e. the cell size in most of our experiments), which corresponds to a wavelength bandwidth of ~5 nm and thus a temporal width of ~1.75 ns (with GVD = 0.35 ns/nm). Therefore, a time delay of 3.6 ns guarantees no temporal overlap between the two pulses. The pulses are then time-stretched within a dispersive fiber module, consisting of a 5-km single-mode fiber (SMF) for the 1 µm window (Nufern) and a standard telecommunication SMF (Corning, SMF28), which acts as a few-mode fiber 22. The total GVD achieved is ~0.35 ns/nm. The time-stretched pulses are also amplified by a fiber-based SOA (Superlum), which achieves an on-off gain as high as ~500. Finally, the signal is detected by a photodetector (Picometrix, electrical bandwidth: 8 GHz) and a real-time oscilloscope (Agilent Technologies, sampling rate: 40 GS/s). Having a bandwidth of 10 nm, the stretched pulse has a temporal width of ~4 ns (with GVD = 0.35 ns/nm). Therefore, given the laser repetition rate of 26 MHz (i.e. a period of ~40 ns), the line-scan duty cycle in the current ATOM system is ~2 × (4/40) = 20%. The factor of 2 refers to the two time-multiplexed replicas.
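The bookkeeping in this paragraph (wavelength-to-time mapping via the GVD, the replica delay budget, and the line-scan duty cycle) amounts to a few multiplications. The sketch below simply restates the quoted numbers; the variable names are ours, and the exact values come out slightly below the rounded figures in the text (3.5 ns vs ~4 ns, ~18% vs ~20%).

```python
gvd_ns_per_nm = 0.35   # total group-velocity dispersion of the fiber module
bandwidth_nm  = 10.0   # 3-dB bandwidth of the laser
rep_rate_hz   = 26e6   # laser repetition rate

pulse_width_ns = gvd_ns_per_nm * bandwidth_nm    # 3.5 ns (quoted as ~4 ns)
period_ns      = 1e9 / rep_rate_hz               # ~38.5 ns (quoted as ~40 ns)
duty_cycle     = 2 * pulse_width_ns / period_ns  # factor 2: two replicas

# The ~5 nm band spanning the ~30 um region of interest maps to 1.75 ns,
# comfortably below the 3.6 ns delay, so the replicas never overlap in time.
roi_width_ns = gvd_ns_per_nm * 5.0

print(f"stretched pulse ~{pulse_width_ns:.1f} ns, duty cycle ~{duty_cycle:.0%}")
```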
Image processing of ATOM. The individual time-stretch waveforms are first subtracted and normalized by the background pulse (which has the spectral shape of the laser source). The final 2D ATOM image is obtained by digitally stacking all the pulses, i.e. the line-scans, along the flow direction. The differential phase-gradient and absorption contrasts are further obtained by calculating the difference and sum, respectively, between the two neighboring time-stretch waveforms (which have opposite phase-gradient contrasts). The digital signal processing and image reconstruction are done off-line by a custom program in MATLAB.

Imaging the fixed MIHA cells with ATOM. For the ATOM images shown in Fig. 3, the MIHA cells are fixed on a glass slide, which is scanned perpendicular to the spectral shower direction during imaging by ATOM. The MIHA sample is scanned over 70 lines with a 1 µm step size by a mechanical scanning stage. 2D images are obtained by digitally stacking the 1D line-scans. Every 25 sequential line-scans are averaged, giving an effective single-shot line-scan rate of 1 MHz.
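The reconstruction pipeline described above (background normalization, de-interleaving the two time-multiplexed replicas, then forming their difference and sum) can be sketched in a few lines. This is our illustrative NumPy version, not the authors' MATLAB program, and the interleaved array layout is an assumption.

```python
import numpy as np

def reconstruct_atom(raw, background):
    """Illustrative ATOM reconstruction: normalize each time-stretch
    waveform by the background pulse, de-interleave the two
    time-multiplexed replicas (beam A, beam B), then form the
    difference (phase-gradient) and sum (absorption) images."""
    norm = (raw - background) / background  # background subtraction + normalization
    a, b = norm[0::2], norm[1::2]           # alternating A/B line-scans
    phase_gradient = a - b                  # differential (enhanced) contrast
    absorption = a + b                      # absorption contrast
    return phase_gradient, absorption       # line-scans stacked along flow axis

# Toy usage with synthetic data: 200 interleaved line-scans, 512 samples each
rng = np.random.default_rng(0)
background = 1.0 + 0.1 * rng.random(512)
raw = background + 0.01 * rng.standard_normal((200, 512))
phase_img, abs_img = reconstruct_atom(raw, background)
print(phase_img.shape, abs_img.shape)  # (100, 512) (100, 512)
```

Because the rows of `raw` are already ordered by arrival time, stacking the de-interleaved rows directly yields the 2D image along the flow direction, mirroring the digital stacking described in the text.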
Microfluidic channel design and fabrication. We designed and fabricated a PDMS microfluidic channel platform in which the balance between the inertial lift force and the viscous drag force is exploited to manipulate the positions of the individual cells and focus them in ultrafast flow inside the channel. This microfluidic technique is essential for ensuring robust imaging by ATOM at the record-high microfluidic flow speed (as high as ~10 m/s). Detailed fabrication steps are depicted in Supplementary Fig. S3. The microfluidic platform consists of two parts: a focusing section followed by an imaging section (see Supplementary Fig. S4(a)). The focusing section consists of multiple pairs of connected curved channels with radii of curvature of 400 µm and 1000 µm, respectively, with 16 turns in total (see Supplementary Fig. S4(a)). The width (150 µm) and height (50 µm) of the channel were chosen such that the channel is suitable for focusing cells with sizes ranging from ~5–30 µm. In the imaging section, where the spectral shower illuminates the channel, the channel width is narrowed to 45 µm to further boost the flow speed. Note that the laminar flow condition is still satisfied at such an ultrafast flow: the Reynolds number of our current microfluidic channel design is ~600, far below the limit of ~2000 beyond which turbulent flow occurs 32. The thicknesses of the top and bottom channel walls have been minimized to accommodate the high-NA objective lens, which typically has a working distance of less than 1 mm (see Supplementary Fig. S4(b)).
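As a quick plausibility check on the laminar-flow claim, the Reynolds number can be estimated from the hydraulic diameter of the rectangular imaging section. This is our own back-of-the-envelope sketch, assuming water at room temperature; the simple estimate lands in the same laminar regime as the ~600 quoted in the text, well below the ~2000 turbulence threshold.

```python
def reynolds_rect(width_um, height_um, speed_m_s,
                  density_kg_m3=1000.0, viscosity_pa_s=1e-3):
    """Reynolds number of a rectangular channel via the hydraulic
    diameter D_h = 2wh/(w+h); water properties are assumed."""
    d_h_m = 2.0 * width_um * height_um / (width_um + height_um) * 1e-6
    return density_kg_m3 * speed_m_s * d_h_m / viscosity_pa_s

# Imaging section: 45 um x 50 um cross-section at ~10 m/s
re = reynolds_rect(45, 50, 10)
print(f"Re ~ {re:.0f} (well below the ~2000 turbulence threshold)")
```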
Preparation of mammalian cell lines: MIHA, BEL-7402 and HeLa cells. The BEL-7402, HeLa, and MIHA cells were cultured on 100 mm cell culture dishes (Corning) in DMEM-HG supplemented with 10% fetal bovine serum (FBS), 100 U/ml penicillin and 100 µg/ml streptomycin. Sodium pyruvate was added to the culture media when culturing MIHA cells. Cells were grown in a humidified incubator at 37 °C and 5% CO2. After reaching confluence, cells in the culture dish were trypsinized into suspension. Trypsin was removed by centrifuging the cell sample at 250 g for 5 minutes. Glutaraldehyde was added to resuspend and fix the cells. The glutaraldehyde was then removed by centrifuging the cell sample at 250 g for 5 minutes. Phosphate-buffered saline (PBS) was added to resuspend the cells, which were then loaded into the microfluidic channel for the ATOM experiments.
Preparation of cell lines (THP-1) and human whole blood samples. THP-1, a human monocytic cell line obtained from patients with acute monocytic leukemia, was cultured in Roswell Park Memorial Institute (RPMI) 1640 medium (Gibco) supplemented with 10% fetal bovine serum (Hyclone, Thermo Scientific), penicillin-streptomycin (Gibco) and 2 mM GlutaMAX (Gibco) at 37 °C, 95% humidity and 5% CO2 until confluence was achieved. Optimal conditions were maintained until the cells were used for the experiment. Fresh blood (3 millilitres) was collected from the median cubital vein of a healthy donor and kept at 2–8 °C in an ethylenediaminetetraacetic acid (EDTA) anticoagulated evacuated tube (Greiner Bio-One). Blood was drawn at least eight hours prior to the experiment.
"year": 2014,
"sha1": "ea7953ee9dbd61867fc89bab74fdfb0f2daeb5ed",
"oa_license": "CCBYNCND",
"oa_url": "https://www.nature.com/articles/srep03656.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "212f69c410d02376cd0fba6b4afbcc3f3251151c",
"s2fieldsofstudy": [
"Physics",
"Biology"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine",
"Physics"
]
} |
Assessing Diet as a Modifiable Risk Factor for Pesticide Exposure
The effects of pesticides on the general population, largely as a result of dietary exposure, are unclear. Adopting an organic diet appears to be an obvious solution for reducing dietary pesticide exposure and this is supported by biomonitoring studies in children. However, results of research into the effects of organic diets on pesticide exposure are difficult to interpret in light of the many complexities. Therefore future studies must be carefully designed. While biomonitoring can account for differences in overall exposure it cannot necessarily attribute the source. Due diligence must be given to appropriate selection of participants, target pesticides and analytical methods to ensure that the data generated will be both scientifically rigorous and clinically useful, while minimising the costs and difficulties associated with biomonitoring studies. Study design must also consider confounders such as the unpredictable nature of chemicals and inter- and intra-individual differences in exposure and other factors that might influence susceptibility to disease. Currently the most useful measures are non-specific urinary metabolites that measure a range of organophosphate and synthetic pyrethroid insecticides. These pesticides are in common use, frequently detected in population studies and may provide a broader overview of the impact of an organic diet on pesticide exposure than pesticide-specific metabolites. More population based studies are needed for comparative purposes and improvements in analytical methods are required before many other compounds can be considered for assessment.
Introduction
Pesticides are manufactured to be toxic to living organisms, but are not necessarily specific to their target species. They are deliberately released into the environment where their ubiquitous presence may endanger other living species, including humans [1]. It is unsurprising then that numerous published studies suggest a link between pesticide exposure and human health risks such as cancer [2], and adverse genotoxic, neurologic, and reproductive effects [3]. Obvious health risks may be due to acute poisoning or high level occupational exposure, while there is the possibility of more subtle health risks through general exposure via the food chain.
Globally around three million accidental or intentional pesticide poisonings occur each year resulting in around 260,000 deaths [4]. The vast majority occur in developing countries, which use only a fraction (20%) of the world's agrochemicals [5]. However, these figures do not take into account chronic or cumulative health effects or effects arising from exposure during critical periods of development [6].
Occupational Exposure to Pesticides
There are numerous examples cited in the scientific literature regarding occupational exposure to pesticides and adverse health outcomes such as various cancers, Parkinson's and other chronic diseases, as well as potential adverse effects on mental health and reproduction [7][8][9][10][11][12].
The United States Agricultural Health Study (AHS), a large prospective cohort study of pesticide applicators and their spouses, identified links between various pesticides and cancer incidence (lung, pancreatic, colon and rectal, all lymphohaematopoietic cancers, leukaemia, non-Hodgkin lymphoma, multiple myeloma, breast, bladder, prostate, brain, melanoma and childhood cancers). Outside the AHS, epidemiologic evidence remains limited with respect to many of these associations, but animal toxicity data support the biological plausibility of these relationships [7].
In addition to cancer, pesticides have been associated with a number of other health effects in animals and humans. The AHS has investigated conditions as diverse as Parkinson's Disease, depression, diabetes, respiratory disorders and other health conditions [7]. Links to Parkinson's Disease have been supported by experimental studies indicating that high exposure to paraquat (a herbicide) and maneb (a fungicide) may increase the risk in genetically susceptible individuals [8,9], highlighting concerns of potential epigenetic effects (gene-environment interactions). The fact that a number of pesticides directly target the nervous system as their mechanism of toxicity may raise additional concerns. Studies in pesticide workers have also demonstrated effects on neurotransmitters which may be involved in mood regulation [10,11].
The risks of pesticide exposure at occupational levels may be of specific concern during critical developmental periods. Despite safeguards for pregnant farm workers, current measures may not be sufficient to protect the developing foetus from endocrine disrupting agents. For example a Danish study has reported that sons of women occupationally exposed to pesticides have a statistically significant decrease in penile length and a trend towards reduced testicular volume and serum concentrations of testosterone [12].
There are many uncertainties however, due to the limited number of research studies conducted on specific exposure-outcome relationships and methodological limitations such as crude exposure measurements, small sample sizes, and limited knowledge and control of potential confounders [13]. Furthermore, the sheer number of chemicals and variety of chemical actions involved, and the attribution of some adverse health effects to pesticides that are no longer in current-use in many regions make it extremely difficult to generalise about the health effects of pesticides.
Other Sources of Pesticide Exposure
While occupational exposure is likely to incur a greater risk, all humans are exposed to pesticides whether they be ingested from food sources, absorbed through the skin or inhaled from polluted air.
Dietary exposure from the ingestion of contaminated food (more so than water or other beverages) is considered to be the primary route of exposure for most pesticides although additional environmental exposure is also likely [14][15][16]. Food can be contaminated by pesticides used during production, transport or storage. While diet has been shown to be a significant predictor of pesticide exposure in all age groups, specific foods and food choices must also be considered as some foods may have a greater impact on exposure levels [17]. Food consumption patterns will vary among and within individuals for economic, seasonal, regional, cultural, ethical and personal reasons.
Non-dietary pesticide exposure can occur as a result of residential pesticide use (home, garden, pets, personal insect repellents), proximity to agricultural areas, time spent in parks and recreational areas or fumigated buildings, or hand to mouth activity (generally higher in young children). With the exception of residential use, most of these factors are outside the reasonable control of the average individual, whereas diet represents a modifiable risk factor that may be under individual control.
Monitoring Pesticide Exposure
Biological monitoring techniques (biomonitoring) assess pesticide levels in human tissue, and provide a measure of an individual's total exposure to pesticides through dietary and non-dietary sources. Unfortunately biomonitoring data is not available for all pesticide classes or for all regions. Some European countries [15], the CDC in the USA [18], and Health Canada [19] have conducted large scale biomonitoring studies assessing pesticide exposure in the general population, although such studies have not been conducted in countries such as Australia or most developing countries. These studies frequently detect pesticides or their metabolites in human tissue. The mean levels are almost always lower than those found in occupationally exposed individuals although those in the higher range can be similar to some occupationally exposed workers [15]. As the half-lives of modern pesticides are very short (often <24 hours), these data suggest that the population is continually and routinely exposed to pesticides [15].
Non-Occupational Exposure to Pesticides
Identifying health risks in non-occupationally exposed populations is difficult, as pesticide exposure is diffuse and the source of exposure (dietary, environmental, etc.) is not always clear. Of particular concern is the increased risk associated with pesticide exposure during critical periods of development, such as preconception, prenatal life and early childhood. For example, high urinary levels of atrazine, alachlor and diazinon have been associated with abnormal sperm [20]. In the US, a significant association has been reported between the months of increased risk of a birth defect and increased levels of pesticides (especially atrazine) in surface water [21]. Higher prenatal urinary concentrations of dialkyl phosphates (DAPs, which are metabolites of organophosphate pesticides [OPs]) have been associated with poorer intellectual development in 7-year-old children [22], and elevated levels of DAPs have also been associated with an increased prevalence of ADHD in children aged 8 to 15 years [23]. These DAP concentrations were within the range of levels measured in the general U.S. population, although the reasons for these elevated levels are not clear.
In recent times there has been considerable media attention around obesity and insulin resistance. These are common conditions which can influence other disease processes and impact on quality of life and mortality. In rats, chronic administration of low concentrations of atrazine has been shown to increase body weight, intra-abdominal fat and insulin resistance and to reduce basal metabolic rate. While obesity and insulin resistance were further exacerbated by a high-fat diet, they also occurred without any change in food intake or physical activity level [24]. Adding to these concerns, data from the Centers for Disease Control and Prevention (CDC) show an apparent overlap between areas of heavy atrazine use in the USA and the prevalence of obesity (BMI > 30) [25].
As the primary route of exposure for most pesticides is via the ingestion of food produced through conventional agricultural practices [14][15][16], such findings, in addition to uncertainty about the evaluation of pesticides [26], raise concern amongst some consumers.
Organic Diets as an Intervention
Organic farming practices do not use synthetic pesticides and data from food residue surveys confirm that organic produce has reduced pesticide levels [27][28][29]. This provides a rationale that organic food consumption should result in reduced pesticide exposure. However, studies describing reduced risk of developing pesticide related diseases, or improved health outcomes as a result of consuming organic foods are lacking. Despite a lack of supporting research, adopting an 'organic diet' appears to be an obvious way to reduce pesticide exposure for a growing number of concerned individuals. Some believe that 'on the basis of the precautionary principle alone, choosing organic food appears to be an entirely rational decision' [30]. Assessing the efficacy of such an intervention, however, is not a simple feat.
In a recent attempt initiated by the Food Standards Authority (FSA) in the UK to investigate the 'putative health effects' of organic food, studies that were primarily concerned with chemical residues (including pesticides) were specifically excluded. The focus on nutrition-related health effects yielded only twelve relevant articles [31]. In one study the consumption of organic dairy products within the context of a general organic diet was associated with a 36% lower risk of infantile eczema in children who exclusively consumed organic dairy products (i.e., weaned on organic milk, cheese and yoghurts and who were breastfed by mothers eating organic dairy products). However, the authors attributed these results to increased levels of omega-3 fatty acids and conjugated linoleic acid in organic compared to conventional milk and the likely reduction in pesticide exposure was not discussed [32].
Understanding the health impact of dietary pesticide exposure, and therefore any potential benefit of reducing exposure by adopting an organic diet, begins with determining actual exposure levels. While monitoring of pesticide residues in food may provide a useful insight into the potential sources of dietary exposure, biomonitoring is more likely to correlate with adverse health effects as it directly measures the amount of a pesticide (or its metabolites or degradation products) in human tissue. However, it should be stated that high levels of these markers have not been consistently associated with adverse health effects [15].
Regarding organic consumers, only a few published reports in children have utilised biomonitoring [33][34][35]. These have examined urinary metabolites of OP and synthetic pyrethroid insecticides (PYRs). Dietary exposure to other classes of pesticides, such as carbamate insecticides, fungicides and herbicides, has not been formally evaluated in organic consumers.
In 2003 Curl et al. reported that children who consumed organic fruit, vegetables and juice had a mean total urinary dimethyl alkylphosphate metabolite (DMAP) concentration (a non-specific measure of OP exposure) that was approximately nine times lower than children consuming conventional foods. This corresponded to a reduction in the children's exposure levels from above to below the U.S. Environmental Protection Agency's guidelines, shifting exposures from a range of uncertain risk to negligible risk [33].
The results of the Curl study are supported by the Children's Pesticide Exposure Study (CPES) [34] which also reported reductions in urinary pesticide metabolites in children consuming organic produce. This study included measurements of select urinary OP and PYR metabolites in 23 children aged 3-11 years over a 15-consecutive-day sampling period. Children consumed their usual conventional diet with an organic intervention phase for five consecutive days, at which time organic food items were substituted for most of the children's conventional diet (fruit, vegetables, juice, wheat and corn products). The organic intervention resulted in a decrease in certain pesticide-specific OP metabolites to non-detectable or close to non-detectable levels [14] and a reduction of approximately 50% in PYR exposure [35]. These results confirm that consumption of organic produce appears to provide a relatively simple way to reduce children's exposure, especially to OP pesticides [14,33], and that this occurs relatively quickly. However, drawing any general conclusions from these biomonitoring studies to support the hypothesis that organic diets reduce pesticide exposure will require further studies in different population groups.
Complexities and Limitations of Biomonitoring
Designing biomonitoring studies to assess the efficacy of an organic diet in reducing pesticide exposure must be carefully devised. Appropriate study design requires consideration of the limitations of biomonitoring and the complexities involved in contextualising the results. This includes careful selection of which pesticides will be targeted and the most appropriate analytical methods to use. Ideally the methods chosen should be able to attribute the source of exposure to dietary intake. Study design must also consider confounders such as the unpredictable nature of chemicals and individual genetic and environmental factors that might influence susceptibility to disease. Contextualising the results also requires consideration of the data available for comparison purposes.
Study Design
The population of interest needs to be clearly defined with careful consideration of factors that may affect exposure and susceptibility. Consideration should be given to whether the study will use an organic intervention or observe free-living organic consumers eating their usual diet. As free-living consumers are unlikely to consume a 100% organic diet, detailed survey instruments are needed to record dietary intake and quantify the level of organic consumption. Other sources of exposure and potential confounding factors such as age, health status, medication use and other factors that may influence the metabolism of, and susceptibility to, pesticides must also be determined.
Targeted pesticides need to be selected based upon the likelihood of dietary exposure in the general public. This is likely to vary from region to region and over time depending on prevalence of use but may be informed by studies which monitor pesticide residues in food. Seasonal and regional variations can be anticipated depending on the time of the year and the nature of pest infestations. Priority might be given to assessment of pesticides with high prevalence of use, those with the greatest public health concerns or to newer chemicals so that potential human health risks can be more accurately determined. Once chosen the most appropriate methods of testing these pesticides must be considered.
Analytical Methods
There is an increasing amount of biomonitoring data available and Barr [36] and Aprea et al. [1] have previously described biomonitoring methods for assessing pesticide exposure. However choosing and conducting such tests requires a high level of technical expertise. Scientists do not always agree on the most appropriate methods for assessing pesticide exposure, limits of detection may vary and data collection and analysis can be laborious, expensive and place unacceptable demands on study participants.
In humans, most current-use pesticides are excreted within 24 hours as the parent pesticide, a mercapturic acid detoxification product or a metabolite [36], so samples containing rapidly degrading analytes must be collected and processed promptly. While many herbicide compounds are poorly metabolised and are excreted largely unchanged in the urine [1], the parent compounds of many other pesticides are metabolised very rapidly, making their measurement impractical. As a result, metabolites are often used as surrogate markers for exposure. Several methods have been reported which measure intact OP pesticides in blood, serum or plasma; however, for the most part these tests are used for detecting acute poisoning or very high levels of exposure [37]. Similarly, occupational exposure to PYRs can be assessed by monitoring intact PYRs, yet due to their rapid elimination, unmodified compounds are less sensitive indicators of exposure than the metabolites [1] and thus may not be suitable for detecting differences in dietary exposure.
Determining the most appropriate tests is not always straightforward. For example, according to Barr [36], atrazine mercapturate is often tested but may not be the best marker for atrazine exposure; Barr instead recommends analysis of dealkylated or hydroxylated metabolites of triazine herbicides, mercapturic acids of the dealkylation products, or free atrazine. Determining suitable limits of detection (LODs) may also be open to conjecture. As it is difficult to confidently determine the levels of pesticide exposure that are safe under all circumstances [26], the LODs should be set as low as possible. Lower detection limits will yield more samples with detectable metabolites, and lower LODs will more accurately reveal differences in dietary exposure between consumers of organic and conventional food [15].
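To make the LOD trade-off concrete, the sketch below shows how detection frequency in the same set of urinary measurements collapses as the LOD rises. This is a simplified illustration, not from the studies cited; the LOD/√2 substitution for non-detects is one common but debated convention for left-censored data.

```python
import math

def summarize_below_lod(values_ug_per_L, lod):
    """Summarise urinary metabolite measurements against a given limit of
    detection (LOD). Non-detects are substituted with LOD/sqrt(2), one
    common (and debated) convention for left-censored data."""
    detects = [v for v in values_ug_per_L if v >= lod]
    substituted = [v if v >= lod else lod / math.sqrt(2) for v in values_ug_per_L]
    detection_frequency = len(detects) / len(values_ug_per_L)
    mean_estimate = sum(substituted) / len(substituted)
    return detection_frequency, mean_estimate

# The same samples "lose" detects as the LOD rises, which is why lower LODs
# better resolve differences between organic and conventional consumers.
samples = [0.05, 0.12, 0.30, 0.02, 0.45, 0.08]  # ug/L, illustrative values
freq_low, _ = summarize_below_lod(samples, lod=0.01)   # all six detected
freq_high, _ = summarize_below_lod(samples, lod=0.10)  # only three detected
```

A study using the higher LOD would report a 50% detection frequency where one using the lower LOD reports 100%, which is exactly the heterogeneity that complicates cross-study comparison.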
Defining appropriate sampling times and collecting representative samples can be difficult, and pure standards for measuring pesticide metabolites are not always available [1]. Analytical methods often involve gas chromatography (GC) or high-performance liquid chromatography (HPLC) following sample preparation and extraction, requiring specialised equipment and technicians. The choice of analytical methods must also consider practicalities such as financial constraints and the potential burden on study participants and researchers. This may include whether invasive methods are required to collect samples and the timing and costs of such procedures.
Attributing the Source of Exposure
Although useful in determining an individual's total exposure (dietary and non-dietary) to pesticides, biomonitoring methods are not always able to attribute the source of exposure, especially when metabolites are used. Metabolites may reflect exposure to more than one parent pesticide, may be markers for substances other than pesticides, or may be preformed or result from biological processes in the body.
Some metabolites are markers for specific pesticides while others are representative of a number of pesticides. Urinary 3-phenoxybenzoic acid (3PBA) is a non-specific metabolite common to a number of PYRs, and trans-3-(2,2-dichlorovinyl)-2,2-dimethylcyclopropane carboxylic acid (trans-DCCA) is common to permethrin, cypermethrin, and cyfluthrin [14]. With OPs, the most commonly reported method is to measure DAP metabolites, which are formed in the human body during the metabolism of OP pesticides and excreted in urine [18]. The data generated can provide a cumulative index of exposure to most members of the OP class but are not pesticide-specific. Each DAP metabolite is associated with a number of OPs, and many OPs can form more than one of these metabolites [37]. Specific biomarkers for individual pesticides in this class are also available, such as 3,5,6-trichloro-2-pyridinol (TCPy) for chlorpyrifos and malathion dicarboxylic acid (MDA) for malathion. However, urinary DAPs may provide a more useful assessment of exposure to the class in general, and this may be advantageous in providing an overview of the impact of an organic diet. If the purpose, however, is to determine the effect of the diet on individual pesticides, then pesticide-specific markers may be more useful.
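The cumulative index described above is typically computed by converting each measured DAP concentration to a molar basis and summing. A minimal sketch, in which the metabolite set and the approximate molecular weights are assumptions for illustration rather than authoritative constants:

```python
# Approximate molecular weights (g/mol) for the six urinary DAP metabolites.
# These values and the metabolite list are assumptions for illustration.
DAP_MW = {
    "DMP": 126.05, "DMTP": 142.11, "DMDTP": 158.18,  # dimethyl DAPs
    "DEP": 154.10, "DETP": 170.17, "DEDTP": 186.24,  # diethyl DAPs
}

def total_dap_nmol_per_L(conc_ug_per_L):
    """Sum individual DAP concentrations (ug/L) on a molar basis (nmol/L),
    giving a cumulative, non-pesticide-specific index of OP exposure."""
    return sum(ug_L / DAP_MW[m] * 1000.0 for m, ug_L in conc_ug_per_L.items())

total = total_dap_nmol_per_L({"DMP": 12.6, "DMTP": 3.1, "DEP": 0.9})
```

Summing on a molar rather than mass basis matters because a given OP can yield several DAPs of different weights; the molar total is the quantity usually reported as the class-level exposure index.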
Some metabolites utilised in biomonitoring studies are not entirely specific to pesticides. For instance, 1-naphthol (1NAP), a metabolite of carbaryl, is also a marker for the ubiquitous naphthalene (found in mothballs, petroleum and cigarette smoke) [15]. A further consideration is the potential contribution from preformed metabolites. This can occur with OP metabolites such as DAPs, which may be detected as a result of the metabolism of ingested parent compounds but may also result from the ingestion of preformed metabolites present on food as a result of environmental degradation. In addition, sources of inorganic phosphate may be alkylated within the body to form dimethylphosphate (DMP), and this may also contribute to urinary DAP levels [15].
Unpredictable Nature of Chemicals
When attempting to understand the impact of individual pesticides on human health, consideration must be given not only to the specific chemicals targeted in the biomonitoring study but also to the potential impact of other chemicals and risk factors for disease progression. We have previously described some of these factors, including the effects of exposure to mixtures of chemicals; the dose, duration and timing of exposure; the complexities and lack of complete safety assessment data; as well as variations in the exposure, metabolism and susceptibility of different individuals [38]. Humans are exposed to a unique and ever-changing cocktail of chemicals. This cocktail may include pesticides and other chemicals acquired through ingestion, inhalation or dermal absorption. Some of these substances may have similar mechanisms of action or may interact via toxicokinetic (absorption, distribution, metabolism and excretion) or toxicodynamic (binding, interaction and induction of toxicity) processes to produce additive, antagonistic or synergistic effects [39]. For instance, the synergistic effects of mixtures of sub-lethal doses of OPs in juvenile salmon are sufficient to cause anticholinesterase intoxication and death [40].
Although most pesticide formulations are mixtures of chemicals, most safety assessment methods focus on individual 'active' chemicals rather than 'whole formulations' including their adjuvants, metabolites and degradation products. A case in point is glyphosate. The adverse effects associated with glyphosate appear to be more dependent on the formulation tested than on the glyphosate concentration [41,42]. It is possible that these effects may be more appropriately attributed to other compounds in the formulation or to the environmental breakdown product of glyphosate, aminomethyl phosphonic acid (AMPA) [42-44]. Similarly, recent studies have reported that prenatal exposure to piperonyl butoxide (a PYR synergist) is negatively associated with neurodevelopment [45].
Depending on the disease process in question, non-chemical risk factors such as physical inactivity or nutrient deficiencies or excesses, as well as differences in genetic susceptibility, may also confound results. Using biomonitoring data from a few select targeted chemicals is unlikely to provide sufficient data to deal with the inherent complexities of disease progression.
Individual Factors
In addition to chemicals behaving in potentially unpredictable ways, an individual's response to chemicals may also be unpredictable. Although a 100-fold safety factor is taken into account when establishing acceptable daily intakes for humans [26], this must overcome differences between experimental and real-world conditions, as well as account for individual variability in exposure and metabolism. There are currently insufficient data from epidemiological studies to confidently predict the levels of pesticides (either the parent compounds, metabolites, degradation products or adjuvants) that might be associated with human health risks, and such levels are likely to be highly variable. For example, levels of 3PBA are known to be influenced by factors such as tobacco use, time spent gardening and the use of cytochrome P450-inhibiting medications [17]. This may in part reflect differences in exposure but also differences in the metabolism of pesticides, and these are likely to vary not only between different individuals but also within the same person over the course of their lifetime. A progressive increase in DMAP metabolites at 6, 12 and 24 months of age has been positively associated with the number of children's daily servings of fruits and vegetables [46]. At the same time, the activity of enzymes which play an important role in the detoxification of many pesticides is known to be impaired in children [47].
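The 100-fold safety factor mentioned above is conventionally applied as two 10-fold factors when deriving an acceptable daily intake (ADI) from a no-observed-adverse-effect level (NOAEL). A minimal sketch of that textbook calculation:

```python
def acceptable_daily_intake(noael_mg_per_kg_day,
                            interspecies_factor=10.0,
                            intraspecies_factor=10.0):
    """Derive an ADI from a NOAEL using the conventional 100-fold safety
    factor: 10x for animal-to-human extrapolation and 10x for variability
    between humans. A textbook formula, shown here for illustration."""
    return noael_mg_per_kg_day / (interspecies_factor * intraspecies_factor)

# e.g. a NOAEL of 1.0 mg/kg bw/day yields an ADI of 0.01 mg/kg bw/day
adi = acceptable_daily_intake(1.0)
```

The point made in the text is that these two fixed factors must absorb every source of real-world variability, including the exposure and metabolic differences described above, which is a strong assumption.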
To a limited extent biomonitoring can account for poorly understood processes such as bioaccumulation, excretion and metabolism [37], but demonstrating pesticide exposure at a specific time point does not provide information about the lifetime exposure to pesticides or the increased risk of exposure during critical periods of development (such as in utero). Assessing risk relies not only on determining individual exposure but must also consider variations in an individual's ability to metabolise, detoxify and excrete mixtures of chemicals as well as their susceptibility to disease which may vary with genetic, developmental, physiological and environmental conditions.
Comparative Data
Once measurements have been collected, the results must be carefully interpreted. Where possible, results from organic consumers may be compared with reference values for the general population, although such studies do not enquire about levels of organic food consumption [1].
OPs are frequently detected in general population studies [15,18,19] and have been assessed in comparative studies of children consuming organic and conventional diets [14,33]. In the CPES the pesticide-specific OP metabolites TCPy and MDA had the highest frequency of detection representing chlorpyrifos and malathion exposure from the conventional diet [14]. PYR metabolites have also been detected with varying frequency in population studies [15,18,19] and differences have been observed in children when switched from a conventional diet to an organic diet for 5 days [48].
For both general population and organic consumption studies there may be significant heterogeneity with regard to the pesticides chosen for monitoring and the methods and LODs used. Methods and detection performance have improved over time, especially for OP metabolites, so care must be taken when attempting to compare results from older studies [15].
Conclusions
The effects of pesticides on the general population, largely as a result of dietary exposure, are unclear. If the precautionary principle is applied, then adopting an organic diet appears to be an obvious solution for reducing pesticide exposure, and this is supported by biomonitoring studies in children. However, the few attempts that have been made to determine the efficacy of such an intervention are difficult to interpret in light of the many complexities.
Biomonitoring cannot be considered an end in itself but simply a tool for integrated health assessment; an intermediate step for establishing a link between exposure and adverse health effects. The limitations of biomonitoring and the complexities involved in interpreting the results must be acknowledged. As previously mentioned, both dietary and non-dietary sources of exposure can vary among individuals. While biomonitoring can account for differences in overall exposure it cannot necessarily attribute the source. Due diligence must be given to appropriate study design and selection of analytical methods to ensure that the data generated will be both scientifically rigorous and clinically useful, while minimising the costs and difficulties associated with biomonitoring studies. Currently the most useful candidates for assessment are urinary DAPs and urinary 3PBA and trans-DCCA. These assessments provide evidence of exposure to OP and PYR insecticides respectively and as they are in common use they can provide a broader overview of the impact of an organic diet on pesticide exposure than pesticide-specific metabolites. As previously discussed these metabolites have frequently been detected in population studies and have been assessed in children consuming organic foods providing useful data for comparison. However the contribution of preformed metabolites in the diet must be considered.
Depending on the prevalence of use in the region of interest, specific metabolites for chlorpyrifos (TCPy) and malathion (MDA) may also be incorporated. In addition, select herbicides may be useful, although comparative data from similar studies are not currently available and the frequency of detection in population studies tends to be relatively low.
Despite its limitations, biomonitoring remains the most useful surrogate indicator of pesticide exposure currently available. The above discussion highlights some of the many issues encountered when selecting biomonitoring methods for assessment of pesticide exposure. It provides an outline of some of the complexities encountered when attempting to ascertain the efficacy of an organic diet intervention in reducing such exposure.
A biosynthetic platform for antimalarial drug discovery.
Advances in synthetic biology have enabled production of a variety of compounds using bacteria as a vehicle for complex compound biosynthesis. Violacein, a naturally occurring indole pigment with antibiotic properties, can be biosynthetically engineered in Escherichia coli expressing its non-native synthesis pathway. To explore whether this synthetic biosynthesis platform could be used for drug discovery, here we have screened bacterially-derived violacein against the main causative agent of human malaria, Plasmodium falciparum. We show the antiparasitic activity of bacterially-derived violacein against the P. falciparum 3D7 laboratory reference strain as well as drug-sensitive and drug-resistant patient isolates, confirming the potential utility of this drug as an antimalarial. We then screen a biosynthetic series of violacein derivatives against P. falciparum growth. The demonstrated varied activity of each derivative against asexual parasite growth points to potential for further development of violacein as an antimalarial. Towards defining its mode of action, we show that biosynthetic violacein affects the parasite actin cytoskeleton, resulting in an accumulation of actin signal that is independent of actin polymerization. This activity points to a target that modulates actin behaviour in the cell, either in terms of its regulation or its folding. More broadly, our data show that bacterial synthetic biosynthesis could become a suitable platform for antimalarial drug discovery, with potential applications in future high-throughput drug screening with otherwise chemically-intractable natural products.
future drug development. However, commercial violacein samples can only be obtained through laborious purification from bacteria (Chromobacterium sp. [7,8] or Janthinobacterium sp. [9]) because of the complexity of its highly aromatic structure (Fig. 1a). Purification from these bacteria requires specialized equipment and high-level biosafety containment, since these bacteria can themselves cause deadly infections (10). As such, commercially available violacein is extremely expensive. Alternative strategies of violacein synthesis are being explored, in particular the use of synthetic biology to engineer industrial bacterial species that can express non-native violacein. Several groups, including ours (11), have been successful in implementing a five-gene violacein biosynthetic pathway (vioABCDE) in Escherichia coli or other heterologous hosts (12-14), providing a route for robust, in-house, and inexpensive compound production.
We have previously extended the success of this biosynthetic pathway by generating combinations of 68 new violacein and deoxyviolacein analogs. These combinations are achieved by feeding various tryptophan substrates to recombinant E. coli expressing the violacein biosynthetic pathway or via introduction of an in vivo chlorination step, the tryptophan 7-halogenase RebH from the rebeccamycin biosynthetic pathway (13, 15-17). This biosynthetic approach is able to produce large quantities of compound derivatives using simple, inexpensive, and nonhazardous bacteria compared with native-producing strains, in a sustainable and flexible approach.
Here, we set out to explore whether the use of this biosynthetic system could be developed as a route to antimalarial compound production and testing by measuring the activity of derivatives on the growth of P. falciparum sexual and asexual parasites. We have confirmed the viability of the system, ensuring there is no background antiparasitic activity in bacterial solvent extracts lacking violacein. We then tested the biosynthetic violacein extract from E. coli and confirmed its 50% inhibitory concentration (IC50), which is in agreement with a commercial violacein standard and previous studies (14). Finally, as well as using this approach to explore the mode of action of violacein, we show that extracts representing a diverse series of biosynthetically derived variants show various effects on parasite growth, with 16 of the 28 compound mixtures inhibiting growth to a greater level than the parent violacein molecule. Indeed, one purified compound, 7-chloroviolacein, exhibits an ~20% higher inhibition activity than the underivatized violacein compound. The screening approach used in this study suggests that biosynthetic systems may therefore provide an as-yet untapped resource for screening complex compounds and optimizing them for antimalarial discovery.
RESULTS
Violacein expressed using synthetic operons kills P. falciparum 3D7 parasites. Previous work has shown that violacein is able to kill asexual Plasmodium parasites in vitro and in a mouse model in vivo (9, 13). Violacein cytotoxicity is highly dependent on cell type, ranging from an IC50 value of around 2.5 μM in HepG2 cells to up to 12 μM in fibroblasts, and potential erythrocytic rupture occurs at concentrations above 10 μM (13, 18). Taking this into consideration, we used concentrations of 2 μM or less of violacein to explore the growth inhibition of P. falciparum asexual stages, noting no phenotypic effect on erythrocyte morphology at the highest final concentration (see Fig. S1 in the supplemental material). Our biosynthetic system for violacein production requires chloramphenicol drug pressure, which is known to affect parasite viability (19). We first set out to ensure that the presence of this known antibiotic did not affect parasite growth. Extract from bacteria lacking the violacein-producing enzymes but grown under chloramphenicol pressure (i.e., background) did not affect parasite viability (see Fig. S2 in the supplemental material). This gave us confidence that extracts from biosynthetically modified E. coli would report only on the activity of a drug produced and not on background chloramphenicol contamination. To test this, we compared the activity of a commercial violacein standard (Vio-Sigma) with violacein derived from bacterial solvent extracts from E. coli biosynthesis (Vio-Biosyn) on wild-type P. falciparum 3D7 growth, using a well-established asexual growth inhibition assay. No difference in the IC50 values between the two violacein samples was seen (Fig. 1b and c). We further tested the two violacein samples on sexual parasites by measuring exflagellation (20) and saw no difference in the IC50 values of around 1.7 μM (see Fig. S3 in the supplemental material), but complete IC50 curves could not be generated without going above the cytotoxic threshold of 2.5 μM. These data demonstrate that solvent-extracted violacein from E. coli (Vio-Biosyn) is active and that its production provides a suitable platform for developing and testing potential antimalarial compounds.
Biosynthetic violacein extract kills both artemisinin-resistant and -sensitive field isolate parasites. To explore whether violacein has utility for addressing emerging ACT drug resistance in the field, we tested the efficacy of Vio-Sigma and Vio-Biosyn on two parasite clinical isolates derived from the Greater Mekong Subregion, where ACT resistance is concentrated. Both clinical isolates have been phenotypically characterized in the clinic setting, showing either treatment failure or success, adapted to culture, and genotyped for the C580Y Kelch-13 resistance marker (21) that is known to correlate with sensitivity to artemisinin-based drugs. Both the artemisinin-sensitive isolate (ANL1; Kelch13 wild-type) and the artemisinin-resistant isolate (APL5G; Kelch13 C580Y) were sensitive to Vio-Biosyn and Vio-Sigma, with similar IC50 values (Fig. 2a to d). Activity against artemisinin resistance provides support for the development of the violacein scaffold to address emerging drug resistance in the field.
Violacein derivatives show potent antimalarial activity. To explore whether bacterial biosynthesis could be used further to generate compound derivatives, increasing the throughput of complex molecule testing, we obtained extracts from 28 bacterial strains, each modulated to synthesize a mixture of violacein analogues (Fig. 3a; see Fig. S4 in the supplemental material) (22). The bacterial extracts were produced by feeding corresponding tryptophan substrates as described previously (14), and violacein concentrations in the extracts were calibrated against a violacein standard. Asexual growth assays were, again, carried out by testing each extract at the IC50 of biosynthetic violacein, 0.50 μM. We saw a large variation of inhibition of parasite growth, with 8 compound mixtures exhibiting >95% inhibition, while 12 others showed a decreased effect on parasite growth (Fig. 3b). As a proof-of-concept, one of the more active extracts was used to purify the violacein derivative 7-chloroviolacein (Fig. 3c). 7-chloroviolacein exhibited an IC50 value of 0.42 μM. This purified derivative is at least equipotent to the parent violacein compound (Fig. 3d). Given the speed and low cost of extracting these violacein analogs and purifying them directly from bacteria, these data, therefore, suggest an entirely new approach to complex compound drug testing for antimalarial discovery and optimization.
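Single-dose screening of this kind reduces each extract to a percent-inhibition value normalised between plate controls. A minimal sketch of that normalisation, where the control layout (DMSO-only as full growth, cycloheximide as kill control) is modelled on the growth assay described in Materials and Methods rather than taken from the authors' analysis code:

```python
def percent_inhibition(signal, neg_ctrl, pos_ctrl):
    """Normalise a raw fluorescence growth reading to percent inhibition
    between plate controls: neg_ctrl = vehicle-only well (full growth),
    pos_ctrl = kill-control well (no growth). Control layout is an
    assumption modelled on the SYBR-green assay described in the text."""
    return 100.0 * (neg_ctrl - signal) / (neg_ctrl - pos_ctrl)

# An extract well reading halfway between the controls scores 50% inhibition.
score = percent_inhibition(signal=5000.0, neg_ctrl=10000.0, pos_ctrl=0.0)
```

Ranking the 28 extract scores computed this way against the parent violacein score at the same 0.50 μM dose is what identifies the more active mixtures worth purifying.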
Biosynthetic violacein affects actin dynamics in the cell but does not affect polymerization in vitro. The mode of action of violacein against P. falciparum parasites has not previously been characterized. The treatment of a variety of human-derived cell lines with violacein shows a range of responses, with one patient-derived glioblastoma cell line having compromised motility and increased rounding up, attributed to a disruption of the filamentous actin network (23). Towards exploring the phenotype associated with its activity, we performed flow cytometry and immunofluorescence assays (IFAs) to observe any changes in the parasite under biosynthetic violacein treatment. A 3D7-derived parasite line expressing a constitutive cytoplasmic green fluorescent protein (sfGFP) marker was labeled with the DNA marker 4′,6-diamidino-2-phenylindole (DAPI) and a monoclonal antibody that preferentially recognizes filamentous actin (6, 24) to explore overall parasite morphology in the cytoplasm, nucleus, and actin cytoskeleton, respectively. Parasites were then treated with either a negative control, dimethyl sulfoxide (DMSO), or a positive control, the actin filament-stabilizing compound jasplakinolide (JAS), as well as Vio-Biosyn. Parasites were checked by flow cytometry for any differences in overall signal (see Fig. S5 in the supplemental material). A low actin-positive signal was seen with DMSO treatment, as expected given the predominance of short, transient filaments and globular actin in asexual parasites (25). However, the intensities of actin labeling following JAS and violacein treatment both showed marked increases compared with that of DMSO (Fig. S5).
To explore the nature of the flow cytometry changes in actin intensity following violacein treatment, IFAs were undertaken. In the DMSO-treated parasites, the GFP signal is spread throughout the cytoplasm along with a clearly defined nucleus, as expected (Fig. 4a). The actin signal is diffuse with a low background (Fig. 4a). Following JAS treatment, actin filaments are known to be stabilized (25), producing an expected concentrated overall actin signal (Fig. 4b), indicative of high local concentrations of polymerized actin. Parasites treated with Vio-Biosyn also gave a much higher actin signal than untreated controls, although it was distinct from that following JAS treatment (Fig. 4c). The concentrated signal from Vio-Biosyn was broader across the cell and less focused in localized regions of the cell periphery. This matched the overall intensity of signal seen by flow cytometry, measured relative to the sfGFP signal as a control for parasite size (Fig. 4d). In the DMSO-treated control, the diffuse actin signal is 3% of the total GFP signal. This increases to 27% upon JAS treatment, which is indicative of an increased number of filaments, whereas parasites treated with Vio-Biosyn reach a mean average signal of 98%, representing a huge increase in actin accumulation in the cell. This broad concentration of actin signal would be indicative of either massively increased filament nucleation or actin aggregation, as caused by actin misfolding. To test whether Vio-Biosyn directly affects actin filament formation (as JAS does), both drugs were assayed using a pyrene-labeled actin assembly assay that was used previously to test compound derivatives for actin activity (25). No effect on actin polymerization was seen with Vio-Biosyn compared with either JAS (filament nucleating) or the monomer-stabilizing drug latrunculin B (Fig. 4e). Together, these observations suggest Vio-Biosyn does not directly interact with actin.
It is, therefore, possible that Vio-Biosyn interacts with actin indirectly, such as via the known actin-binding partners in the parasite cell (26) or via an alternative pathway involved in actin folding, which would give rise to actin aggregation within the cell.
DISCUSSION
The emergence of resistance to front-line artemisinin-based drug treatments for malaria is a major threat to global health. As such, new antimalarial treatments are in urgent demand. Here, we tested violacein, a compound with known antibacterial, antitumorigenic, and antiparasitic activity, against P. falciparum parasites, validating its potential utility for antimalarial drug development. We showed that biosynthetically
produced violacein was as effective as commercially available violacein, with a mode of action that affects the actin cytoskeleton of the parasite. We also successfully tested 28 violacein analog mixtures using a high-throughput growth assay on asexual parasites, suggesting this method of biosynthetic production is a suitable platform for antimalarial discovery and optimization.
Previous work has shown that violacein is capable of killing lab-derived chloroquine-resistant P. falciparum parasites (14). Here, we showed that patient-sourced clinical isolates, sensitive or resistant to artemisinin, could equally be killed by both commercial and biosynthetic violacein, with similar IC50 values. Our results show violacein inhibits asexual parasite growth with an IC50 at least an order of magnitude more potent than its activity against fibroblasts and lymphocytes found in circulation in the blood (0.5 μM versus >10 μM) (13, 18). Furthermore, although full IC50 curves could not be generated, the compounds both inhibit development of the sexual stages of the parasite life cycle, with an IC50 of around 1.7 μM (Fig. S3). Any compound identified using this assay with an IC50 of less than 2.0 μM is considered for further compound development (27). Combined, these data suggest violacein is a potential drug that could be developed to antagonize resistance in the field and target both asexual- and sexual-stage parasites. While the derivative library tested consisted of mixtures of violacein analogs, it is encouraging to see some of these compound mixtures have considerably more potent antimalarial activity than violacein itself. Critically, when we tested a purified compound (7-chloroviolacein), we saw at least equipotent activity of the derivative compound, illustrating the potential of biosynthetic production of antimalarial compounds for rapid screening and rational drug optimization.
Interestingly, violacein-treated parasites have cytoskeletal deformities that suggested a disruption to actin modulation within the parasite. Given violacein has no effect on actin polymerization kinetics in vitro, it is possible that the phenotype observed is as a result of actin aggregation in the cell, which could be a side effect of actin misfolding. P. falciparum requires actin as an essential part of its motor complex and for other processes in the cell (9). Unlike most proteins, actin requires a dedicated chaperonin system to fold into its native state (28). Of note, this entire pathway is highly upregulated in artemisinin-resistant parasite isolates (29) and would constitute a well-justified target for antagonizing drug resistance in the field. Further work in testing the effects of violacein on actin folding or modulation are clearly required to explore whether this is the target for the drug. Ultimately, the ability of violacein to affect such a major pathway as actin dynamics in the cell, as well as kill drug-resistant parasites, provides an encouraging outlook for its therapeutic development.
In summary, our data show that a bacterial biosynthetic platform for creating compounds and their derivatives is suitable for testing for antimalarial drug development. As the need for novel therapeutics increases and the interest in natural compounds, often complex in nature, grows, we hope to use this approach to develop novel chemical scaffolds in a high-throughput manner toward finding the next generation of antimalarials.
MATERIALS AND METHODS
Generation and extraction of violacein and derivatives. Violacein and derivatives were generated and extracted as previously described (30). Briefly, E. coli JM109 strain (Promega) was transformed with the violacein pathway plasmid [pTU2S(VioAE)-b-VioBCD] and grown overnight before being inoculated into LB broth and grown until the optical density at 600 nm (OD600) reached 0.5. These cultures were then supplemented with either tryptophan or a synthetic tryptophan analog at 0.04% (wt/vol) and grown at 25°C for up to 65 h before being pelleted at 4,000 rpm. The cell pellet was then resuspended with 1/10th volume of ethanol to extract violacein, followed by centrifugation to separate the ethanol supernatant containing violacein extract from cell debris. The supernatant was then dried in vacuo and stored at -20°C for long-term storage or reconstituted in DMSO for growth inhibition assays. Concentrations of violacein in the bacterial solvent extracts were calibrated against a commercial violacein standard (Sigma) based on absorbance at 575 nm. Compound mixtures used in the growth inhibition assay consist of mixtures of violacein derivatives (Fig. S4), as described previously (14).
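The calibration against a commercial standard can be sketched as a single-point Beer-Lambert comparison of A575 readings. This is an illustrative assumption of the calculation, not the authors' exact procedure, and it presumes both readings fall in the linear absorbance range:

```python
def calibrate_violacein_uM(a575_extract, a575_standard, conc_standard_uM):
    """Estimate the violacein concentration in a solvent extract from its
    absorbance at 575 nm, relative to a standard of known concentration.
    Assumes linear Beer-Lambert behaviour and that co-extracted material
    contributes no absorbance at 575 nm (illustrative simplification)."""
    return conc_standard_uM * a575_extract / a575_standard

# An extract absorbing half as strongly as a 100 uM standard is ~50 uM.
conc = calibrate_violacein_uM(a575_extract=0.25, a575_standard=0.5,
                              conc_standard_uM=100.0)
```

Calibrating every extract to a common concentration scale is what makes the later fixed-dose (0.50 μM) comparison across the 28 derivative mixtures meaningful.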
Plasmodium falciparum growth inhibition assays. P. falciparum parasite lines 3D7, ANL1, and APL5G were used for the growth inhibition assays (GIAs). A 3D7 sfGFP line was used for immunofluorescence assays (31). All parasite lines were cultured in complete RPMI (RPMI-HEPES culture media [Sigma] supplemented with 0.05 g/liter hypoxanthine, 0.025 g/liter gentamicin, and 5 g/liter Albumax II [ThermoScientific]) and maintained at 1% to 5% parasitemia and 4% hematocrit. For the GIA, 96-well plates were predispensed with a serial dilution of the compound and normalized to 1% DMSO. Double-synchronized ring-stage parasites (1% parasitemia, 2% hematocrit, complete medium, and sorbitol synchronized at ring stage at 0 h and 48 h) were added to the wells to a total volume of 101 µl. Cultures were incubated for 72 h at 37°C in a gas mixture of 90% N2, 5% O2, and 5% CO2. Red blood cells were lysed through a freeze-thaw step at -20°C, and parasites were resuspended and lysed with 100 µl lysis buffer (20 mM Tris-HCl [pH 7.5], 5 mM EDTA, 0.008% saponin, and 0.8% Triton X-100) containing 0.2% SYBR green and incubated for 1 h at room temperature. SYBR green fluorescence (excitation, 485 nm; emission, 535 nm) was measured using a Tecan infinite M200 Pro instrument. Data shown are the mean average of 3 biological replicates (± SEM), each of which is a mean average of 3 technical replicates (unless stated otherwise), and are normalized to a positive control (cycloheximide) and a negative control (DMSO only). IC50 values were calculated using GraphPad Prism version 8.0.
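The normalization and IC50 steps above can be sketched in Python. This is not the authors' analysis code: the percent-growth formula follows the stated controls (DMSO = full growth, cycloheximide = full kill), and the log-linear interpolation is a simple stand-in for the sigmoidal fit performed in GraphPad Prism; all numbers are illustrative.

```python
import math

def percent_growth(signal, neg_ctrl, pos_ctrl):
    """Normalize raw SYBR green fluorescence to percent growth:
    DMSO-only wells (neg_ctrl) define 100% growth and cycloheximide
    wells (pos_ctrl) define 0% growth."""
    return 100.0 * (signal - pos_ctrl) / (neg_ctrl - pos_ctrl)

def ic50_by_interpolation(concs, growths):
    """Estimate the concentration giving 50% growth by log-linear
    interpolation between the two bracketing doses of a serial
    dilution (a crude substitute for a four-parameter logistic fit)."""
    pairs = sorted(zip(concs, growths))
    for (c1, g1), (c2, g2) in zip(pairs, pairs[1:]):
        if (g1 - 50.0) * (g2 - 50.0) <= 0:  # 50% is crossed in this interval
            t = (50.0 - g1) / (g2 - g1)
            return math.exp(math.log(c1) + t * (math.log(c2) - math.log(c1)))
    raise ValueError("50% growth is not bracketed by the dose series")
```

For a three-point series of 0.1, 1 and 10 µM giving 90%, 50% and 10% growth, the interpolated IC50 is 1 µM.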
Immunofluorescence assays. A total of 100 µl of a mixed-stage sfGFP parasite line (5% parasitemia and 2% hematocrit) was incubated for 24 h with 2 µM of the compound of interest. At 0 h and 6 h, parasites were fixed with 4% paraformaldehyde and 2% glutaraldehyde (Electron Microscopy Sciences) and incubated on a roller for 20 min at room temperature (RT), before being pelleted at 3,000 rpm and washed three times in 100 µl 1× phosphate-buffered saline (PBS). The cells were subsequently permeabilized in 0.1% Triton X-100 (Sigma) for 10 min at RT before being pelleted and washed three times in PBS as before. Cells were blocked in 3% bovine serum albumin (BSA) in PBS for 1 h at RT on a roller before being incubated with a primary antibody (1:500 mouse anti-actin 5H3 [14]) for 1 h at RT. Cells were washed three times with PBS before the addition of the secondary antibody (1:1,000 anti-mouse Alexa 647 conjugated) for 1 h at RT. Cells were washed three times in PBS and resuspended in 100 µl PBS with 0.05% DAPI. Cells were diluted 30-fold and loaded onto polylysine-coated coverslips (ibidi) before being imaged. Imaging was performed on a Nikon Ti-E microscope using a 100× Plan Apo 1.4 NA oil lens objective with DAPI, fluorescein isothiocyanate (FITC), and Cy5 specific filter sets. Image stacks were captured 3 µm on either side of the focal plane, with a z-step of 0.2 µm. Image analysis was conducted on raw image data sets in ImageJ, calculating a ratio between Alexa Fluor 647 and FITC by measuring the mean signal intensity in a defined area of 88 nm². A total of 62 images were captured for each sample from 2 wells from 2 biological repeats. Images shown in Fig. 4 were deconvolved in Icy using the EpiDemic Plugin, and a maximum intensity projection was made in ImageJ.
SUPPLEMENTAL MATERIAL
Supplemental material is available online only. SUPPLEMENTAL FILE 1, PDF file, 0.9 MB.
ACKNOWLEDGMENTS
The 3D7 sfGFP line was provided by Kathrin Witmer (Imperial College London). We acknowledge Thomas Blake for his help conducting the flow cytometry and Alisje Churchyard for conducting the exflagellation inhibition assay.
The data sets used during the study are available from the corresponding author(s) upon reasonable request.
M.D.W. planned and performed the experiments and wrote the manuscript under the guidance of J.B. H.-E.L. created the violacein constructs and extracted violacein and its derivatives under the guidance of P.S.F. All authors read, edited, and approved the final manuscript.
We declare that we have no competing interests.
"year": 2020,
"sha1": "3a568105559c0ccf5a40bf2c406ca7d7659d898c",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1128/aac.02129-19",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "6813257724001be4138cc83d752f51576e049d91",
"s2fieldsofstudy": [
"Medicine",
"Chemistry"
],
"extfieldsofstudy": [
"Biology",
"Chemistry",
"Medicine"
]
} |
Phytohormone Production Profiles in Trichoderma Species and Their Relationship to Wheat Plant Responses to Water Stress
The production of eight phytohormones by Trichoderma species is described, as well as the 1-aminocyclopropane-1-carboxylic acid (ACC) deaminase (ACCD) activity, which diverts the ethylene biosynthetic pathway in plants. The use of the Trichoderma strains T. virens T49, T. longibrachiatum T68, T. spirale T75 and T. harzianum T115 served to demonstrate the diverse production of the phytohormones gibberellins (GA) GA1 and GA4, abscisic acid (ABA), salicylic acid (SA), auxin (indole-3-acetic acid: IAA) and the cytokinins (CK) dihydrozeatin (DHZ), isopentenyladenine (iP) and trans-zeatin (tZ) in this genus. Such production is dependent on strain and/or culture medium. These four strains showed different degrees of wheat root colonization. Fresh and dry weights, conductance, H2O2 content and antioxidant activities such as superoxide dismutase, peroxidase and catalase were analyzed, under optimal irrigation and water stress conditions, in 30-day-old wheat plants treated with four-day-old Trichoderma cultures obtained from potato dextrose broth (PDB) and PDB-tryptophan (Trp). The application of Trichoderma PDB cultures to wheat plants could be linked to the plants’ ability to adapt the antioxidant machinery and to tolerate water stress. Plants treated with PDB cultures of T49 and T115 had the significantly highest weights under water stress. Compared to controls, treatments with strains T68 and T75, with constrained GA1 and GA4 production, resulted in smaller plants regardless of fungal growth medium and irrigation regime.
Introduction
The establishment of microbial symbioses to promote plant growth and nutrient acquisition by beneficial microbes has been correlated with the biosynthesis of plant growth regulators and phytohormones [1,2]. It is well established that, in addition to inducing host hormone synthesis, pathogenic and symbiotic fungi can also modulate the hormonal network of plants, as they themselves produce small amounts of phytohormones to serve their purpose. Jasmonic acid (JA), auxin (indole-3-acetic acid: IAA), cytokinins (CK), gibberellins (GA), ethylene (ET), abscisic acid (ABA) and salicylic acid (SA) of fungal origin are involved in favoring tissue colonization and nutrient uptake, by means of plant development control and activation of signaling events during biotic and abiotic stresses [3]. Thus, auxin- and GA-producing endophytic fungi can enhance host plant growth and alleviate adverse effects of abiotic stress, opening up the possibility of their use to improve agricultural productivity under adverse soil conditions [4]. The same applies to Trichoderma, a fungal biocontrol agent that includes species well known for their ability to produce fungal and oomycete cell wall degrading enzymes [5], scavenge reactive oxygen species (ROS) and cause plant cell wall hydrolysis [6,7] to facilitate the endophytic colonization of root tissues in competition with pathogens [8]. Selected Trichoderma species also produce effector molecules capable of triggering signaling cascades in the plant [9][10][11] that lead to the induction of systemic resistance to biotic and abiotic stresses as well as growth promotion [12,13]. In this regard, rhizosphere-competent species have evolved to manipulate root development, plant immunity and stress tolerance by producing phytohormones [14]. It has been shown that T. atroviride, T. virens and T. harzianum produce IAA, T. parareesei produces SA and Trichoderma sp.
produces IAA and GA without any inducers, although it is known that their production levels depend on the amount of tryptophan (Trp) present in the medium [15][16][17][18][19][20][21]. T. asperellum also releases ABA together with IAA and GA into the culture medium, and its application to cucumber promoted seedling growth and alleviated the effects of salt stress [22]. The production of IAA by T. harzianum has been related to the biocontrol of anthracnose disease and improved growth of sorghum plants [21]. The application of T. parareesei T6 or T. harzianum T34 to tomato seeds also improved the tolerance of plants to salt stress and enhanced the growth when plants grew under this adverse condition [23,24]. T. afroharzianum (formerly T. harzianum) T22 improved tolerance of tomato seedlings to water deficit [25]. The colonization of cocoa seedlings by T. hamatum DIS 219b enhanced seedling growth, altered gene expression, and delayed the onset of the cocoa drought response in leaves [26]. Similarly, T. atroviride ID20G inoculation of seeds ameliorated drought stress-induced damages by improving antioxidant defense in maize seedlings [27]. The same happened with the improved drought tolerance observed in rice genotypes inoculated with T. harzianum Th-56, in which the antioxidant machinery was activated in a dose-dependent manner [28].
IAA is the phytohormone that regulates the plant's development of the primary and lateral roots [29]. It has been described in other fungi, such as Serendipita indica, that plant IAA levels have little or no effect on the beneficial fungus-mediated growth promotion, as the plant is very sensitive to changes in IAA concentration and a slight increase in this phytohormone, rather than stimulating growth, can limit it [30]. It is well known that additional Trichoderma metabolites and proteins are involved in the regulation of IAA signals in the plant, leading to root hair growth and increased root mass development [31][32][33]. This evidence seems to indicate that, rather than a major function in root morphogenesis, IAA and the other phytohormones of fungal origin play a role in interconnecting plant development and defense responses as a component of the complex Trichoderma-regulated phytohormone networking in plants [12,13].
Adding further complexity, ethylene (ET) is a phytohormone that regulates plant growth, development and senescence, and it is well established that low ET concentrations in the root zone correspond to higher shoot growth [34]; therefore, limiting ET levels serves to increase agricultural production. A strategy followed by many rhizospheric microorganisms to favor plants consists of reducing the concentration of 1-aminocyclopropane-1-carboxylic acid (ACC), the precursor molecule of ET, through the ability to produce the enzyme ACC deaminase (ACCD). Trichoderma strains have the capacity to produce ACCD. This is the case of T. longibrachiatum TL-6, involved in promoting wheat growth and enhancing plant tolerance to salt stress [35]; T. asperelloides (formerly T. asperellum) T203, which by regulating endogenous ACC levels stimulates root elongation of cucumber [36]; and T. asperellum MAP1, which enhanced wheat plant tolerance to waterlogging stress [37].
Wheat is one of the most important crops in the world, providing one-fifth of the proteins and calories in the human diet, and its extensive production is often subjected to non-irrigation conditions [38]. ROS are key players in the complex signaling network of plant responses to drought stress, so it is essential to maintain ROS at non-toxic levels in a delicate balancing act between ROS production, which involves ROS-generating enzymes and the unavoidable generation of ROS during basic cellular metabolism, and ROS-scavenging pathways [39]. The application of Trichoderma to wheat triggers systemic defense pathways [40] and seems to be a good choice to minimize damage caused by abiotic stresses [35,37], also limiting environmental pollution. There is sufficient evidence to consider that Trichoderma association can help plants in sustaining drought stress by increasing: (i) the expression of antioxidative enzymes that alleviate the damage caused by the accumulation of ROS, modulating the balance of the plant's phytohormones [25,41]; (ii) the absorption surface, which leads the plant to improve water-use efficiency [33]; and (iii) the synthesis of phytohormones and phytohormonal analogues to promote plant performance.
In the present work, we have used four Trichoderma strains of four different species representing the genetic diversity of the genus, in which we analyzed their capacity for wheat root colonization, measured ACCD activity, and determined production levels of the phytohormones GA 1 , GA 4 , ABA, SA, IAA and the CK dihydrozeatin (DHZ), isopentenyladenine (iP) and trans-zeatin (tZ) in medium supplemented or not with Trp. We then analyzed the ability of PDB and PDB-Trp cultures of these strains to favor wheat plants in their growth and their adaptation to grow under water stress. In addition, activities related to the reduction of ROS levels in plants were measured as an indication of good performance of plants inoculated with Trichoderma strains.
Molecular Characterization of Trichoderma Strains
The identity of the four soil-isolated Trichoderma strains used in this study was confirmed at the species level by analysis of the sequences of the ITS1-ITS4 region and a fragment ca. 600 bp in length of the tef1α gene. They had sequences identical to those of ex-type strains or representative species available in databases. They were identified as T. virens T49, T. longibrachiatum T68, T. spirale T75 and T. harzianum T115, and the accession numbers of their sequences in the GenBank are shown in Table 1. These strains showed significant differences in growth and degree of sporulation after culturing in three different culture media (Table 2). Strain T49 showed the highest growth rate when cultivated on PDA, PDA-Trp and MEA, while T75 grew the slowest on these media. The growth differences observed for T68 between PDA and PDA-Trp indicate that the addition of Trp negatively affected the growth of this strain. The effect of the culture medium was also observed on the degree of sporulation, with T75 being the strain that showed significantly the lowest values on PDA or PDA-Trp, and T49 the highest on MEA.
Differences in Colonization of Roots of Wheat Seedlings by Trichoderma Strains
In order to perform a comparative analysis of the wheat root colonization ability among the four Trichoderma strains, we determined the proportion of fungal DNA vs. plant DNA from qPCR data in 10-day-old seedling roots at 42 h after fungal inoculation. As shown in Table 3, strains T49, T75 and T115 colonized the roots, with the highest rates for T49 and T75 (p < 0.05), while T68 showed no colonization.
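The paper does not state the exact quantification formula, but a common way to express a fungal-to-plant DNA proportion from qPCR threshold cycles is efficiency-corrected relative quantification, sketched below. The function name, default efficiencies and example Ct values are assumptions for illustration, not the authors' pipeline.

```python
def fungal_to_plant_dna_ratio(ct_fungal, ct_plant, eff_fungal=2.0, eff_plant=2.0):
    """Relative amount of fungal vs. plant template from qPCR Ct values.
    Template amount is proportional to E**(-Ct); with perfect
    amplification efficiency (E = 2) the ratio reduces to
    2**(Ct_plant - Ct_fungal)."""
    return (eff_fungal ** -ct_fungal) / (eff_plant ** -ct_plant)
```

For instance, a fungal target crossing threshold two cycles earlier than the plant target (Ct 20 vs. 22) corresponds to a four-fold excess of fungal template.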
Differences in ACCD Activity and Phytohormonal Profiles in Trichoderma Strains
The ACCD activity was calculated for all four strains after growing them for four days in synthetic minimal medium. Strain T115 showed significantly higher specific ACCD activity (1.8 mmol of α-ketobutyrate per mg of protein) compared to that of the other three strains (0.09 to 0.20 mmol α-ketobutyrate per mg of protein) (Tukey test at p < 0.05), which showed no significant differences between them.
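Specific ACCD activity values like those above are typically derived from a colorimetric α-ketobutyrate readout, a linear standard curve, and the total protein of the extract. A minimal sketch under those assumptions (the slope/intercept and inputs are hypothetical, chosen only to illustrate the unit arithmetic of mmol α-ketobutyrate per mg protein):

```python
def specific_accd_activity(absorbance, slope, intercept, protein_mg):
    """Convert a colorimetric readout to specific ACCD activity:
    alpha-ketobutyrate (mmol) is read off a linear standard curve
    (absorbance = slope * mmol + intercept), then divided by the
    total protein (mg) in the assayed extract."""
    alpha_ketobutyrate_mmol = (absorbance - intercept) / slope
    return alpha_ketobutyrate_mmol / protein_mg
```

With an illustrative curve (slope 0.5 per mmol, intercept 0.05), an absorbance of 0.95 over 1 mg protein yields 1.8 mmol α-ketobutyrate per mg protein.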
The production of eight phytohormones by the four Trichoderma strains was measured in PDB medium with and without Trp. Since the PDB medium is composed of plant material, uninoculated media were used as controls. Under these two conditions, a comparative analysis of the production profiles of GA 4 , GA 1 , ABA, SA, IAA, DHZ, iP and tZ by strains T49, T68, T75 and T115 is shown in Figure 1. When compared to the control conditions of each culture medium in a one-way ANOVA, not all Trichoderma strains exhibited production of the eight phytohormones in both media. There was an effect of the variable "strain" (p < 0.001) and variable "medium" (p < 0.001), and their combination on the production of seven of the phytohormones investigated, according to a two-way ANOVA (p < 0.001).
Particularly, the CK iP was the only one that showed no significant effect for the combination of the two variables. T. virens T49 significantly exhibited the highest levels of GA 4 in both media, being much higher in PDB-Trp (p < 0.001). Considering the phytohormone production profiles as a whole, T. longibrachiatum T68 did not stand out for any of them. In addition, GA 4 levels were lower for this strain than those detected in its controls, which would be indicative of the metabolization of this molecule present in the medium. Similar behavior was observed only for GA 1 with strains T75 and T115 in PDB-Trp medium. T. spirale T75 showed the highest production levels of SA, IAA and CK. The biosynthesis of SA and CK by this strain did not respond to the addition of Trp to the culture medium. However, strain T75 in PDB-Trp increased IAA levels by about 80 times. On the contrary, strain T49 showed higher levels of IAA production in PDB than in PDB-Trp. T. harzianum T115 was the strain in which the levels of GA 1 and ABA production in PDB were significantly the highest. Data are calculated from n = 3 replicates per strain and culture medium. For each phytohormone and culture medium, different letters above the bars indicate significant differences according to one-way analysis of variance (ANOVA) followed by Tukey's test at the 0.05 alpha-level of confidence. For each phytohormone, significant effects were determined by a two-way ANOVA for Trichoderma strain, culture medium and the combination strain per culture medium (***: p < 0.001; **: p < 0.01; ns: no statistical differences).
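The one-way ANOVA used throughout these comparisons partitions variance between and within treatment groups. A minimal pure-Python sketch of the F statistic (the Tukey post hoc step that assigns the letter groupings, and the software actually used by the authors, are not reproduced here):

```python
def one_way_anova_f(groups):
    """F statistic of a one-way ANOVA over a list of replicate lists:
    between-group mean square divided by within-group mean square."""
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    df_between = len(groups) - 1
    df_within = n_total - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)
```

A large F (compared against the F distribution with the two degrees of freedom shown) indicates that at least one group mean differs at the chosen alpha level.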
The Effect of Trichoderma Strains on Wheat Plants under Drought Stress
Greenhouse-grown wheat plants were used to evaluate the effect of PDB and PDB-Trp cultures of the four Trichoderma strains when they were applied to the plant substrate. Plant fresh and dry weight and conductance parameters were measured after 30 days of growth under optimal irrigation and under 1/3 of the watering applied during the third and fourth weeks (water stress) (Tables 4 and 5). Representative phenotypes observed in wheat plants treated with the different Trichoderma cultures and irrigation regimes are shown in Figure 2. In a broad sense, the one-way ANOVA results showed significant effects of the factors "strain" (p < 0.001), "culture medium" (p < 0.001) and "stress" (p < 0.001). Two different plant responses were observed for Trichoderma cultures from both PDB and PDB-Trp media. Plants treated with T68 and T75 PDB cultures showed significantly the lowest fresh and dry weights compared to the other treatments under optimal irrigation conditions (Table 4). On the other hand, under water stress conditions, plants treated with PDB cultures of T49 and T115 had significantly the highest weights, with an increase of ca. 100%. Regarding conductance, wheat plants showed significantly higher values with T49 and T115 PDB cultures under optimal irrigation conditions, whereas the control presented a significant reduction compared to any of the four Trichoderma strains applied under water stress.

Table 4. Effect of 4-day PDB cultures of Trichoderma strains on mean fresh and dry weight and conductance values of 30-day-old wheat plants grown in the greenhouse under optimal irrigation and water stress conditions (1/3 of the watering in the last two weeks).

In a similar way, wheat plants treated with T68 and T75 PDB-Trp cultures had significantly lower fresh and dry weight values than control plants or those treated with T49 and T115 PDB-Trp cultures under optimal irrigation conditions (Table 5).
However, no differences in weight and conductance values under water stress were observed among treatments with the sole exception of those plants treated with T68 or T75 PDB-Trp cultures, which gave significantly lower dry weight values (Table 5). A two-way ANOVA for dry weight data showed significance for "culture medium" × "stress" (p < 0.05); and for conductance data, all combinations ("strain" × "culture medium", "strain" × "stress", "culture medium" × "stress"; p < 0.001) were significant. Additionally, a three-way ANOVA showed significance for the combination "strain" × "culture medium" × "stress" for fresh and dry weight (p < 0.05) and for conductance (p < 0.01).
Endogenous H 2 O 2 content in wheat leaf from 30-day-old plants did not show variation in unstressed plants, neither in the control nor with Trichoderma regardless of the presence of Trp in the medium to grow the fungus (Figure 3). Water stress control plants from the PDB condition showed a significant increase in H 2 O 2 content compared to those challenged with Trichoderma. However, PDB-Trp condition stressed control plants showed lower levels of H 2 O 2 than plants treated with Trichoderma cultures, which in turn were significantly different, with the highest levels for the T115 treatment. The two-way ANOVA showed significance of the three considered factors and their pairwise combinations (p < 0.01) with the only exception of "culture medium" × "stress", while the three-way ANOVA was significant for the three factors together (p < 0.01).
The values calculated for three antioxidant enzymes in wheat plants are shown in Figure 4. Compared to the respective controls, in the absence of water stress and when Trp was not added to the fungal culture medium, the application of Trichoderma cultures resulted in a significant SOD activity increase, except for the T115 treatment. Trichoderma application significantly decreased POD without changing CAT activity. Unstressed plants treated with Trichoderma PDB-Trp cultures increased SOD activity compared to the control. However, POD activity only decreased significantly in T115-treated plants, with CAT activity being lower than that of the control in all cases. Differences were also observed among Trichoderma treatments, as the decrease in CAT activity was significantly lower in plants challenged with T68. Under the condition of water stress, no significant differences were detected in SOD, POD and CAT activities of plants subjected to any of the PDB-Trp treatments compared to the control. However, in the absence of Trp in Trichoderma cultures, the stressed plants responded to Trichoderma by lowering POD and CAT activities, and only strains T49 and T68 were able to significantly raise SOD activity. ANOVA values indicate that the factor "strain" had significance in the three tested enzymatic activities of plants (p < 0.001), while the factor "stress" had significance in SOD (p < 0.001) and CAT (p < 0.05), and the factor "culture medium" in SOD (p < 0.001) and POD (p < 0.05).
Figure 4. Antioxidant enzyme activities in wheat plants, calculated from replicates for each strain, culture medium and plant growth condition. For each fungal culture medium and plant growth condition, different letters above the bars indicate significant differences according to one-way analysis of variance (ANOVA) followed by Tukey's test at the 0.05 alpha-level of confidence. SOD: superoxide dismutase, POD: peroxidase, and CAT: catalase.
Discussion
Trichoderma is a very complex fungal genus that includes nearly 400 species [42]. The practical application of Trichoderma needs a correct molecular characterization, as the biocontrol, biostimulation and other beneficial effects on plants should not be considered in broad terms, but at the level of strain. We have included in our study four strains belonging to four phylogenetically distant species to explore their behavior regarding how they promote growth and favor water-stressed wheat plants. Modern Trichoderma taxonomy suggests the analysis of three DNA barcodes (ITS, tef1 and rpb2) [42], and we have achieved unambiguous species identification by sequencing the ITS1-ITS4 region and a ca. 600 bp fragment of the tef1α gene. Two of the four strains identified belong to T. harzianum and T. virens, two species widely used as biocontrol agents in commercial practice [43,44]. The other two strains belong to species less used in biological control, although there is recent work on the efficacy of T. spirale and T. longibrachiatum in the control of plant pathogenic fungi [45,46].
Our study has been focused on the abilities of these strains to stimulate the growth of wheat plants and alleviate them from water stress. Root colonization ability is often a criterion for selecting Trichoderma strains beneficial to plants [12], and we found that wheat was not a host for strain T68. An important and little studied aspect of Trichoderma is the capacity to produce phytohormones that may be involved in plant interactions. Depending on the strain of Trichoderma and the composition of the culture medium, with or without addition of Trp, or the combination of both, the production of phytohormones was affected. The observed differences in phytohormone production could be affected by the degree of growth of the different strains. However, strain T68 showed good growth and sporulation performances on PDA and PDA-Trp and did not stand out in the production of any of the eight phytohormones tested in PDB and PDB-Trp. PDB has been used because it is a common medium for Trichoderma growth and because the production of IAA has been described in this medium supplemented with Trp [16]. As PDB contains molecules of plant origin, the uninoculated medium has been used as a control, with and without Trp addition, to subtract possible phytohormones already present in the fungal culture media. Although Trp-containing media seem to favor the production of IAA, this is not a rule, as strain T49 showed a behavior contrary to the other three Trichoderma strains. T49 and T115 were the only strains that produced GA 4 and GA 1 , respectively, in medium not supplemented with Trp. However, the addition of Trp to the growth medium of the fungus induced GA 4 , but not GA 1 , production in both strains. Production of GA 3 has been described in T. harzianum, and accumulation of this phytohormone in combination with IAA has been related to plant growth promotion [15,19]. 
The production of gibberellic acid by Trichoderma also cooperates with IAA and ACCD in the modulation of defense responses in wheat seedlings [18]. Production of GA 1 and GA 4 has been described in other fungi, such as Phoma, Penicillium and Aspergillus, as plant growth promoters under stress conditions [47,48]. In our case, we have seen that the production profiles of GA 1 and GA 4 are antagonistic, and in the strains that produce them, T49 and T115, their biosynthesis seems to be compensated. The antagonistic regulation of GA and ABA in plants is well known [49], and this also occurs in Trichoderma for GA 4 and ABA production. However, this was not the case for GA 1 , as strain T115 simultaneously reached in PDB the highest levels of this phytohormone and ABA. Regarding CK, it has been described that their production in fungi is related to hyphal growth and branching, and their accumulation allows better adaptation to stress and colonization of the roots, although the effect on fungal growth occurs in a dose-dependent manner [3]. T. spirale T75 produced the highest amounts of the three CK analyzed, DHZ, iP and tZ, in the two media used, and this was accompanied by the significantly slowest growth on PDA and PDA-Trp. As seen in plants [1], this strain showed the typical IAA-CK antagonism when cultured in PDB. However, strain T75 showed the highest IAA and CK production values in PDB-Trp. It should be noted that the production of IAA by strain T75 in PDB-Trp was particularly high and that the trend in all strains was that the addition of Trp reduced the CK levels.
Trichoderma can manipulate the phytohormone regulatory network by decreasing the ET precursor ACC through ACCD activity [12,36]. The four Trichoderma strains exhibited ACCD activity, although strain T115 showed 20 times more activity than the other three under identical growth conditions in a synthetic medium. These results are also a consequence of working with strains from genetically very distant species, given the great diversity that exists within the Trichoderma genus [50]. Trichoderma ACCD has also been described as a mechanism in enhancing wheat tolerance to salt stress [35] and waterlogging [37]. Our study has included the application of Trichoderma to wheat plants to analyze the effect on growth and tolerance to water stress. The greenhouse assay was conducted using mycelium plus culture supernatant of Trichoderma to inoculate the substrate where wheat plants were grown, and it is therefore difficult to assess the role of Trichoderma phytohormones in wheat plant responses. Under optimal irrigation conditions, none of the treatments with Trichoderma appeared to promote the growth of wheat plants. Moreover, two of the strains, T68 and T75, performed worse than the PDB and PDB-Trp control plants (Figure 2). Perhaps the smaller size and weight of plants compared to their controls (Tables 4 and 5) may be explained by the lack of GA 1 and GA 4 production in these two Trichoderma strains. It is noteworthy that strain T75, which, as indicated above, produced the highest concentrations of IAA in PDB-Trp, did not promote plant growth, which would indicate that fungal IAA contributes to the total concentrations of this phytohormone but is not the major player in root development that plant IAA is. The high levels of SA and CK reached by this strain (Figure 1) could be the cause of the phenotype observed in T75-treated plants. Since strain T68 was unable to colonize the wheat root, it may be releasing some other metabolites that could limit the growth of the plant.
PDB cultures from strains T49 and T115, those producing the maximum amounts of GA4, and of GA1 together with the highest ACCD activity, respectively, were the ones that best increased plant tolerance to water stress, and were also the ones that provided the highest conductance and weight values in plants (Table 4). The importance of selecting a suitable strain of Trichoderma is a key point in this type of study, as it has been observed that the colonization of Arabidopsis, tomato and maize roots by T. virens Gv29.8 led to reduced growth of both roots and stems [7,51,52]. Nevertheless, plants treated with PDB cultures of strains T68 and T75 did not show increased growth but did show high conductance (Table 4) and a water stress tolerance phenotype compared with PDB control plants (Figure 2). The significant increases in conductance that we observed in plants from the Trichoderma PDB treatments compared to their control under water stress conditions agree with previous reports indicating that Trichoderma can ameliorate the conductance decline in drought-stressed plants [26,53].
Plants treated with Trichoderma PDB cultures under water stress conditions showed a significantly decreased H2O2 content compared to the control, although no differences were detected under optimal irrigation conditions. This result is in line with what has been described in maize treated with T. atroviride under drought stress [27]. In the present study, all Trichoderma strains were able to produce SA to a greater or lesser extent (Figure 2), this phytohormone being very important in the establishment of the plant oxidative burst in response to stress, but also in the upregulation of antioxidant metabolism [13]. The antioxidant level in the plants was analyzed by measuring SOD, POD and CAT activities. In a broad sense and as expected, Trichoderma increased the SOD antioxidant activity of the plants under water stress conditions. These results agree with those reported in stressed or infected tomato plants inoculated with Trichoderma [25,54]. As in maize inoculated with T. harzianum under salt stress [55], we also saw that Trichoderma application decreased POD and CAT activities under water deficit conditions. Considering the profiles observed for the three enzyme activities in wheat plants treated with Trichoderma PDB cultures, it seems that the effect of Trichoderma prevails over the stress condition in driving the plant's antioxidant machinery. The addition of Trp to Trichoderma cultures did not appear to modify the plant antioxidant enzyme profiles (upregulation of SOD and downregulation of CAT) under non-stressed conditions. However, stressed plants did not modify their antioxidant activity with respect to the control, and it seems that the Trp effect prevailed over the Trichoderma application. Finally, Trp appears to play a prominent role in the response of wheat plants to water stress, as the PDB-Trp control plants had higher weight and conductance values than the PDB control plants.
The phenotype of PDB-Trp control plants agrees with the collapse observed in tomato plants over-stimulated with NPK fertilization and Trichoderma under salt stress [24]. However, the phenotypes of the Trichoderma-treated plants did not appear to be greatly affected by Trp supplementation.
The production of the phytohormones GAs, ABA, SA, IAA and CKs by Trichoderma species is a strain-specific characteristic and depends on the composition of the culture medium. These differences are a factor to be considered when exploring the beneficial effects of Trichoderma on plants. In this way, the T. virens T49 and T. harzianum T115 cultures were the best performers in alleviating water stress in wheat plants, and it was precisely these two strains that exhibited GA4 and IAA, and GA1 and ABA production, respectively, in media not supplemented with Trp. The present work contributes to highlighting the role that the balance of phytohormone levels, to which Trichoderma contributes with its own production, plays in beneficial plant-Trichoderma interactions. In any case, the growth promotion and plant protection effects of Trichoderma are mechanisms with complex regulation that depend on other Trichoderma traits and not only on the production of phytohormones by this fungus. The results of this work are an example of the usefulness of Trichoderma strains in the protection of crop plants against abiotic stresses.
Trichoderma Strains
Four Trichoderma strains isolated from soil and representing different genotypes were used in this study: T. virens T49, T. longibrachiatum T68, T. spirale T75 and T. harzianum T115 (references of our collection, CIALE, University of Salamanca, Spain). Three of the four strains (T49, T68 and T75) had been included in a previous genetic diversity study and their ITS (internal transcribed spacer) 1 sequences were available [56]. Strains were routinely grown on potato dextrose agar (PDA, Difco Laboratories, Detroit, MI, USA) at 28 °C in the dark. For long-term storage, the strains were maintained at −80 °C in a 30% glycerol solution.
Assays of Trichoderma Growth and Sporulation
For the determination of fungal growth, 5-mm-diameter PDA plugs of the fungi were placed at the center of Petri dishes containing PDA, PDA-Trp or malt extract agar (MEA, Difco Laboratories Inc., Detroit, MI, USA) medium, the plates were incubated at 28 °C in the dark, and colony diameters were recorded after two days. After 10 days of incubation at 28 °C, fungal spores were harvested and counted as previously described [23]. For each strain and medium, four replicates were performed.
Molecular Characterization of Trichoderma Strains
DNA was obtained from mycelium collected from cultures in potato dextrose broth (PDB, Difco Laboratories Inc.) medium for 48 h as previously described [57]. The ITS regions of the nuclear rDNA gene cluster, including ITS1 and ITS2 and the 5.8S rDNA gene, and a fragment of the tef1α gene were amplified with the primer pairs ITS1/ITS4 and EF1-728F/tef1rev, respectively, as described previously [56,58].
PCR products were electrophoresed on 1% agarose gels, the amplicons were excised from the gels, and the DNA was purified and sequenced as previously described [58]. The sequences obtained were analyzed for homology against the NCBI database, using ex-type strains and taxonomically established isolates of Trichoderma as references. All sequences obtained in this study have been submitted to GenBank, and their accession numbers are indicated in Table 1.
Root Colonization Assay
The quantification of Trichoderma DNA in wheat roots was performed by quantitative PCR (qPCR) as previously described [6,20], with some modifications. Wheat roots were collected from 10-day-old seedlings cultured in 10-mL flasks containing 8 mL of liquid Murashige and Skoog medium (MS, Duchefa Biochemie BV, Haarlem, The Netherlands) supplemented with 1% sucrose and inoculated, or not (control), with 10⁵ conidial germlings mL⁻¹ of a Trichoderma strain. Three seedlings per flask were used. Trichoderma germlings were obtained from 15 h cultures in PDB at 28 °C and 200 rpm. At 42 h after fungal inoculation, the wheat roots were collected, washed with sterile water, homogenized under liquid nitrogen, and kept at −20 °C until DNA extraction. DNA was extracted using the Fast DNA Spin Kit for Soil (MP Biomedical LLC, Irvine, CA, USA). Four independent wheat-Trichoderma co-cultures were used for each fungal strain.
qPCR was performed with a Step One Plus thermocycler (Applied Biosystems, Foster City, CA, USA), using KAPA SYBR FAST (Biosystems, Buenos Aires, Argentina) and the previously described primer pairs Act-F/Act-R (5′-ATGGTATGGGTCAGAAGGA-3′ and 5′-ATGTCAACACGAGCAATGG-3′) [6] and Act-Fw/Act-Rw (5′-TGACCGTATGAGCAAGGAG-3′ and 5′-CCAGACACTGTACTTCCTC-3′) [40], which amplify a fragment of the actin gene from Trichoderma and wheat, respectively. Reaction mixtures, prepared in triplicate with 1:10 diluted DNA, and PCR conditions were as previously described [20]. Ct values were calculated, the amount of fungal DNA was estimated using standard curves, and the values were finally normalized to the amount of wheat DNA in the samples. Each sample was tested in quadruplicate.
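The normalization step can be sketched as follows. This is a minimal illustration, not the authors' actual script: the curve slopes and intercepts and the Ct values are hypothetical, and it assumes standard curves of the usual linear form Ct = slope · log10(DNA amount) + intercept.

```python
def dna_from_ct(ct, slope, intercept):
    """Estimate the DNA amount (ng) from a Ct value using a standard
    curve of the form Ct = slope * log10(ng DNA) + intercept."""
    return 10 ** ((ct - intercept) / slope)

def normalized_fungal_dna(ct_fungus, ct_wheat, curve_fungus, curve_wheat):
    """Amount of Trichoderma DNA relative to wheat DNA in one sample."""
    fungal = dna_from_ct(ct_fungus, *curve_fungus)
    wheat = dna_from_ct(ct_wheat, *curve_wheat)
    return fungal / wheat

# Hypothetical (slope, intercept) pairs; a slope near -3.32 corresponds
# to roughly 100% amplification efficiency.
curve_fungus = (-3.32, 21.0)
curve_wheat = (-3.32, 18.0)
ratio = normalized_fungal_dna(26.0, 20.0, curve_fungus, curve_wheat)
```

In practice each Ct would be the mean of the replicate wells, and the two curves would come from dilution series of pure Trichoderma and wheat DNA.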
ACCD Activity of Trichoderma Strains
The ACCD activity of the T49, T68, T75 and T115 strains was assayed as previously described [35,36], with some modifications. For each strain, 100 µL of spore suspension (1 × 10⁶ spores/mL) was inoculated into 10 mL of synthetic medium [59], and the cultures were grown at 28 °C and 180 rpm for 4 days. The mycelia were collected, resuspended in 2.5 mL of 0.1 M Tris buffer (pH 8.5) and homogenized for 1 min. Toluene (25 µL) was added to a 200 µL aliquot and vortexed for 30 s, and then 20 µL of 0.5 M ACC was added (Tris buffer was added in the control). The following steps, including the additions of HCl, 2,4-dinitrophenylhydrazine and NaOH, the centrifugations, and the incubation periods of the reactions, were as previously described [36]. ACCD activity was analyzed quantitatively by measuring the amount of α-ketobutyrate produced by the deamination of ACC. α-Ketobutyrate (10-200 µmol) was used for the standard curve, and absorbance was measured at 540 nm. ACCD activity was expressed as mmol α-ketobutyrate mg⁻¹ protein h⁻¹. The Bradford protein assay was used to measure the total protein concentration in the samples [60], using the Bio-Rad reagent (Promega Biotech Ibérica, Alcobendas, Madrid, Spain). Three independent replicate cultures were analyzed.
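The activity calculation amounts to converting the measured A540 to α-ketobutyrate via the standard curve and then dividing by the protein amount and incubation time. A minimal sketch, with hypothetical numbers (curve slope, blank and sample absorbances, protein amount, incubation time) and the result expressed in µmol:

```python
def akb_from_a540(a540, slope, intercept=0.0):
    """Convert A540 to umol alpha-ketobutyrate using a linear standard
    curve of the form A540 = slope * umol + intercept."""
    return (a540 - intercept) / slope

def accd_activity(a540_sample, a540_blank, curve_slope, protein_mg, hours):
    """ACCD activity as umol alpha-ketobutyrate per mg protein per hour;
    the no-ACC blank is subtracted first."""
    umol = akb_from_a540(a540_sample - a540_blank, curve_slope)
    return umol / (protein_mg * hours)

# All numbers are hypothetical; the slope is in A540 per umol
# alpha-ketobutyrate, taken from the standard curve.
activity = accd_activity(a540_sample=0.45, a540_blank=0.05,
                         curve_slope=0.004, protein_mg=0.8, hours=0.5)
```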
Determination of Phytohormone-like Compounds by Trichoderma
The strains were grown in 200 mL of PDB or PDB with 200 mg/L of tryptophan (PDB-Trp) at 28 °C and 200 rpm for 4 days, and culture supernatants were collected by filtration. In parallel, uninoculated PDB and PDB-Trp media were used as controls. The supernatants were lyophilized, the dry weight was measured, and they were kept at 4 °C until hormone extraction.
Fifty mg (dry weight) of fungal culture or medium supernatant (control) was suspended in 80% methanol-1% acetic acid containing internal standards and mixed by shaking for 60 min at 4 °C. The extract was kept at −20 °C overnight and then centrifuged, and the supernatant was dried in a vacuum evaporator. The dry residue was dissolved in 1% acetic acid and passed through an Oasis® HLB (reverse phase) column as previously described [61].
For GA, IAA, ABA and SA quantification, the dried eluate was dissolved in 5% acetonitrile-1% acetic acid, and the hormones were separated using an autosampler and reverse-phase UHPLC chromatography (2.6 µm Accucore RP-MS column, 100 mm length × 2.1 mm i.d., ThermoFisher Scientific) with a 5 to 50% acetonitrile gradient containing 0.05% acetic acid, at 400 µL/min over 21 min. For CKs, the extracts were additionally passed through an Oasis® MCX (cationic exchange) column and eluted with 60% methanol-5% NH4OH to obtain the basic fraction. The final eluate was dried and dissolved in 5% acetonitrile-1% acetic acid, and the CKs were separated with a 5 to 50% acetonitrile gradient over 10 min. The hormones were analyzed with a Q-Exactive mass spectrometer (Orbitrap detector, ThermoFisher Scientific, Waltham, MA, USA) by targeted selected ion monitoring (SIM). The concentrations of hormones in the extracts were determined using embedded calibration curves and the Xcalibur 4.0 and TraceFinder 4.1 SP1 programs. Deuterium-labelled hormones were used as the internal standards for the quantification of each of the different plant hormones. Three independent replicate flasks were analyzed for each strain and culture medium.
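Quantification against a deuterium-labelled internal standard reduces, in its simplest single-point form (rather than the embedded calibration curves handled here by Xcalibur/TraceFinder), to scaling the analyte/standard peak-area ratio by the known spiked amount. All values below are hypothetical:

```python
def isotope_dilution_conc(area_analyte, area_is, is_amount_ng,
                          dry_weight_mg, response_factor=1.0):
    """Analyte content (ng per mg dry weight) from the analyte/internal-
    standard peak-area ratio and the known amount of spiked standard."""
    analyte_ng = (area_analyte / area_is) * response_factor * is_amount_ng
    return analyte_ng / dry_weight_mg

# Hypothetical peak areas and spike: 10 ng of a deuterated standard
# spiked into 50 mg (dry weight) of extract.
conc = isotope_dilution_conc(area_analyte=2.0e6, area_is=4.0e6,
                             is_amount_ng=10.0, dry_weight_mg=50.0)
```

A multi-point calibration curve of area ratio versus concentration ratio, as used by the vendor software, additionally corrects for non-unit and non-linear response factors.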
Wheat-Trichoderma Greenhouse Assay
The ability of the four Trichoderma strains, T49, T68, T75 and T115, to promote the growth of wheat plants and induce tolerance to water stress was evaluated in an in vivo assay. Wheat (Triticum aestivum L., variety Berdún) seeds were surface-disinfected by shaking in 2% sodium hypochlorite for 20 min, followed by an additional step of 1 min in 0.1 N HCl, and then rinsed five times with sterile water. Seed stratification was conducted for 3 days at 4 °C. Trichoderma was applied to the plant growth substrate; Trichoderma cultures were obtained by inoculating 0.5 L flasks containing 250 mL of PDB or PDB-Trp medium with 1 × 10⁶ spores/mL and growing the strains at 28 °C and 180 rpm for 4 days. Then, 250 mL of Trichoderma culture (mycelium and supernatant) were used to inoculate 10 pots.
Surface-disinfected seeds were sown in conical pots (two seeds per pot) of 250 mL capacity containing as substrate a sterile 3:1 mixture of commercial peat (Projar Professional, Comercial Projar SA, Fuente el Saz de Jarama, Spain) and vermiculite. The assay initially included 20 treatments and a total of 200 plants, distributed in two blocks (100 plants per block with 10 replicates per treatment) as follows: five for PDB, comprising four PDB cultures and one PDB medium (control); and five for PDB-Trp, comprising four PDB-Trp cultures and one PDB-Trp medium (control). Plants were maintained in a greenhouse at 22 ± 4 °C, as previously described [24], and watered as needed for 2 weeks. Then, the plants from each of the two blocks indicated above were divided into two sub-blocks, giving the following four groups: (i) plants from PDB cultures with optimal irrigation; (ii) plants from PDB cultures with water stress (1/3 watering during the third and fourth weeks); (iii) plants from PDB-Trp cultures with optimal irrigation; and (iv) plants from PDB-Trp cultures with water stress. This assay included 10 replicates per condition and lasted 30 days.
Physiological Parameters of Plants
Stomatal conductance (gs) data were taken on 30-day-old wheat plants (10 plants per condition). The gs was measured on the abaxial leaf surface using an AP4 leaf porometer (Delta-T Devices Ltd., Cambridge, UK). The total shoot of the wheat plants was harvested at 30 days to record fresh weight (five plants per condition) and dry weight (five plants per condition, after maintaining the plants at 65 °C for 5 days).
Biochemical Analyses of Plants
Thirty-day-old wheat plants from the greenhouse assay were used to analyze several enzymatic activities. An intermediate leaf of four wheat plants was collected from each treatment and each condition considered (optimal irrigation and water stress), immediately frozen in liquid nitrogen and ground. Proteins were extracted by homogenizing 50 mg of leaf material in 1 mL of 50 mM potassium phosphate buffer (pH 7.8) and centrifuging at 10,000 rpm for 20 min at 4 °C; the supernatant was then used for the estimation of the activities of the antioxidant enzymes superoxide dismutase (SOD), catalase (CAT) and peroxidase (POD). The activities of CAT and POD were determined spectrophotometrically as previously described [62], with one unit defined as a change of 0.01 absorbance units per min. The activity of SOD was measured according to a previously reported procedure [62] with minor modifications. The reaction mixture contained 2 mL of 50 mM potassium phosphate buffer (pH 7.8), 13 mM methionine, 80 µM nitro blue tetrazolium chloride (NBT), 15 µM riboflavin, and 50 µL of protein extract. One unit of SOD was considered the amount of enzyme needed to cause 50% inhibition of the photochemical reduction of NBT. The activities of CAT, POD and SOD were expressed as units per min per mg protein, and data were calculated for four biological replicates per treatment-condition combination.
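The unit definitions above translate directly into arithmetic: CAT/POD units are the absorbance change per minute divided by 0.01, and SOD units are the percent inhibition of NBT photoreduction divided by 50. A sketch with hypothetical readings:

```python
def cat_pod_units(delta_abs_per_min):
    """One unit = a change of 0.01 absorbance units per minute."""
    return delta_abs_per_min / 0.01

def sod_units(abs_no_enzyme, abs_with_extract):
    """One unit causes 50% inhibition of NBT photoreduction, so
    units = percent inhibition / 50."""
    inhibition = 100.0 * (abs_no_enzyme - abs_with_extract) / abs_no_enzyme
    return inhibition / 50.0

def specific_activity(units, protein_mg):
    """Express activity per mg protein in the assayed extract aliquot."""
    return units / protein_mg

# Hypothetical readings: 0.05 absorbance units/min for CAT, NBT
# absorbance 0.80 without and 0.48 with extract, 0.4 mg protein.
cat = specific_activity(cat_pod_units(0.05), protein_mg=0.4)
sod = specific_activity(sod_units(0.80, 0.48), protein_mg=0.4)
```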
H2O2 Content in Wheat Plants
The quantification of H2O2 was carried out using potassium iodide, monitoring the absorbance at 390 nm as previously reported [63]. Fresh plant material was ground in liquid nitrogen, and 50 mg was used for each sample. Four biological replicates per treatment-condition combination were assayed.
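In practice the A390 readings are converted to H2O2 amounts through a linear standard curve fitted to known H2O2 standards. A pure-Python sketch, with hypothetical calibration points and sample values:

```python
def fit_line(x, y):
    """Ordinary least-squares fit of y = a*x + b for a standard curve."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return a, my - a * mx

# Hypothetical calibration: A390 readings for H2O2 standards (umol).
std_umol = [0.0, 5.0, 10.0, 20.0]
std_a390 = [0.02, 0.27, 0.52, 1.02]
slope, intercept = fit_line(std_umol, std_a390)

def h2o2_per_g(a390, fresh_weight_g):
    """umol H2O2 per gram fresh weight from a sample absorbance."""
    return (a390 - intercept) / slope / fresh_weight_g

content = h2o2_per_g(a390=0.52, fresh_weight_g=0.05)  # 50 mg sample
```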
Statistical Analysis
IBM SPSS® Statistics 27 (IBM Corp.) was used for the statistical analyses: an analysis of variance (ANOVA) to test for possible interactions between the main effects (strain, culture medium, water stress), followed by mean separation using Tukey's test (p < 0.05).
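For readers without SPSS, the core of the ANOVA step is the F statistic, the between-group mean square divided by the within-group mean square; Tukey's post hoc test then separates the group means. A minimal pure-Python sketch of the one-way case (the actual analysis was factorial, testing interactions among strain, medium and stress), using hypothetical conductance values:

```python
def anova_f(groups):
    """One-way ANOVA F statistic: between-group mean square divided by
    within-group mean square."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2
                     for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical stomatal conductance values for three treatments.
f_stat = anova_f([[200, 210, 190], [250, 260, 255], [150, 140, 160]])
```

The F statistic is then compared against the F distribution with (k − 1, n − k) degrees of freedom to obtain the p-value.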
Conclusions
Four Trichoderma strains belonging to genotypically distant species, T. virens T49, T. longibrachiatum T68, T. spirale T75 and T. harzianum T115, were able to produce, to a greater or lesser extent, not only the already known IAA and SA, but also the CKs iP and tZ. However, not all strains produced the phytohormones GA1, GA4, ABA and the CK DHZ. In addition, the four Trichoderma strains displayed ACCD activity. Phytohormone production depended on the strain and/or the composition of the culture medium. The Trichoderma strains showed different root colonization behavior, with wheat not appearing to be a host for T68. The application of PDB cultures of the Trichoderma strains can be linked to the ability of wheat plants to adapt their antioxidant machinery and to tolerate water stress. However, the application of non-inoculated PDB-Trp caused water-stressed control plants to collapse, while those treated with Trichoderma did not. In any case, plant ROS production and antioxidant activities did not seem to respond to water stress in any of the treatments that included Trp, although the plants treated with Trichoderma PDB-Trp cultures showed better protection. Plants treated with T49 and T115 showed the best water stress tolerance phenotypes. Perhaps the production of GA4 by T49 and of ACCD by T115 could explain this good performance of the wheat plants.
"year": 2021,
"sha1": "492e1dcf1fc53a78b4773d519d0e6bc059120ed1",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-0817/10/8/991/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ed1f685ec7430c0bbeeb4a377ed11ffa762dba92",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Introducing ESD Concepts Into Teaching Education Program
In response to the new Basic Plan for Promotion of Education drawn up by the Ministry of Education, Culture, Sports, Science and Technology, the Faculty of Education of Shizuoka University redefined its mission, upholding target items such as education closely connected with the community and enhancing the level of teacher training from local, national, and global perspectives. Based on this new orientation, the faculty introduced class subjects on ESD into the teacher education curricula and, in order to deepen students' understanding of ESD, also scheduled a field trip (a student exchange program) aimed at learning with and at the community. In this paper, we introduce these two activities and explore their possibilities in enhancing educational practices in ESD. It is considered essential that, in teaching in an ESD context, the educator works on: a) building ownership of the issues involved in creating a sustainable society; b) acquiring regional/local cognition, positioning oneself as one of the stakeholders; and c) understanding the links between local and global issues. Enhancement of these skills and capacities will be discussed from the perspectives of: i) the system of teacher education, ii) methods of teaching and learning, iii) collaboration among university, schools, and community, and iv) evaluation of qualifications and capacities.

Keywords: ESD, System of Teacher Education, Method of Teaching and Learning

Introduction

The Faculty of Education of Shizuoka University redefined its mission in 2013, upholding target items such as "education closely connected with the community" and "enhancing the level of teacher training from local, national, and global perspectives." As a part of teacher training reforms based on these circumstances, the Faculty of Education at Shizuoka University launched the Working Group for Promotion of ESD and Internationalization in 2013 and has since been involved in the promotion of ESD for teacher training.
This report is an early-stage discussion two years after the adoption of the program, looking at the challenges relating to fostering the fundamental qualities and abilities necessary to build a sustainable society. In particular, we describe in an organized manner the challenges that have been revealed in our efforts to enhance the following abilities in students: to have a sense of ownership regarding the issues in building a sustainable society; to acquire proper knowledge of the area with the readiness to work, as a member of the community, to solve those issues; and to understand the connection between local and global issues.

(Lucia E. Yamamoto, Keiko Ikeda, and Akihito Nakajo, Journal of Sustainable Development Education and Research, Vol. 1, No. 1, 2017, pp. 31-34)

Initiatives by the Faculty of Education at Shizuoka University

Introduction of ESD courses to faculty curriculum

Starting in the academic year of 2014, courses related to ESD were newly established for first- to third-year students. The class titles offered are: ESD and Issues in Contemporary Society for first-year, ESD Theories and Practice for second-year, and ESD and Practical Training for third-year students. All of these were introduced as optional quasi teacher-training-related subjects.

This report discusses the course started in 2014 for first-year students: ESD and Issues in Contemporary Society (full year, four credits). The objectives of the course are to raise interest in ESD and to have the students acquire the skills and attitudes to communicate with others on issues in modern society and to cooperate with them in trying to solve these problems. To practice step-by-step learning toward such goals, the course was set for a full year, and student-led learning methods built upon problem-based learning (PBL) were also introduced. After attending classes on issues closely related to the creation of a sustainable society, the students participated in discussions and research, learned in a hands-on style, and completed the course after exploring the issues from an ESD perspective and proposing solutions in a report. Each of the students selected one of the seven challenges discussed in the course (environment, energy, multicultural coexistence, human rights and poverty, aging society with dwindling birth rate and the local community, multilingual environment, and development), by which they were divided into seven groups and deepened their learning.

Community service learning courses at Gadjah Mada University in Indonesia

A community service learning program has been implemented in cooperation with Gadjah Mada University, a national university in Indonesia, which has entered into an exchange agreement with the Faculty of Education of Shizuoka University. In the program, students of the two countries learn about each other's local issues through mutual visits. In August, our students spent around ten days attending practical lessons given by Gadjah Mada University in a suburban village near Jogjakarta City, Indonesia, and went on field trips to observe how communities are supported by the university. More specifically, the Japanese students examined the characteristics of resources for tourism in the area from the standpoint of developing sustainable tourism. They were also engaged in educational practice, health service, and other activities, and got hands-on learning at a site where academically acquired knowledge and skills are being given back to the area in accordance with its needs. Students from Gadjah Mada University, on the other hand, stayed in Japan in December for about ten days. They visited municipal government agencies and non-profit organizations that are striving for the development of sustainable tourism in Shizuoka Prefecture, capitalizing on Mt. Fuji as their resource.
Through their tourism experiences in Japan, the Indonesian students learned about the differences between Japan and Indonesia and also about development methods adaptable to their country.

Challenges and prospects for the promotion of ESD in teacher training

The ESD Implementation Plan states, "It is necessary that learners become aware of diverse issues, view these as their own, and act toward solutions". To this end, it is important for the individuals receiving education to work in a nearby community to develop implementation methods tailored to the local characteristics and to promote them. That is to say, the essential qualities and abilities for those who work for the building of a sustainable society are, first and foremost, having a sense of ownership regarding the issues in achieving a sustainable society, and being prepared to take responsibility, as a member of the community, for settling those issues. Based on these assumptions, it is indispensable to be able to acquire proper awareness about the area and to understand the relationship between local and global issues.

These basic qualities and abilities are naturally important for teachers, who are expected to practice ESD along with children, and thus should be nurtured in teacher training. In the present situation, however, various challenges are present.

First of all, unfortunately not many students show interest in or actually participate in the class subjects or the student exchange program described above. The number of students who took the course ESD and Issues in Contemporary Society was 13 in 2014 and 21 in 2015. Moreover, while the community service learning project through the student exchange program has set its maximum number of students at five, the number of applicants has not even reached this quota.

Most students have had the experience, during their school days including high school, of studying more than one topic among the variety of ESD learning tasks.
In addition, students who took ESD-related courses expressed thoughts such as, "When I talked with three other classmates, it was the first time I was able to think deeply about multicultural coexistence," and "Now I would like to research the reality of homelessness (poverty) on my own. I wish I could study further in a hands-on manner." These thoughts give a glimpse into these students' enthusiasm for exploring the issues that they dealt with in the learning activities. Therefore, we do not regard them as having no basic knowledge of the challenges in creating a sustainable society or no interest in them. We presume that they have not had experiences or opportunities to see the issues as their own, or their own community's, and to actively learn or take action. Furthermore, when it comes to a global issue where it is difficult to understand its relationship with issues occurring around them or in a familiar community, the students obviously had more difficulty in having a sense of ownership.

Certainly, education offered at a high school that complies with the current curriculum guideline focuses on individual fields, with few opportunities to look at them as a whole or to consider the links between one issue and another. This might have resulted in a reluctance to understand the ESD philosophy or way of learning. It is said that, in response to this present situation, the next revision of the curriculum guidelines will contain two mandatory classes in high school geography and history: Integrated History, which studies the modern history of Japan and the world; and Integrated Geography, which nurtures the power to resolve local- and global-scale problems. We suspect that the ESD philosophy will be heavily reflected in the new subjects to be introduced in the restructuring of the high school curriculum.
To address such a situation among students, how can we make them realize that they themselves have to acquire a sense of ownership and knowledge of the community, and enhance their level of understanding regarding the relationship between local and global issues? Hereafter we will organize and present what are thought to be necessary or effective in the following.
Introduction
The Faculty of Education of Shizuoka University redefined its mission in 2013, upholding target items such as "education closely connected with the community" and "enhancing the level of teacher training from local, national, and global perspectives."As a part of teacher training reforms based on these circumstances, the Faculty of Education at Shizuoka University launched the Working Group for Promotion of ESD and Internationalization in 2013 and has since been involved in the promotion of ESD for teacher training.This report is an early-stage discussion two years after the adoption of the program, looking at the challenges relating to fostering fundamental qualities and abilities necessary to build a sustainable society.In particular, we will describe in an organized manner the challenges that have been revealed in our efforts to enhance the following abilities in students: to have a sense of ownership regarding the issues in building a sustainable society; acquire proper knowledge of the area with the readiness to work to solve, as a member of the community, the issues; and to understand the connection between local and global issues.
Introduction of ESD courses to faculty curriculum
Starting in the academic year of 2014, courses related to ESD were newly established for the firstto third-year students.The class titles offered are: ESD and Issues in Contemporary Society for the first-, ESD Theories and Practice for the second-, and ESD and Practical Training for the third-year students.All of these were introduced as optional quasi teacher-training-related subjects.
This report discusses the course started in 2014 for first-year students: ESD and Issues in Contemporary Society (full year, four credits).The objectives of the course are to raise interest in ESD, to have the students acquire the skills and attitudes to communicate with others on issues in modern society and to cooperate with them in trying to solve these problems.To practice step-bystep learning toward such goals, the course was set for a full year, and student-led learning methods built upon problem-based learning (PBL) were also introduced.After attending classes on issues closely related to the creation of a sustainable society, the students participated in discussions and research, learned in a hands-on style, and completed the course after exploring the issues from an ESD perspective and proposed solutions in a report.Each of the students selected one of the seven challenges that were discussed in the course (environment, energy, multicultural coexistence, human rights and poverty, aging society with dwindling birth rate and the local community, multilingual environment, and development), by which they were divided into seven groups and deepened their learning.
Community service learning courses at Gadjah Mada University in Indonesia
A community service learning program has been implemented in cooperation with Gadjah Mada University, a national university in Indonesia, which has entered into an exchange agreement with the Faculty of Education of Shizuoka University.In the program, students of the two countries learn about each other's local issues through mutual visits.In August, our students spent around ten days attending practical lessons given by Gadjah Mada University in a suburban village near Jogjakarta City, Indonesia and went on field trips to observe how community are supported by the university.More specifically, the Japanese students examined the characteristics of resources for tourism in the area from the standpoint of developing sustainable tourism.They were also engaged in educational practice, health service, and other activities, and got handson learning at a site where knowledge and skills academically acquired are being given back to the area in accordance with its needs.Students from Gadjah Mada University, on the other hand, stayed in Japan in December for about ten days.They visited municipal government agencies and non-profit organizations that are striving for the development of sustainable tourism in Shizuoka Prefecture capitalizing on Mt.Fuji as their resource.Through their tourism experiences in Japan, the Indonesian students learned about the differences between Japan and Indonesia and also development methods adaptable to their country.
Challenges and prospects for the promotion of ESD in teacher training
The ESD Implementation Plan states, "It is necessary that learners become aware of diverse issues, view these as their own, and act toward solutions". To this end, it is important for those receiving education to work in a nearby community, develop implementation methods tailored to local characteristics, and promote them. That is to say, the essential qualities and abilities for those who work toward building a sustainable society are, first and foremost, having a sense of ownership regarding the issues involved in achieving a sustainable society and being prepared to take responsibility for settling those issues as a member of the community. Based on these assumptions, it is indispensable to acquire proper awareness of the area and to understand the relationship between local and global issues.
These basic qualities and abilities are naturally important for teachers, who are expected to practice ESD along with children, and thus should be nurtured in teacher training. In the present situation, however, various challenges exist.
First of all, unfortunately, not many students show interest in or actually participate in the class subjects or student exchange program described above. The number of students who took the course ESD and Issues in Contemporary Society was 13 in 2014 and 21 in 2015. Moreover, while the community service learning project through the student exchange program has set its maximum enrollment at five students, the number of applicants has not even reached this quota.
Most students have had the experience, during their school days including high school, of studying more than one topic among the variety of ESD learning tasks. In addition, students who took ESD-related courses expressed thoughts such as, "When I talked with three other classmates, it was the first time I was able to think deeply about multicultural coexistence," and "Now I would like to research the reality of homelessness (poverty) on my own. I wish I could study further in a hands-on manner." These comments give a glimpse into the students' enthusiasm for exploring the issues that they dealt with in the learning activities. Therefore, we do not regard them as lacking basic knowledge of, or interest in, the challenges of creating a sustainable society. We presume, rather, that they have not had experiences or opportunities to see the issues as their own, or their own community's, and to actively learn or take action. Furthermore, when it comes to a global issue whose relationship to issues occurring around them or in a familiar community is difficult to understand, the students obviously had more difficulty developing a sense of ownership.
Certainly, education offered at high schools complying with the current curriculum guideline focuses on individual fields, with few opportunities to survey them as a whole or to consider the links between one issue and another. This might have resulted in a reluctance to embrace the ESD philosophy or way of learning. It is said that, in response to this situation, the next revision of the curriculum guidelines will contain two mandatory classes in high school geography and history: Integrated History, which studies the modern history of Japan and the world, and Integrated Geography, which nurtures the power to resolve local- and global-scale problems. We expect that the ESD philosophy will be heavily reflected in the new subjects to be introduced in the restructuring of the high school curriculum.
Given this situation, how can we make students realize that they themselves have to acquire a sense of ownership and knowledge of the community, and how can we enhance their understanding of the relationship between local and global issues? Hereafter we organize and present what we consider necessary or effective in the following four segments: 1. Teacher training system; 2. Teaching and learning methods; 3. Cooperation with schools and communities; and 4. Evaluation of students' quality and ability.
(i) Teacher training system
The challenges in firmly incorporating ESD-related content in the teacher-training curriculum are as follows. Merely listening to classroom lectures is not sufficient for learning about the challenges of building a sustainable society and how to deal with them. Fieldwork and reporting activities do not fit in the time allotted for one class period, yet intensive courses lasting several days are not likely to allow for good fieldwork and other activities. It is therefore necessary to devise the class timetable adeptly, for example one classroom lecture lasting two class periods every other week. The teacher training department is not as flexible as others in scheduling timetables, so some special planning would be necessary. Moreover, as off-campus activities increase, appropriate rules regarding insurance, transportation, and the like will have to be formulated.
It is anticipated that the teachers themselves may encounter unfamiliar topics, which will burden them in terms of securing time to prepare for class, let alone time for accompanying students in their fieldwork, learning activities, and, particularly, exchange programs abroad. This problem is compounded by the current situation in which few teachers have experience in, or interest in, practicing fieldwork. Teachers have become increasingly busy due to overwhelming administrative work, making overseas business trips of more than ten days almost impossible. We have assigned two teachers who take shifts in fulfilling the duties of leading and assisting students who visit the overseas universities that have concluded exchange agreements with us. The number of teachers in the teacher training department who have experience practicing fieldwork, or who find it interesting, is extremely limited in comparison with the humanities and social sciences departments. Considering that business trips to rural villages in developing countries with poor communication environments pose a certain risk, it is a serious challenge to secure a sufficient number of teachers who are capable of flexibly coping with such environments and carrying out their work.
(ii) Teaching and learning methods
Various learning methods have been tried out in ESD promotion initiatives. Seminar-style lectures with a small number of people are given for this purpose, mainly employing active learning and service learning. ESD courses offer omnibus-style lectures taught by several teachers whose specialties vary among the ESD-related subjects. Students taking these courses are provided with opportunities to understand diverse challenges in multifaceted ways. In these activities and lectures, a PBL format is adopted in which students and participants form a team to explore challenges. Within a fixed time period, however, the PBL method proved incapable of deepening students' awareness of the challenges. Devising a desirable size and length for the classes and activities, and planning the curriculum with a long-term outlook, is deemed necessary.
Challenges are also seen in the understanding of the community fostered in classes about communities. When students were asked about issues in their area or about the distribution of resources in their own communities, it was obvious that they did not have much awareness of the current state of affairs. What is first sought is training to deepen students' awareness of their sphere of daily life (the familiar area), and then honing this understanding so that it can actually be applied in other areas.
What was drawn from the students' participation in community service learning at Gadjah Mada University is that they need to develop the ability to understand a locality, such as the distribution of its resources and the means to leverage them. This is because there is no uniform way to develop sustainable tourism: such development makes the best use of the unique community resources, including the lifestyle maintained in the area as well as its environment, and the ideas utilized for this must be based on a comprehension of the regional profile and its specific characteristics.
Our challenge for the future is to make sure that students accumulate practical fieldwork experiences through ESD courses and put their skills into practice in other areas in Japan or abroad.
(iii) Cooperation with schools and communities
Students taking the ESD courses need to go off campus to deepen their understanding of current issues. They need to comprehend the circumstances of the issues and witness the specific efforts being made. The students can thereby affirm where they stand and recognize that the issues they have picked are their own. In short, practical training opportunities on campus and in the local community are essential for ESD. Nevertheless, it is too great a burden on individual teachers to coordinate these opportunities. We must secure sites of practical training not only by relying on teachers' personal connections with the community but also by building links and collaborative relationships as an organization. As partnerships with schools and the community are hardly achieved in a top-down manner, it is important to maintain day-to-day contact with them to construct mutually beneficial relations.
(iv) Evaluation of students' quality and ability

Looking at the actual situations of students, we see many who study hard on the limited-scope assignments they are given, but it takes time for them to acquire attitudes of independent learning. Although they have some knowledge of issues in daily life and global issues, their curiosity and interest seem to be limited in range. It will be effective to find ways to organically connect the existing curriculum with the study of ESD courses, such as encouraging study from a global angle utilizing interdisciplinary courses.
Another remaining problem is how to evaluate what students have learned. The conventional method of measuring and grading the amount of knowledge acquired does not fit the PBL class format. As time is needed to discover problems and think about possible solutions, assessing the long-term manifestation of students' learning is difficult. We include in our evaluation the students' attitudes, which continue to transform throughout the course. Not only the completed works, such as exhibited reports, presentations, and the implementation of cooperative problem solving, should ultimately be evaluated; the process of getting to that point should also be considered an important part of their achievement.
Looking toward the Future
Based on the experience accumulated in the two short years since ESD promotion in teacher training began, this report has organized and presented the challenges in nurturing the fundamental qualities and abilities necessary to be a contributor to a sustainable society, including acquisition of a sense of ownership and knowledge of the area and understanding of the relationship between local and global issues.
Students are expected to acquire, under the tight schedule of the teacher training curriculum, a sense of ownership and knowledge of the area while also raising their level of knowledge regarding the various challenges accompanying ESD. In order to support students in achieving this goal, we will be required to follow their changes with a long-term vision, from the time they enter university until graduation, and to evaluate them accordingly, rather than completing evaluation within each individual class or subject. To deepen students' learning in PBL activities and participatory classes, in addition to organically utilizing the current curriculum and the practice opportunities available in the community, evaluation methods that place more importance on the process should be fully developed.
Implicit-statistical learning in aphasia and its relation to lesion location
Background: Implicit-statistical learning (ISL) research investigates whether domain-general mechanisms are recruited in the linguistic processes that require manipulation of patterned regularities (e.g. syntax). Aphasia is a language disorder caused by focal brain damage in the left fronto-temporal-parietal network. Research shows that people with aphasia (PWA) with frontal lobe lesions manifest convergent deficits in syntax and ISL mechanisms. So far, ISL mechanisms in PWA with temporal or parietal lobe lesions have not been systematically investigated.
Introduction
Detection, encoding and exploitation of regularities in a given environment is essential for the successful learning and use of many skills (Conway and Pisoni, 2008). Language is a prominent example of such a skill (Christiansen, 2018; Siegelman et al., 2017). Language is comprised of units at multiple levels (e.g. phonemes, syllables, words, phrases); these basic building blocks of language are assembled into complex structures, which in turn can be defined in statistical and probabilistic terms (Conway and Pisoni, 2008).
Two research traditions, Implicit learning (IL) and Statistical learning (SL), investigate how the domain-general capacity to process regularities contributes to language acquisition and language processing (e.g. Erickson and Thiessen, 2015;Ettlinger et al., 2015;Dehaene and Cohen, 2007;Kepinska et al., 2017;van Witteloostuijn et al., 2019a). Recently, it was proposed that the two traditions be referred to under the term Implicit-statistical learning (ISL; Christiansen, 2018). It is believed that domain-general ISL mechanisms are recruited in linguistic processes that require manipulation of patterned regularities, the most prominent example being syntactic processing (Udden et al., 2017). Behavioral studies have shown that performance on ISL tasks and syntactic processing are related in healthy adults (Conway et al., 2010;Daltrozzo et al., 2017;Misyak and Christiansen, 2012). In the same vein, neuroimaging research has yielded robust evidence to suggest that ISL mechanisms activate anatomical regions which overlap with those activated by syntactic processing, specifically in the left inferior frontal gyrus (LIFG) in the frontal lobe (for general reviews see: Bapi et al., 2005;Conway and Pisoni, 2008;Udden and Bahlmann, 2012, and for experimental studies see: Karuza et al., 2013;Seger et al., 2000;Udden et al., 2017). Given this behavioral and anatomical link between syntactic processing and ISL mechanisms, it has become a matter of empirical interest to study these mechanisms in people with language disorders. Aphasia is an overarching term for a series of acquired language disorders that result from brain damage in key anatomical regions or networks, most commonly after the occurrence of a stroke (Caplan, 2015). These key anatomical regions are usually dispersed across the frontal, the temporal, and the parietal lobes of the left hemisphere (Ibid.). 
The exact location of lesions in aphasia patients has been shown to be associated with their linguistic profiles (see Caplan, 2015 or Goodglass and Kaplan, 1994 for reviews). A particularly interesting group for analysis of ISL mechanisms is patients with Broca's aphasia, a language disorder that is characterized by deficits in complex syntactic processing (see e.g. Caplan et al., 1985; Friederici and Kotz, 2003; Novick et al., 2005). Patients with Broca's aphasia typically suffered damage to the frontal lobe, more specifically to the LIFG (Fitch and Friederici, 2012). These are the same anatomical regions that are activated during ISL tasks. This anatomical convergence has motivated a wealth of research that seeks to examine whether the syntactic impairment in Broca's aphasia might stem from a deficit in domain-general ISL mechanisms, rather than from a selective impairment to the syntax module (Cope et al., 2017; Dominey et al., 2003; Goschke et al., 2001; Schuchard and Thompson, 2013; Schuchard et al., 2016; Zimmerer et al., 2014). The majority of the aforementioned research has shown that patients with Broca's aphasia manifest impaired ISL abilities (but see Cope et al., 2017), which was suggested to support the hypothesis that ISL mechanisms play a prominent role in syntactic processing.
One limitation shared by most of these studies is that they lack an aphasic group with intact frontal regions. A second limitation is the lack of direct empirical evidence that syntactic processing and impairment in ISL mechanisms in aphasia correlate (with the exception of Dominey et al., 2003; however, this study is only available as a short research summary and full study details are not provided). We aim to evaluate the performance of two anatomically defined groups of Persons With Aphasia (PWA), one with lesions in frontal regions (f-PWA) and the second with lesions in posterior regions (p-PWA), on a visual statistical learning (VSL) task in order to determine whether a lesion in the LIFG region is a prerequisite for impaired ISL abilities in aphasia. Furthermore, we aim to examine the relationship between ISL and syntactic processing in aphasia. In the subsequent paragraphs we succinctly review the theoretically pertinent literature on ISL and the relevant findings regarding the link between language processing and ISL tasks. This is followed by a review of previous studies on ISL in aphasia.

Box 1. Implicit learning and Statistical learning: summary

Implicit learning (IL)

IL is traditionally investigated through Artificial Grammar Learning (AGL) tasks. AGL tasks are believed to mirror natural language syntax, most often investigating relations between different types of non-adjacent dependencies (Reber, 1967; Christiansen et al., 2010; Silva et al., 2018). Example a) shows the simplest type of non-adjacent dependency, in which the participant must learn that A and B are dependent, irrespective of the intervening element X. AGL grammars represent various degrees of complexity (for a review see Fitch and Friederici, 2012).

Statistical learning (SL)

SL is traditionally investigated with a boundary detection task. In this task, the participant must identify boundaries based on transitional probabilities (TPs). This was originally investigated with continuous speech (Saffran et al., 1996), but speech has also been substituted with shapes, tones, or tactile sensations (e.g. Siegelman et al., 2017; van Witteloostuijn et al., 2019a; Daltrozzo et al., 2017). Example b) illustrates how the TP of syllable 'ra' after syllable 'ma' (TP = 1) exceeds the TP of 'tu' after 'ra' (TP = 0.33).

a) Non-adjacent dependency: AXB
b) ha-ma-ra-tu-mi-ka-ha-ma-ra-ko-ti-po-ha-ma-ra-pa-ki

Underlying mechanisms in IL and SL

As illustrated with the examples of both IL and SL paradigms, the two traditions study distinct structured patterns. IL focuses on the rule-based knowledge that is acquired in an AGL task. No rule-based learning is postulated to occur in an SL task, where the participant must solely gain sensitivity to the TPs of each element. These theoretical assumptions were thought to reflect the differences between syntax acquisition and lexicon learning (Perruchet and Pacton, 2006). Notwithstanding, experimental evidence from recent years has shown that the two traditions may investigate overlapping functions, as it is under investigation whether rule-based learning in AGL tasks also requires sensitivity to the TPs of elements (as required for SL tasks). Furthermore, both traditions have adopted features from each other (e.g. traditional SL tasks can now examine finite-state rule-based grammars; Saffran and Wilson, 2003), and therefore a clear separation between SL and IL is no longer feasible. For a detailed review on the issue of rule-based learning vs. transitional learning, see Hauser et al. (2012); Perruchet and Pacton (2006); Silva et al. (2018).
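The transitional-probability arithmetic in example b) of Box 1 can be made concrete with a short sketch (illustrative only; the `transitional_probabilities` helper and variable names are ours, not drawn from any of the cited studies):

```python
from collections import Counter

def transitional_probabilities(syllables):
    """Estimate TP(b | a) = count(a followed by b) / count(a) over adjacent pairs."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

# The stream from example b): the "word" ha-ma-ra recurs with varying continuations.
stream = "ha-ma-ra-tu-mi-ka-ha-ma-ra-ko-ti-po-ha-ma-ra-pa-ki".split("-")
tps = transitional_probabilities(stream)

print(tps[("ma", "ra")])  # 1.0 -> within-word transition, fully predictable
print(tps[("ra", "tu")])  # ~0.33 -> word boundary: 'ra' continues in 3 different ways
```

A boundary detection task assumes that learners become sensitive to exactly these dips: word boundaries are posited wherever the probability of the next element drops sharply.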
Implicit-statistical learning
A marked goal of ISL research is to investigate "the human ability to detect and exploit the relations between elements in close temporal or spatial proximity" (Perruchet and Pacton, 2006, p. 237). The difference between IL and SL lies in the probed structures: while IL typically probes learning of rule-based, long-distance relationships between elements (i.e. non-adjacent dependencies), SL usually assesses the learning of boundaries in sequential elements (i.e. adjacent dependencies). Examples of these dependencies and specific tasks associated therewith are given in Box 1. Recent theoretical reviews concluded that SL and IL share conceptual assumptions (Christiansen, 2018; Milne et al., 2018; Perruchet and Pacton, 2006), likely investigating at least partially overlapping cognitive functions and engaging partially overlapping brain regions (Karuza et al., 2013; Silva et al., 2018). In this study, we espouse the recent proposals by Milne et al. (2018) and Christiansen (2018), which state that it is empirically and experimentally beneficial not to treat these two traditions separately, and will use the umbrella term ISL.
Implicit-statistical learning and language processing: behavioral evidence
Because language is replete with both adjacent and non-adjacent patterns, it has been assumed that domain-general ISL mechanisms play an important role in language acquisition and language processing (see e.g. Kidd et al., 2018;Siegelman et al., 2017). Several studies directly examined performance on ISL tasks and language processing (e. g. Conway et al., 2010;Daltrozzo et al., 2017;Misyak and Christiansen, 2012). While these studies used different ISL paradigms and compared distinct linguistic measures, all aforementioned studies found correlations between ISL mechanisms and adult language processing measures. Conway et al. (2010) found that performance on an ISL task correlated to the word-predictability measure in their study. Misyak and Christiansen (2012) used a visual AGL task with both adjacent and non-adjacent dependencies and found correlations with several syntactic measures (e.g. subject-object relative clauses, animate/inanimate relative clauses) and a word-predictability measure. Lastly, Daltrozzo et al. (2017) investigated how visual SL task performance related to grammatical and receptive vocabulary measures in an event-related potential (ERP) experiment. The authors found correlations between the ERP indices in a visual ISL task and the linguistic measure of grammatical ability. The discussed studies provide behavioral evidence of the link between visual ISL tasks and linguistic measures of syntax and word-predictability, and also show that at least partially overlapping domain-general mechanisms are utilized in both. Having explicated the behavioral link between ISL mechanisms and language processing, we proceed to elucidate the same link at the anatomical level.
Implicit-statistical learning and language processing: anatomical evidence
Anatomically, the LIFG corresponds to the pars opercularis, the pars triangularis, and the pars orbitalis of the left frontal lobe (cytoarchitectonically, Brodmann's areas B44, B45, and B47). A wealth of research has implicated this region as essential for syntactic processing (see Hagoort, 2005 for a review). This was corroborated by a robust finding from fMRI studies which demonstrated that higher syntactic complexity, as measured by number of clauses or number of moved elements, leads to an increase in activation of this region (e.g. Ben-Shachar et al., 2003;Friederici et al., 2005). Two opposing research traditions proposed specific models and theories to account for this finding. The first research tradition encompasses domain-specific linguistic models which postulate that the LIFG (or its subregions) is functionally specialized for syntactic operations (Grodzinsky, 2000;Friederici, 2002). A prominent model, proposed by Friederici and colleagues, postulates that left frontal regions have historically hosted general computational mechanisms for non-hierarchical (local) processing; however, a part of the left frontal regions has specialized, and now two phylogenetically younger sub-regions of the LIFG (BA44/45) are exclusively dedicated to processing the type of recursive hierarchical dependencies found in natural language syntax, namely Phrase-structure grammar (Friederici et al., 2006). In opposition to the syntax-based models, a second research tradition proposed that the LIFG region is not functionally specialized for syntactic operations, but that it rather hosts domain-general functions essential for successful syntactic processing (see Petersson et al., 2012). Within this class of models are those that argue in favor of the hypothesis that ISL mechanisms are located in the LIFG region (see Udden and Bahlmann, 2012 for a review). 
These models predicted joint activation during ISL tasks (which probe the detection and the encoding of patterned regularities) and syntactic processing (which probes the exploitation of patterned regularities). As expected, the literature reports robust findings that processing of non-linguistic patterned regularities, such as that investigated by ISL tasks, activates the same areas as syntactic processing, namely the LIFG region (e.g. AGL tasks: Petersson et al., 2012; Seger et al., 2000; Udden et al., 2017; SL boundary detection task: Karuza et al., 2013).
It is important to note that for the purpose of this study we only introduce a simplified and not fully adequate model of syntactic processing. In recent years, empirical research highlighted the importance of large language networks rather than isolated regions (see Fedorenko and Thompson-Schill, 2014). Furthermore, the notion that individual steps of language processing (e.g. lexical-semantic processing, syntactic processing) are computed in different, non-overlapping brain regions is not uncontroversial (see Blank et al., 2016 for a nuanced discussion).
Implicit-statistical learning in aphasia
Given the anatomical rationale of ISL research in this area, we briefly review how lesion locations connect to the linguistic profiles of PWA. We emphasize that, while lesion location in aphasia and the linguistic profiles have been shown to be associated, caution must be exercised when drawing tentative conclusions, as this association is not always fully transparent, and there remains a large variation in linguistic deficits (Caplan, 2015; Dronkers et al., 2004; Varkanitsa and Kasselimis, 2015). Lesions in the left frontal lobe are most commonly associated with Broca's aphasia. The strongly attested and repeatedly encountered finding is that lesions in the LIFG lead to impairment (from mild to severe) in complex syntactic processing (see e.g. Novick et al., 2005; Caplan et al., 1985; Friederici and Kotz, 2003). Not unlike language processing models, some models in aphasiology posit that the syntactic deficits in Broca's aphasia originate in a selective syntactic impairment (Grodzinsky, 2000; Friederici, 2002), while others propose deficits in domain-general cognitive mechanisms, such as resource capacity (Caplan, 2006), verbal working memory (Baldo and Dronkers, 2006), and, notably, disturbance of ISL mechanisms (e.g. Christiansen et al., 2010; Zimmerer et al., 2014). Lesions in the temporal and parietal lobes, often resulting in Wernicke's aphasia, lead to more pronounced deficits in auditory comprehension and lexical-semantic processing (see e.g. Robson et al., 2012; Thompson et al., 2015). Importantly, lesions in this area are not typically associated with the classical profile of syntactic deficits seen in Broca's aphasia. Patients with a lesion in these posterior areas often present severe comprehension problems, believed to result from deficits in phonological, lexical and semantic processing (see e.g. Thompson et al., 2015 for a review).
Aphasia assessment batteries strive to capture the different linguistic profiles of PWA, and consequently include separate linguistic subcomponents (e.g. lexical, semantic, phonological, syntactic, non-word repetition).
There is a dearth of studies on ISL mechanisms in aphasia (Schuchard et al., 2016). All extant studies to date have tested patients with Broca's aphasia with medium to severe syntactic impairment (Christiansen et al., 2010;Cope et al., 2017;Dominey et al., 2003), with the exception of one that also included patients without syntactic impairment (Zimmerer et al., 2014). There are pertinent differences across these studies that should be highlighted. All studies with the exception of two (Cope et al., 2017;Schuchard et al., 2016) tested a small number of patients, rendering the findings less generalizable. Dominey et al. (2003) and Christiansen et al. (2010) each administered an AGL task in agrammatic aphasia. The researchers found that the patients performed significantly worse than control participants. As a consequence, these authors assert that the syntactic deficits manifested in Broca's aphasia reflect damage to domain-general ISL mechanisms. Notably, Dominey et al. (2003) reported strong correlation between abstract sequence processing and syntactic comprehension in 7 agrammatic PWA. Unfortunately, this result is presented as a brief research summary and methodological details of this study are not available. Zimmerer et al. (2014) compared the performance of PWA with grammatical impairment to PWA without grammatical impairment on an AGL task and found that grammatical impairment was associated with a more pronounced deficit in ISL mechanisms. Zimmerer et al. (2014) did not focus on the proportion of correct vs incorrect responses, but on specific patterns of performance and strategies used by PWA as compared to controls. Considering the complex nature of the artificial grammar used in Zimmerer et al. (2014), the authors were able to analyze which types of violations were or were not rejected by PWA. 
While both PWA groups were impaired on the AGL task (66% in agrammatic speakers, 70% in the aphasic group without syntactic impairment) compared to healthy participants (88%), the authors found that the individual pattern of performance was within normal range in healthy controls and PWA without syntactic impairment. These results contradict those of Cope et al. (2017), who administered a non-word and a tone AGL task to both patients with Broca's aphasia and patients with nonfluent variant Primary Progressive Aphasia (nfvPPA) and discovered that patients of both etiologies exhibited impairment compared to controls. Nonetheless, ISL capacities were not completely absent in these groups. Lastly, Schuchard and Thompson (2013) tested 10 PWA with syntactic impairment on a different experimental paradigm that investigates learning (Serial Search Task) in order to evaluate both implicit and explicit learning deficits in aphasia. It was revealed that implicit learning was impaired to a lesser degree than explicit learning, and the authors suggest that explicit awareness and the maintenance of a sequence place excessive demands on working memory, which in turn impedes the learning process.
Present study
To summarize, ISL research contends that this mechanism is at least partially domain-general and is recruited in language acquisition and language processing in addition to other functions such as perception, music appreciation and motoric skills (see Arciuli, 2016 for a review). Notably, ISL mechanisms are believed to be recruited only in the language modules that necessitate detection, encoding, and exploitation of patterned regularities, such as syntactic processing, but not in other modules like lexical recall. Two lines of evidence, anatomical and behavioral, have yielded empirical support for this contention. Firstly, fMRI studies of non-linguistic ISL tasks (e.g. Karuza et al., 2013; Opitz and Friederici, 2004) found overlapping neural activation with the regions that were previously shown to activate during syntactic processing tasks, namely the LIFG region of the left hemisphere (e.g. Ben-Shachar et al., 2003; Friederici et al., 2005), giving rise to our anatomical hypothesis, which states that syntactic processing and ISL mechanisms anatomically overlap. Secondly, performance on ISL tasks and syntactic processing in adults showed a correlation (e.g. Conway et al., 2010; Daltrozzo et al., 2017), giving rise to our behavioral hypothesis, which claims that syntactic processing at least partially recruits domain-general ISL mechanisms. These two hypotheses are not competing and could be incorporated within a single linguistic model which posits that a) syntactic processing recruits ISL mechanisms and b) this mechanism is located in the LIFG region. Nonetheless, we consider it appropriate to evaluate these two hypotheses separately, as the behavioral hypothesis does not necessitate anatomical overlap, but merely argues that domain-general mechanisms are recruited in syntactic processing, irrespective of their anatomical location.
Given the limitations of previous research, we will examine ISL abilities in two anatomically defined groups, PWA with frontal lesions (f-PWA) and PWA with posterior lesions (p-PWA), on a non-linguistic visual statistical learning (VSL) task with an additional online RT-based measure. Our motivating rationale for anatomically defined groups is the prominent role that the LIFG region is assigned in the ISL literature. It was therefore of particular interest to evaluate ISL abilities in PWA with an intact LIFG region. Furthermore, we will compare the performance on this VSL task to each patient's linguistic impairment in syntactic and lexical processing, as measured by an aphasia assessment battery. This will give us insight into the following research questions: 1. Is damage to the LIFG region a prerequisite for impaired statistical learning capacities? 2. Do syntactic processing and visual ISL in aphasia recruit partially overlapping mechanisms?
Our first research question addresses the spared or disrupted ISL mechanisms in PWA. Based on neuroimaging evidence, we operate under the hypothesis that ISL mechanisms are grounded in the LIFG region. Therefore, we expect that f-PWA (lesioned LIFG) would display worse performance on a VSL task than p-PWA (intact LIFG) and non-brain-damaged control participants.
The second research question examines the extent to which ISL mechanisms are recruited in syntactic processing. We hypothesize, based on previous findings in aphasiology and ISL research, that the degree of syntactic impairment, as measured by aphasia assessment tasks, will correlate with the learning effect on a VSL task. The anatomical group, f-PWA or p-PWA, does not influence our predictions. The f-PWA group is expected to have lower scores on both the VSL task and syntactic tasks, while p-PWA are predicted to have higher scores on these two types of tasks. As a result, the anatomical group is not expected to interact with the correlational analysis. This study has not been pre-registered.
Participants
Three groups of participants were included in this study, all native speakers of Russian. The PWA were separated into two anatomical groups, five PWA with frontal lesions (f-PWA) and eight PWA with posterior lesions (p-PWA), while the third group comprised eleven non-brain-damaged control participants.
PWA (N = 13, mean age 62 years, range 50-70 years; mean years of education 13.7, range 10-18 years; 7 females) were individuals with a single ischemic stroke at least two months prior to testing (mean post-onset time 13 months, range 3-46 months), all premorbidly right-handed, with no hearing or vision impairments (one PWA had corrected vision). All PWA were undergoing post-stroke rehabilitation at the Center for Speech Pathology and Neurorehabilitation (Moscow, Russia). The recruitment conditions for PWA were a) availability of anatomical Magnetic Resonance Images (MRI), and b) a brain lesion located either in the left frontal region or in the left posterior (non-frontal) region, as stated in the patient's MRI report by a local certified radiologist and confirmed by our visual inspection of the native MRI images. All PWA were diagnosed with specific aphasia types and aphasia severity on the basis of a comprehensive neuropsychological investigation by a certified clinical psychologist or a speech-and-language pathologist (Luria, 1964). All participants in the f-PWA group had efferent (approximately equivalent to Broca's aphasia) and afferent (comparable to conduction aphasia) motor aphasia, the linguistic profile of which is characterized by non-fluent and agrammatic speech as well as difficulties in the comprehension of complex syntactic structures. Five participants in the p-PWA group were diagnosed with sensory aphasia. Their linguistic profile was characterized by a pronounced deficit in comprehension at the lexical and sentence levels, as well as paraphasias in production. One patient in the p-PWA group was diagnosed with acoustic-mnestic aphasia: a deficit in verbal working memory, with a high incidence of anomia. One patient had both sensory and acoustic-mnestic aphasia, and one had efferent and afferent motor aphasia. PWA demographics are summarized in Appendix A, Table A.1. Lastly, PWA varied with respect to the length of time post-stroke (3-46 months).
While spontaneous and therapy-induced changes in aphasia severity can occur at any time post-stroke (Holland et al., 2016), our study focused on a single-point measurement of two functions (linguistic and cognitive) that were assessed within 3 weeks of each other, which minimizes the negative influence of differences in time post-stroke. In addition, we excluded patients in the very early subacute period post-stroke (before two months) in order to ensure the stability of the lesion (Heiss, 2011). Control participants (N = 11, mean age 50.2, mean years of education 15; 9 right-handed and 2 ambidextrous) had no history of neurological or psychiatric disorders or uncorrected visual or hearing impairments.
VSL task
The VSL task used in the present study was developed by van Witteloostuijn et al. (2019a) to evaluate statistical learning abilities in children with developmental language impairment and is structurally based on the task in Siegelman et al. (2017). The abstract shapes that were originally employed in this VSL task were replaced by a set of 12 child-friendly multi-colored aliens (please consult Appendix B, Fig. B.1 for the entire set). The VSL task consists of two phases: the familiarization is an online RT-based task during which the participant is exposed to the visual stimuli (i.e. 12 aliens) and its aim is to measure the online learning effect; the post-familiarization phase consists of two tasks that measure the presence and the magnitude of the offline learning effect.
2.2.1.1. Familiarization phase. During familiarization, the aliens were presented to the participant on a computer screen one by one. Unknown to the participant, the aliens were repeatedly arranged in the same groups of three (i.e. triplets). In total, four triplets were used in this task (referred to as ABC, DEF, GHI, JKL). The same triplet never repeated itself (i.e. disallowed: ABC, ABC) and the same rule applied to a pair of triplets (i.e. disallowed: ABC, DEF, ABC, DEF). The familiarization phase consisted of four blocks and each block contained twenty-four alien triplets (24 × 4 = 96 alien triplets, 288 individual aliens in total). During each block, the participant saw each of the four triplets six times. Apart from regular individual aliens, three stimuli per block contained a 'repeated alien', meaning that two aliens of the same type would appear one after the other (i.e. AABC, DEEF). In such cases, the participants were instructed to touch the screen where the second alien was situated in order to 'scare him off'. This procedure was included to ensure participants' attention throughout the experiment. The statistical structure of the task is reflected in the transitional probabilities (TP's) of the aliens. Thus, in the case of triplet ABC, given element A, the TP to element B is 1, and the same holds for the further transition to element C. The TP at a triplet boundary is low, as any of the other three triplets can follow. Therefore, while elements 2 and 3 (e.g. B and C) are always predictable, element 1 (e.g. A) is unpredictable. The same principle holds for the other triplets. See Fig. 1 for an illustration of the distinct TP's to predictable and unpredictable aliens. The familiarization phase constituted an online RT-based measure: participants determined for themselves the pace at which individual aliens were presented by pressing a button. After 200 ms, a new alien would appear.
In this manner, online sensitivity to the TP's of the aliens was reflected in the duration of the RT's. In the case of successful learning, the measured RT's to predictable aliens would be shorter than those to unpredictable aliens.
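The triplet structure and its repetition constraints can be illustrated with a short sketch. The Python snippet below is not the authors' stimulus code: it uses hypothetical letter stand-ins for the 12 aliens and, for simplicity, samples triplets randomly rather than enforcing the exact six-per-block balance of the real task. It builds one familiarization block under the two constraints and derives the empirical transitional probabilities.

```python
import random
from collections import Counter

# Hypothetical stand-ins for the 12 aliens: four triplets ABC, DEF, GHI, JKL.
TRIPLETS = [("A", "B", "C"), ("D", "E", "F"), ("G", "H", "I"), ("J", "K", "L")]

def build_block(n_triplets=24, seed=0):
    """One familiarization block: no triplet repeats immediately
    (disallowed: ABC, ABC) and no pair of triplets repeats immediately
    (disallowed: ABC, DEF, ABC, DEF)."""
    rng = random.Random(seed)
    order = []
    while len(order) < n_triplets:
        t = rng.choice(TRIPLETS)
        if order and t == order[-1]:
            continue  # would repeat the previous triplet
        if len(order) >= 3 and order[-1] == order[-3] and t == order[-2]:
            continue  # would repeat the previous pair of triplets
        order.append(t)
    # flatten to the stream of individual aliens shown one by one
    return [alien for triplet in order for alien in triplet]

def transitional_probabilities(stream):
    """Empirical TP(next | current) over the flattened stream."""
    pairs = Counter(zip(stream, stream[1:]))
    starts = Counter(stream[:-1])
    return {(a, b): c / starts[a] for (a, b), c in pairs.items()}
```

In a block built this way, the TP from a triplet-initial or medial alien to its successor (e.g. A to B, B to C) is always 1, while a triplet-final alien is followed by one of the three other triplet-initial aliens, so the TP at a triplet boundary stays low.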
2.2.1.2. Post-familiarization phase.
Once the familiarization phase had concluded, the participant completed the post-familiarization phase, which contained two separate sub-tasks: a pattern recognition alternative forced choice task (2-AFC) and a pattern completion alternative forced choice task (3-AFC). The tasks include grammatical, partially grammatical and ungrammatical (foil) alien triplets. Grammatical triplets appeared during the experiment, while foils were 'disallowed' alien sequences that did not appear in the familiarization phase. The 2-AFC task required participants to choose the correct pattern out of two presented alien orders; this task contained 24 questions. The 3-AFC task consisted of 16 questions, in which the participant had to fill a gap in a sequence of two or three aliens to mirror the order in which they were presented on the screen. The items in both tasks differ at two levels: number of distractors (the participant chooses among 2 or 3 aliens) and difficulty (the ungrammatical option is either partially grammatical or fully ungrammatical). Both the 2-AFC and 3-AFC tasks were introduced by one practice trial to ensure participants understood the purpose of the task. For examples of the 2-AFC and 3-AFC task stimuli, please consult Appendix B, Fig. B.2.
2.2.1.3. Procedure. The VSL task was programmed and run in E-Prime 2.0 software (Schneider et al., 2002). The experiment was run on a Microsoft Surface Pro 3 with a touchscreen and a keyboard. PWA were tested in their clinical settings. Control participants were tested at various locations in Moscow, Russia.
The participants were seated in front of a tablet and told that they would engage in a short computer game. In this task, it was their mission to send lost aliens back home to their mothership. Each alien was presented individually in the middle of the screen. In order to send each alien home, participants were instructed to press the green button with the left hand. The participants were also told that, on occasion, two aliens of the same type would like to travel one after another, but that this was disallowed and, in such cases, participants were to touch the screen where the alien was situated in order to scare him off. The participants were instructed to look carefully at aliens and pay attention to the order of the aliens, following the procedure in Siegelman et al. (2017) and van Witteloostuijn et al. (2019a). The familiarization phase was preceded by an instruction phase, during which the participant could practice the instructions and aims of the experiment. The instruction phase mimicked the actual familiarization phase, but did not include any of the 12 experimental aliens (i.e. additional 8 aliens were designed exclusively for the instruction phase). The familiarization consisted of four blocks, and after each block the experiment was paused and the participant could take a short break if necessary. Participants were not informed that additional post-familiarization tasks would follow. If, during the instruction phase, the experimenter believed that participants were pressing the green button inattentively or hurriedly, she instructed them to slow down and pay attention to the stimuli.
Once the familiarization phase was finished, the participant was informed that two short tasks would follow. Each task (2-AFC and 3-AFC) was explained and preceded by one practice trial. In these tasks, participants had to touch the screen in order to select the correct answer. A pilot session, conducted with two PWA who could not properly complete the experiment, revealed that PWA require a longer and more thorough instruction phase than their control counterparts. In order to ensure that all participants understood the instructions correctly, all aliens were printed and introduced to patients on paper in random order before the experiment commenced. If a patient showed persistent comprehension difficulties during the instruction phase, an additional recapitulation of the explanation was offered with the printed aliens.
Linguistic tasks
To examine our hypothesis, it was of interest to determine the syntactic and lexical deficits of PWA. Therefore, we tested PWA on two linguistic components: the syntactic measure, comprising sentence comprehension and sentence production tasks, and the lexical measure, comprising naming and word comprehension tasks. The comprehension and production scores were combined to derive one syntactic and one lexical measure of impairment for all PWA. Combining the production and comprehension scores was considered to provide a more comprehensive assessment of the degree of impairment in the given linguistic domains (i.e. syntactic and lexical). All tasks were taken from the Russian Aphasia Test (RAT), a standardized battery for clinical language assessment implemented on a tablet (Ivanova et al., 2016). Importantly, the RAT tasks that examined syntactic and lexical deficits in PWA were not used in the classification of aphasia, which was based solely on the comprehensive neuropsychological investigation (see section 2.1).
The sentence comprehension and sentence production tasks contained 24 experimental items each. These items were selected to capture varying degrees of complexity along several syntactic parameters: number of verb arguments (1-3), reversibility of the semantic roles of the verb, sentence type (simple vs. subordination), and word order (canonical vs. non-canonical). In the comprehension task, the participant had to match an auditorily presented sentence to one of two black-and-white pictures; the production task required the participant to describe a picture with a sentence. The tasks pertaining to the lexical component were naming and word comprehension. In the naming task, the participant had to name pictures in one word, responding to the questions "What is depicted?" (for objects) and "What is the person doing in the picture?" (for actions). In the word comprehension task, the participant had to match an auditorily presented word to the correct picture. The items were controlled for familiarity, visual complexity, image agreement, word imageability, age of acquisition and frequency.
The tests were administered using a Samsung Galaxy Tab A SM-T585 (2016) on the Android 7.0 platform, screen size 10.1ʺ, 1920 × 1200 px. In the sentence comprehension test, the accuracy of patients' responses was registered automatically. In the production tests, patients' vocal responses were automatically recorded by the same program and later analyzed offline by the examiner. Linguistic tests were administered within three weeks of the VSL task.
Data analysis
The VSL task provided two outcome variables: RT's from the familiarization phase and offline accuracy scores from the 2-AFC and 3-AFC tasks. These outcome variables were collected for all participants. Furthermore, linguistic data were collected for PWA, comprising two scores: syntactic impairment and lexical impairment. Our confirmatory research questions were answered using multiple measures; consequently, confidence intervals were Bonferroni-corrected for multiple testing: the alpha level was adjusted to 0.05/4 = 0.0125 (i.e. 98.75% CI's) for all confirmatory results. Exploratory results were not corrected (i.e. 95% CI's).
Online RT's
Following van Witteloostuijn et al. (2019a), the RT's to the first triplet in each block were eliminated. This was done to remove deviating RT's due to the pause and restart after each block. In order to account for variation in the individual speed of the experiment, all RT's were normalized following the exact procedure in van Witteloostuijn et al. (2019a). This was achieved by sorting all N observations in increasing order and replacing each observation by the (r - 0.5)/N quantile of the normal distribution, where r is the ranking number of the observation. This normalization resulted in optimally distributed z-values. All analyses were run on normalized RT data and the model estimates are expressed as changes in z-values (Δz) from one level of the predictor to the next (for details see van Witteloostuijn et al., 2019a). The analysis employed linear mixed-effects regression models, using the lmer function of the lme4 package (Bates et al., 2015) in R (R Core Team, 2013). Once the RT's were normalized, the residuals became normally distributed, which made the application of a linear mixed-effects model suitable. All continuous variables were scaled. The online sensitivity to the transitional probabilities in the task was expressed as the difference in RT's between unpredictable (i.e. alien 1) and predictable (i.e. aliens 2 and 3) aliens in interaction with Time (i.e. repetition of triplets across time). Predictability (categorical variable: element 1, element 2 and element 3) and Time (continuous variable, scaled) were within-participant predictors. With regard to our first research question, the interaction with the between-participant predictor Group was of interest. Predictability and Group were contrasted using backwards difference coding. Two backwards difference codings were applied to the predictor Group. The confirmatory coding examined whether Control participants and p-PWA performed significantly differently from f-PWA.
The exploratory coding examined whether all PWA (f-PWA and p-PWA) performed significantly worse than Control participants. A single model was thus used with the two codings. Random slopes were included by-Subject and by-Item.
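The rank-based normalization can be sketched in a few lines. The snippet below is a Python illustration of the (r - 0.5)/N quantile transform rather than the R pipeline actually used in the study, and it breaks ties by position for simplicity.

```python
from statistics import NormalDist

def rank_normalize(rts):
    """Replace each RT by the (r - 0.5)/N quantile of the standard normal
    distribution, where r is the rank of the observation in increasing order."""
    n = len(rts)
    order = sorted(range(n), key=lambda i: rts[i])  # indices from fastest to slowest
    ranks = [0] * n
    for r, i in enumerate(order, start=1):
        ranks[i] = r  # rank of each observation; ties broken by position
    nd = NormalDist()
    return [nd.inv_cdf((r - 0.5) / n) for r in ranks]
```

The transform preserves the ordering of the RT's while forcing their marginal distribution toward normality, which is why model estimates can then be read as changes in z-values (Δz).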
Offline score 2-AFC and 3-AFC tasks
Responses on both tasks were scored 0 for incorrect and 1 for correct responses. Accuracy was expressed as the percentage of correct responses. Initially, we examined whether each Group displayed learning by comparing performance against chance level. This analysis was carried out separately for each task, as chance levels differed: the 2-AFC task had a chance level of 0.500 and the 3-AFC task of 0.333. A one-sample t-test was run per task and per Group.
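The chance-level comparison described above can be sketched as follows. This Python snippet computes the one-sample t statistic by hand rather than via the routines used in the study, and the accuracy proportions are invented for illustration.

```python
from math import sqrt
from statistics import mean, stdev

def one_sample_t(scores, chance):
    """t statistic and degrees of freedom for testing whether mean
    accuracy differs from the chance level."""
    n = len(scores)
    se = stdev(scores) / sqrt(n)  # standard error of the mean
    return (mean(scores) - chance) / se, n - 1

# Invented accuracy proportions for five participants on a 2-AFC task
# (chance = 0.500); for 3-AFC the chance level would be 0.333.
t, df = one_sample_t([0.60, 0.70, 0.80, 0.50, 0.65], chance=0.500)
```

The resulting t is then referred to a t distribution with df degrees of freedom to obtain a p-value; for the invented data above, t ≈ 3.0 with df = 4.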
To answer our research question, whether the magnitude of the learning effect differed as a function of Group, we applied the glmer function from the lme4 package (Bates et al., 2015). The two tasks (2-AFC and 3-AFC) were modeled separately. Accuracy on each task was fitted as a function of Group. A single model with the two codings (confirmatory and exploratory; see Section 2.5.1) was used for each of the 2-AFC and 3-AFC tasks. The estimates from the generalized mixed-effects models were exponentiated for easier interpretation (see van Witteloostuijn et al., 2019a).
Offline score and linguistic measures
Our second research question addressed the correlation between linguistic impairment and visual statistical learning in aphasia. In order to determine whether the magnitude of the offline learning effect was related to the linguistic measures of PWA, we ran Pearson's product-moment correlations between the 2-AFC and 3-AFC task scores and the two linguistic measures: syntactic impairment and lexical impairment.
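The correlation and its Bonferroni-corrected confidence interval can be sketched as follows. This Python snippet uses the standard Fisher z transform for the interval, with the alpha level divided by the four confirmatory tests (0.05/4, i.e. a 98.75% CI); it is an illustration, not the study's own R code, and the data in the test are invented.

```python
from math import sqrt, atanh, tanh
from statistics import NormalDist

def pearson_r_ci(x, y, alpha=0.05 / 4):
    """Pearson's r with a Fisher-z confidence interval at a
    Bonferroni-corrected level (alpha = 0.05/4 -> 98.75% CI)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    r = sxy / sqrt(sxx * syy)
    # half-width of the interval on the Fisher z scale
    half = NormalDist().inv_cdf(1 - alpha / 2) / sqrt(n - 3)
    z = atanh(r)
    return r, (tanh(z - half), tanh(z + half))
```

Because the interval is built on the z scale and mapped back through tanh, its bounds always stay within (-1, 1), unlike a naive interval on r itself.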
Results
In the following sections, we present results aimed at answering our two research questions. The first section focuses on visual statistical learning abilities in two aphasic anatomical groups (f-PWA and p-PWA) and Control participants. The second section addresses the relationship between visual statistical learning and linguistic measures of f-PWA and p-PWA.
Familiarization phase: online RT's
Our first research question examined the effect of Group on sensitivity to the statistical regularities in the task. Fig. 2 illustrates the normalized RT's to predictable and unpredictable aliens across 24 repetitions in each Group. Mean raw and normalized RT's per Group are given in Table 1. Considering the potential Group differences, we find no evidence that p-PWA and Controls display a stronger learning effect than f-PWA (Δz = −0.031, 98.75% CI [−0.168 … 0.104], t = −0.584, p = 0.559). In light of these results, we examined whether Controls displayed a larger learning effect than all PWA together (see Online supplementary material for details of the contrast coding). This was confirmed; Control participants showed higher sensitivity to the statistical regularities of the task than PWA (Δz = 0.104, 95% CI [0.017 … 0.192], t = 2.366, p = 0.018). We conclude that there is evidence that non-brain-damaged participants show a better capacity to apprehend statistical regularities than PWA.

Table 2 summarizes the performance on the two offline tasks, 3-AFC (completion task) and 2-AFC (recognition task), indicating the mean and the standard deviation per group. The mean score on the 3-AFC task (chance level = 33%) was 64% in Control participants, 44% in f-PWA and 43% in p-PWA. The mean score on the 2-AFC task (chance level = 50%) was 73% in Control participants, 61% in f-PWA and 52% in p-PWA.
We now turn to the Group differences on the offline tasks. Fig. 3 shows the proportion of correct responses on both tasks per Group. There is no evidence that Controls and p-PWA scored significantly higher than f-PWA on the 2-AFC (log odds = +0.582, CI 98.75% = [0.121 … 2.604], p = 0.328) or 3-AFC task (log odds = …). We subsequently examined the exploratory research question whether Controls performed significantly better than the PWA together. We provide evidence for this hypothesis on both tasks. The Control group scored significantly higher than PWA on both the 2-AFC task (log odds = +3.217, CI 95% = [1.296 … 8.775], p = 0.013) and the 3-AFC task (log odds = +2.536, CI 95% = [1.196 … 5.786], p = 0.017). We conclude that non-brain-damaged individuals demonstrate a stronger statistical learning effect, as measured by the 2-AFC and 3-AFC tasks, than PWA with frontal and posterior lesions.

Table 2
Mean scores and standard deviations in 2-AFC (chance level = .5) and 3-AFC (chance level = .33) tasks in three participant groups: Controls, f-PWA, and p-PWA.
Offline tasks and linguistic measures
Participants' linguistic scores per task are given in Appendix A. Pearson's product-moment correlation coefficients were computed to assess the correlations between the scores on the VSL offline tasks 2-AFC and 3-AFC and the two linguistic measures (syntactic impairment and lexical impairment, as described in detail in Section 2.2). Fig. 4 illustrates the relationship between the linguistic scores and VSL performance on the 2-AFC and 3-AFC tasks. There was a weak but significant correlation between the 2-AFC score and syntactic impairment (r = 0.156, CI 98.75% = [0.015 … 0.290], p = 0.005). No correlation was found between the 2-AFC score and lexical impairment (r = 0.034, CI 98.75% = [−0.107 … 0.174], p = 0.542). Statistical results are presented in Table 3. There was no correlation between the 3-AFC task and any of the linguistic measures (Table 3). Individual correlations between each linguistic task (naming, word comprehension, sentence production, sentence comprehension) and the 2-AFC and 3-AFC tasks can be found in Appendix C, Table C.1.
Discussion
We applied a VSL task to test ISL in PWA with frontal or posterior lesions. We sought to validate two hypotheses: 1) the anatomical hypothesis, that frontal lesions lead to a more pronounced deficit in ISL mechanisms than posterior lesions in aphasia, and 2) the behavioral hypothesis, that syntactic processing and domain-general ISL mechanisms engage partially overlapping functions. The key observations regarding our first hypothesis are that: a) lesion location shows no relation to the magnitude of the learning effect on the VSL task (online and offline), and b) visual ISL is impaired, but not completely absent, in aphasia. The insight most pertinent to our second hypothesis is that the syntactic deficit in PWA weakly correlated with the size of the learning effect on the VSL task. Overall, behavioral-linguistic profiles were better able to explain performance on the VSL task than general anatomical profiles. We discuss these results in turn below.
Lesion location does not predict ISL abilities
Our first hypothesis posited that f-PWA would manifest more severe impairment on the VSL task than p-PWA and control participants. We sought to validate the anatomical hypothesis put forward in previous research (e.g. Christiansen et al., 2010; Udden et al., 2017; Zimmerer et al., 2014): that the frontal regions, and more specifically the LIFG region, are the locus of domain-general ISL mechanisms. This hypothesis was not supported in either the online or the offline tasks (Figs. 2 and 3). This result was particularly unexpected in light of the fact that previous neuroimaging research strongly implied that the LIFG region plays an appreciable role in the learning and processing of patterned regularities, with no activation of posterior regions reported (e.g. Karuza et al., 2013; Opitz and Friederici, 2004; Seger et al., 2000; Udden et al., 2017). There is limited research on ISL mechanisms in patients with aphasia other than Broca's aphasia. Zimmerer et al. (2014) tested PWA without syntactic impairment and concluded that their performance pattern was more similar to that of healthy controls. Unlike Zimmerer et al. (2014), we find no evidence of milder impairment in patients with posterior lesions compared to patients with frontal lesions. Our results reveal that non-brain-damaged individuals showed significantly better ISL than PWA, irrespective of lesion location. Notably, our results do not invalidate previous findings that patients with lesions in frontal regions show impairment in ISL abilities. However, we provide novel evidence suggesting that PWA with spared frontal regions also exhibit impairment in ISL mechanisms. This finding adds to the current understanding of the status of ISL abilities in aphasia.

Table 3
Pearson product-moment correlation between the 2-AFC and 3-AFC tasks and linguistic measures: syntactic measure score and lexical measure score. Results include size of correlation r, 98.75% CI, and p value.
Implicit-statistical learning is not absent in aphasia
With regard to the performance of PWA, our findings suggest that ISL abilities are not totally absent in cases of aphasia. The scores on the 2-AFC and 3-AFC tasks demonstrate a successful learning effect in PWA (Fig. 3). Some early studies on ISL in aphasia (Christiansen et al., 2010) have shown an absence of any learning effect on AGL tasks. More recent studies (Cope et al., 2017; Schuchard and Thompson, 2013; Schuchard et al., 2016), on the other hand, have evinced an impaired, but not completely absent, ISL mechanism in Broca's aphasia, which is in line with the results obtained in the present study. We posit several explanations for these mixed results: a) a significant difference in sample size, b) a difference in the structural complexity of the tasks, and c) differences in socio-demographics and lesion characteristics. Firstly, all early studies tested small numbers of PWA (n < 7), while Cope et al. (2017) tested 22 patients with aphasia and Schuchard et al. (2016) tested 10 patients. Our data testify to large individual variation (Fig. 3), which raises concerns regarding the validity of initial results with small samples in a population as heterogeneous as the aphasic one. Secondly, ISL tasks probe structures of varying complexity using a plethora of tasks and paradigms (see Fitch and Friederici, 2012 for a detailed discussion), which complicates cross-study comparison (Christiansen et al., 2010). We cannot reject the possibility that our finding of impaired yet present ISL abilities in patients with aphasia is due to the simpler task structure of the VSL task as compared to some AGL tasks. Due to the relatively simple underlying structure of the current VSL task, we were unable to study individual performance patterns, as is possible when investigating more complex grammars (e.g. Visser et al., 2009; Zimmerer et al., 2011, 2014).
The fact that ISL mechanisms were not completely absent in PWA carries potential implications for language re-learning in aphasia. PWA with intact or mildly impaired ISL mechanisms may benefit from implicit language therapy, as they are capable of detecting, encoding and exploiting patterned regularities in the environment.
Implicit-statistical learning and language impairment in aphasia
4.2.1. Implicit-statistical learning relates to linguistic measures
ISL mechanisms are considered to be recruited only in domains that require exploitation of patterned regularities, such as syntactic processing but not lexical recall. Consequently, our working hypothesis regarding the second research question was that ISL mechanisms would correlate with syntactic impairment, but not lexical impairment, in aphasia. For this reason, we tested all PWA on two language measures, syntactic and lexical abilities, to evaluate this hypothesis. Our findings revealed that the learning effect on the offline VSL 2-AFC task correlated with syntactic deficits in aphasia. While most ISL studies on aphasia have operated under hypotheses akin to our own, namely that deficits in syntax are directly linked to deficits in the ISL abilities of patients, their empirical evidence has consisted of a convergence of deficits (except for Dominey et al., 2003; the lack of methodological details in this study is discussed in section 1.4). We consider this a key finding of our study, in line with the direct link between syntactic processing and ISL mechanisms reported in healthy populations (e.g. Conway et al., 2010; Misyak and Christiansen, 2012). Furthermore, despite the overwhelming research focus on ISL and child language acquisition (Daltrozzo et al., 2017), we further corroborate that ISL mechanisms are important to adult language processing (see Kidd et al., 2018 for a detailed discussion).
While the correlation between the visual ISL task and syntactic deficits was significant, it was weak. We therefore suggest, cautiously, that our findings provide some evidence for models asserting that language deficits in aphasia are at least partially caused by domain-general mechanisms such as ISL, working memory, attention, or processing limitations (see Caplan, 2006 for a review).
Domain-generality of implicit-statistical learning
We want to emphasize that the magnitude of the learning effect on the VSL task correlated only with syntactic impairment in aphasia; we found no evidence of a correlation between the learning effect and lexical impairment. This finding supports the notion that, while domain-general ISL mechanisms support language processing in aphasia, they are recruited selectively in language modules that require exploitation of patterned regularities. Our findings are important for debates on the domain-generality of ISL functions (Frost et al., 2015; Siegelman et al., 2017). The current theoretical consensus is that ISL mechanisms operate as independent computational mechanisms but are subject to some domain constraints, pointing towards the coexistence of domain-general and domain-specific mechanisms (see Frost et al., 2015 for a discussion). Our data show a direct correlation between impairment in syntax and ISL abilities. We refrain from drawing strong conclusions of complete domain-independence of ISL mechanisms, as our evidence, based on two tasks, is insufficient in this regard. Our findings suggest that visual and linguistic ISL mechanisms partially overlap.
Brain behavior relations in aphasia
Interestingly, behavioral-linguistic profiles were better able to account for the learning effect on the VSL task than anatomical profiles in this study. We identified a behavioral overlap (as measured by the VSL and linguistic tests), but no anatomical overlap (as measured by the VSL task and lesion location). This is most likely caused by the heterogeneity of both the size and the exact location of the lesions. Our study included 13 patients, which is inadequate for a significant lesion overlap; furthermore, no matching based on lesion size was performed. Stroke-induced lesions in Broca's aphasia rarely affect discrete areas and tend to extend well beyond the language-associated Broca's area (Dronkers et al., 2004). Therefore, any conclusions regarding the localization of ISL mechanisms warrant cautious interpretation. We posit that coarse lesion information, such as in Christiansen et al. (2010), Dominey et al. (2003) or Zimmerer et al. (2014), is insufficient to draw conclusions regarding the localization of ISL mechanisms. Such conclusions should be relegated to fine-grained neurolinguistic methods, such as those applied in Cope et al. (2017), wherein lesion overlays with 3D atlases were used, or voxel-based lesion-symptom mapping (Bates et al., 2003).
Limitations of the study
Starting with the patient population, the most pertinent limitation of the present study is the small number of participants and the uneven group sizes. In the same vein, our study did not control for some relevant demographic variables, such as the age and education of PWA and controls. It has been suggested that, in cases of aphasia, these variables could influence severity and recovery time (Ellis and Urban, 2016).
Lastly, our study entails two clinical limitations: the differences in time post-stroke, and the differences in therapy stage. Unfortunately, these clinical factors were outside the scope of the current study. With regards to the first clinical limitation, the potential interaction of longitudinal aphasia changes (both positive and negative) with ISL mechanisms remains unexplored. Importantly, our study used a single time point measurement of two functions; therefore, we hypothesize that the effect of differences in time post-stroke has been minimal. Turning to the second clinical limitation, it is possible that differences in therapy stage diminished our results, as some patients have received substantially more language therapy targeting some of the relevant linguistic processes that we tested, which led to higher linguistic scores in some patients. Nonetheless, our results were significant. Future studies could include PWA that are not undergoing therapy.
Turning to the testing paradigm used, the VSL task is a popular task for measuring the capacity of individuals to apprehend statistical regularities from a given input (e.g. Daltrozzo et al., 2017; Siegelman et al., 2017). Our results show a discrepancy between the online and offline findings (this was not statistically tested). Fig. 3 shows that all participants (controls and PWA) demonstrated learning on the 3-AFC task; however, no online learning effect was detected at the level of the entire group (section 3.1). We argue, in accordance with previous research (e.g. Siegelman et al., 2016), that the online component may not be sufficiently sensitive to measure participants' learning of statistical regularities in the task. This is supported by the results in van Witteloostuijn et al. (2019a). It is also plausible that the handedness procedure (all participants, PWA and controls, were right-handed but instructed to complete the experiment with their left hand) induced excessive variation in the RTs and impeded detection of the learning effect.
In addition, we cannot discard the possibility that participants in our study resorted to verbal (e.g. lexical-semantic verbalization of alien features) or non-verbal (e.g. memorization) strategies during the online familiarization phase, as we did not administer an exit questionnaire to our participants. However, descriptive findings of the exit questionnaire from the original study by van Witteloostuijn et al. (2019a) failed to demonstrate any systematic strategy applied by children when completing this task. The stimuli in this study consist of only four triplets, and it could be argued that memorizing such a small number of triplets is feasible. Importantly, during stimulus presentation the participant does not see individual triplets but only a sequence of triplets (e.g. ABCGHIDEFABCJKLDEFJKLGHI). While any participant must memorize the internal structure of a triplet (e.g. ABC), this is not possible without disambiguation of the external boundary of each triplet (e.g. F-ABC-J), which we argue is accomplished via implicit-statistical learning. Nonetheless, future studies should strive to supplement the testing paradigms with exhaustive cognitive assessment of patients (e.g. verbal and non-verbal IQ, memory, processing speed, attention). It has been suggested that ISL mechanisms are of a 'multi-component nature' (Arciuli, 2016), and it is therefore plausible that unrelated cognitive dysfunctions of attention, processing speed or working memory have impacted our results.
Conclusion
In the present study we tested implicit-statistical learning in aphasia. We collected both anatomical (i.e. lesion location) and behavioural (i.e. linguistic deficits) data from all patients and compared them to the magnitude of the learning effect on a visual statistical learning task. We extend current theoretical knowledge of ISL in two ways. Our first key finding is that the hypothesized anatomical link between ISL mechanisms and lesion location remained unsupported: patients whose LIFG regions were spared, but whose posterior regions were in fact lesioned, nevertheless manifested impaired ISL mechanisms. Importantly, ISL mechanisms are not completely absent in aphasia. Our second key finding is that the magnitude of the learning effect on the VSL task weakly correlates with the impairment in syntactic abilities, but not with the impairment in lexical abilities. This is a pertinent result, as it suggests that a) domain-general ISL mechanisms at least partially overlap with mechanisms recruited in language processing, and b) these ISL mechanisms are recruited in the linguistic modules that require exploitation and manipulation of patterned regularities. This was first demonstrated in typical language processing (Conway et al., 2010; Daltrozzo et al., 2017), and we validate that the same correlation holds in impaired language processing in aphasia. Turning to possible clinical implications of our findings, it is promising that ISL mechanisms are not completely absent in aphasia. This holds potential for re-learning of damaged linguistic structures and subsequent exploitation of these structures in communication.
Role of funding source
The study was supported within the framework of a subsidy by the Russian Academic Excellence Project '5-100'.
Declaration of competing interest
None.
Parameter estimation and signal detection algorithm based on adaptive capture in non-cooperative communication
Introduction
In a TDMA communication system, signal transmission is discontinuous and bursty, which requires the receiver to accurately detect the signal and achieve fast synchronization. Especially in a non-cooperative communication system, because the carrier frequency is unknown, the signal may have a large frequency offset, even above 20% of the symbol rate. The frequency offset shifts the signal spectrum, so that in the Digital Down Converter the signal exceeds the passband of the matched filter; the baseband signal after the Digital Down Converter is then incomplete, which affects the demodulation of subsequent signals. Therefore, it is necessary to eliminate the frequency offset before digital down-conversion, while ensuring that the algorithm is convenient for hardware implementation and consumes few resources.
Usually, a known preamble sequence before the TDMA signal is used to estimate the synchronization parameters. Therefore, for the demodulation of TDMA signals, a combination of an open-loop estimation algorithm and a closed-loop algorithm is usually adopted. The general TDMA signal demodulation process in satellite communication is shown in Figure 1. Frequency estimation algorithms commonly used for TDMA signals fall into two categories: frequency-domain and time-domain estimation algorithms. Frequency-domain estimation algorithms carry out a rough frequency estimate by searching for the peak of the periodogram [1] and then use an interpolation algorithm for accurate estimation [2][3]. Aiming at large carrier frequency offsets, literature [4] proposes a large frequency offset estimation algorithm based on segmented FFT superposition, but it is not convenient for hardware implementation. Time-domain estimation algorithms mainly use the autocorrelation of the modulation-stripped signal to extract the frequency offset information. This type of algorithm is generally more complex, and its estimation range and estimation accuracy are mutually constrained [5][6]. Later, an improved time-domain algorithm based on correlation values was proposed, which has low complexity and a wide range [7], but it has special requirements for the data frame structure.
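The periodogram-peak coarse search mentioned above can be sketched as follows. This is an illustrative Python sketch, not the algorithm of [1]-[3]: the fourth-power modulation stripping assumes QPSK, and the signal, sample rate and offset are invented for the example.

```python
import numpy as np

def coarse_freq_estimate(x, fs):
    """Coarse frequency-offset estimate: locate the periodogram peak of the
    modulation-stripped signal. For QPSK, raising to the 4th power removes
    the modulation and concentrates energy at 4 * f_offset."""
    x4 = x ** 4                        # strip QPSK modulation (QPSK assumed)
    spec = np.abs(np.fft.fft(x4))
    k = int(np.argmax(spec))
    freqs = np.fft.fftfreq(len(x4), d=1.0 / fs)
    return freqs[k] / 4.0              # undo the 4th-power frequency scaling

# Sanity check on a noiseless QPSK-like burst with a 0.02*fs offset
fs, n = 1.0, 1024
t = np.arange(n)
sym = np.exp(1j * (np.pi / 4 + np.pi / 2 * np.random.randint(0, 4, n)))
x = sym * np.exp(2j * np.pi * 0.02 * t)
print(coarse_freq_estimate(x, fs))     # ≈ 0.02
```

The resolution of this coarse estimate is limited to one FFT bin (fs/(4N) after the fourth-power scaling), which is why interpolation-based refinement is applied afterwards.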
Existing detection algorithms for TDMA signals are divided into three categories: energy detection, frequency detection, and correlation detection. The basic principle of energy detection is to use short-time energy as the detection feature. When the TDMA preamble sequence cannot be obtained, energy detection can serve as an applicable signal detection algorithm. The problem is that this algorithm is sensitive to noise and has poor detection performance under low SNR.
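A minimal short-time energy detector illustrating the principle described above; the window length, threshold, and synthetic burst are arbitrary choices for the example, not values from the paper.

```python
import numpy as np

def energy_detect(x, win, threshold):
    """Short-time energy detector: flag windows whose average power exceeds
    a threshold. Returns the start index of the first flagged window, or -1."""
    p = np.abs(x) ** 2
    for start in range(0, len(x) - win + 1, win):
        if p[start:start + win].mean() > threshold:
            return start
    return -1

# Noise followed by a unit-amplitude burst starting at sample 256
rng = np.random.default_rng(1)
x = 0.1 * (rng.standard_normal(512) + 1j * rng.standard_normal(512))
x[256:] += np.exp(2j * np.pi * 0.05 * np.arange(256))
print(energy_detect(x, win=32, threshold=0.5))  # → 256
```

The weakness noted in the text is visible here: the threshold must sit well above the noise floor, so a low-SNR burst whose power barely exceeds the noise would be missed.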
Frequency-domain detection algorithms mainly include methods based on the cyclic spectrum [8], methods based on the amplitude spectrum [9], and the DFT-based Power-Law algorithm [10]. These algorithms can achieve good performance under low SNR, but their implementation complexity is relatively high. The correlation detection algorithm is applicable when the preamble sequence of the data is known, but it is sensitive to frequency offset, so it is not directly applicable here. In literature [11], a dual-correlation algorithm is proposed for detection; although it can realize signal detection under low SNR and frequency offset, its implementation complexity is relatively high.
In summary, current general signal detection and parameter estimation algorithms cannot cope with the influence of frequency offset on the signal in non-cooperative communication, and existing algorithms cannot balance complexity and accuracy. This paper proposes a burst-signal parameter estimation and detection algorithm based on adaptive capture, which can eliminate the influence of frequency offset in the Digital Down Converter. The theoretical basis of the algorithm is given, and extensive simulations prove that the algorithm achieves good estimation and detection performance under low signal-to-noise ratio and large frequency offset. In addition, it consumes few resources.
The algorithm has the following advantages: 1) High accuracy and wide range of frequency estimation.
2) The hardware implementation consumes less resources.
3) It can eliminate the influence of frequency offset in non-cooperative communication on the signal and keep track of frequency changes.
Principle of Digital Down Converter
Digital Down Conversion refers to transferring the effective spectrum of the intermediate frequency signal after A/D sampling to baseband through mixing, which can also adjust the data rate. In general, a Digital Down Converter consists of frequency conversion, filtering and resampling. In non-cooperative communication, there is a frequency offset after the Digital Down Converter since the carrier frequency is unknown. If the frequency offset is large, the signal spectrum will be shifted, so the filter module distorts the signal and the signal cannot be demodulated.
Principle of adaptive capture algorithm
Assuming that the transmission channel is an additive white Gaussian noise channel, the received signal model is

$x(n) = \sum_k a_k \, g(nT_s - kT)\, e^{j(2\pi f' n T_s + \theta)} + w(n)$,

where $a_k$ is the modulation information symbol, $g(t)$ is the forming filter (in general, a root-raised-cosine function is used for the forming filter in communication systems), $f'$ is the unknown frequency offset, $\theta$ is the phase offset, and $w(n)$ is additive Gaussian noise. Sampling at the symbol instants yields the simplified baseband signal expression

$x(n) = a_n \, e^{j(2\pi f' n T_s + \theta)} + w(n)$.   (4)

According to literature [12], the maximum-likelihood frequency estimate is the value of $f$ that maximizes the correlation magnitude between the counter-rotated received signal and the known preamble $c(n)$:

$\hat{f} = \arg\max_f \left| \sum_n x(n)\, c^*(n)\, e^{-j2\pi f n T_s} \right|$.   (5)

It can be seen from formula (5) that the accumulated value is largest when $f$ cancels the residual offset, which proves that the correlation value between the signal and the preamble sequence can be used as the basis for frequency capture. Following the cross-correlation principle, the correlation between the signal after the Digital Down Converter and the known preamble is evaluated at trial frequencies $f_c \pm n f_b$, where $f_b$ is the step value of one traversal; the correlation result $Y(i)$ takes its maximum value when the trial offset $\pm n f_b$ equals $f'$. Since the magnitude and direction of the unknown frequency offset are random, it is necessary to further verify the specific influence of the frequency offset on the correlation peak in order to establish the frequency direction and accuracy. To make the correlation value more prominent, the squared modulus of the correlation result is taken; for the correlation value at a certain point, $i$ and $\tau$ are fixed. Expanding the squared modulus and using the even symmetry of the cosine function shows that the peak is an even function of the residual offset, decreasing monotonically on either side of zero. Figure 2 shows the specific impact of the frequency offset on the correlation peak: within the monotonic range of normalized frequency offsets, the peak value is larger the closer the frequency offset is to 0.
Therefore, it is further proved that the correlation result between the signal and the preamble sequence can be used as the basis for frequency capture. What remains is to specify the capture algorithm itself, to ensure the accuracy and speed of frequency capture.
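The sliding-correlation capture basis can be illustrated with a small Python sketch. The toy preamble and trial frequencies are assumptions for the example; the point is only that the squared correlation magnitude peaks when the trial frequency matches the true offset.

```python
import numpy as np

def preamble_correlation(x, preamble, f_trial):
    """Squared magnitude of the correlation between the received signal
    (counter-rotated by a trial frequency) and the known preamble,
    maximized over alignment (sliding correlation)."""
    n = np.arange(len(x))
    y = x * np.exp(-2j * np.pi * f_trial * n)   # remove the trial offset
    best = 0.0
    for tau in range(len(x) - len(preamble) + 1):
        c = np.vdot(preamble, y[tau:tau + len(preamble)])
        best = max(best, abs(c) ** 2)
    return best

# The peak is largest when the trial frequency matches the true offset
pre = np.exp(1j * np.pi / 4 * np.array([1, 3, 5, 7] * 8))  # toy preamble
x = pre * np.exp(2j * np.pi * 0.03 * np.arange(len(pre)))  # true offset 0.03
vals = {f: preamble_correlation(x, pre, f) for f in (0.0, 0.03, 0.06)}
print(max(vals, key=vals.get))  # → 0.03
```

In the paper's scheme, this metric is evaluated at the stepped trial frequencies $f_c \pm n f_b$ rather than on a free grid.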
Capture algorithm
In actual communication, the frequency of burst signals drifts gradually, but this change is very slow. Therefore, we can assume that the frequency does not change during frequency capture. Ideally, one would take the beginning of a burst as one traversal and take the maximum of the correlation results as the decision basis. But in practice, the specific location and length of the burst cannot be obtained at this stage. Therefore, to ensure that every capture process contains a burst signal, a rough detection of the signal position is first performed through energy detection, and then a sliding correlation is made between the detected signal and the preamble sequence; the maximum value of the correlation results after one capture is kept. In principle, it is necessary to ensure that every capture process contains at least one burst.
Capture model
$f_b$ is the step value of the capture process. The ultimate goal is to search for the trial frequency that maximizes the correlation peak. For the sake of generality, the energy of $Y_n$ is normalized.
Rough estimate
1) The threshold $Th$ is set according to Figure 2, and the corresponding frequency is $f_{th}$. To ensure adequate capture performance under low signal-to-noise ratio, the threshold is set to 25% of the peak value through simulation and comparison.
2) The traversal starts with the initial DDC frequency $f_c$ as the initial value, i.e. $n = 0$. By comparing $Y_0$ with $Th$, it is judged whether the frequency offset lies inside or outside the threshold. Based on the monotonicity established above, the direction of the frequency offset is determined by searching in both directions.
3) Once the correlation peak satisfies $Y_n > Th$, the direction of the frequency offset is known, and the traversal continues in that direction to find the maximum peak. According to the monotonic influence of frequency on the correlation peak, stepping in the known direction must eventually yield $Y_n < Th$; capture stops when $Y_n < Th$.
We select the frequency corresponding to the largest peak as the rough frequency-offset estimate. To improve the capture speed for large frequency offsets and to ensure the reliability of the correlation value, $f_{th}$ is relatively large. This results in low frequency resolution and insufficient accuracy of the captured frequency value, which requires accurate estimation.
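The threshold-guided rough search above can be sketched as follows. This is a schematic illustration against a toy peak-shaped metric, not the paper's hardware state machine; the metric, step value and threshold are arbitrary assumptions.

```python
def rough_capture(metric, f_center, f_step, threshold, max_steps=50):
    """Step the trial frequency outward from f_center in both directions;
    once the correlation metric has crossed the threshold, keep stepping in
    that direction until it drops below it again, and return the best
    frequency found."""
    best_f, best_y = f_center, metric(f_center)
    for sign in (+1, -1):
        n = 1
        while n <= max_steps:
            f = f_center + sign * n * f_step
            y = metric(f)
            if y > best_y:
                best_f, best_y = f, y
            if best_y > threshold and y < threshold:
                break               # the peak has been passed in this direction
            n += 1
    return best_f

# Toy correlation metric with its peak at +0.07
peak = lambda f: 1.0 / (1.0 + (200 * (f - 0.07)) ** 2)
print(round(rough_capture(peak, 0.0, 0.01, threshold=0.25), 3))  # → 0.07
```

Stopping as soon as the metric falls back below the threshold is what limits the number of traversals compared with a full two-sided fine-step sweep.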
Accurate estimate
According to the previous analysis, the largest peak and its two adjacent peaks both contain frequency offset information. Therefore, the two peaks adjacent to the maximum peak can be used for interpolation to further refine the frequency offset estimate. Starting from equation (9) and applying the interpolation formula in [13], it can be proved that the resulting correction $\Delta\xi$ is an unbiased estimate of the remaining frequency offset. It should be noted that the interpolation formula is subject to a restriction on its range of validity.
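Since the exact interpolation formula of [13] is not reproduced here, the sketch below uses generic three-point parabolic interpolation over the largest peak and its two neighbours, which illustrates the same refinement idea.

```python
def parabolic_refine(y_m1, y_0, y_p1, f_0, f_step):
    """Quadratic interpolation through the maximum correlation peak y_0 at
    frequency f_0 and its two neighbours: returns the frequency of the
    fitted parabola's vertex."""
    denom = y_m1 - 2.0 * y_0 + y_p1
    if denom == 0.0:
        return f_0                       # flat triple: nothing to refine
    delta = 0.5 * (y_m1 - y_p1) / denom  # vertex offset in units of f_step
    return f_0 + delta * f_step

# Vertex of y = 1 - (f - 0.073)^2 recovered from samples at 0.06, 0.07, 0.08
g = lambda f: 1 - (f - 0.073) ** 2
print(round(parabolic_refine(g(0.06), g(0.07), g(0.08), 0.07, 0.01), 3))  # → 0.073
```

For an exactly parabolic peak the vertex is recovered exactly; for the actual correlation-peak shape the recovery is approximate, which is consistent with the validity restriction noted above.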
Signal detection
After the frequency is determined, the Digital Down Converter frequency is set to $f_c + f_{fin}$. At this point, the signal after the Digital Down Converter has its frequency offset eliminated, and the signal can be detected directly using equation (9). To ensure that the detection thresholds are uniform for signals of different amplitudes, the signal energy must be normalized. The signal detection decision then compares the normalized correlation value with a threshold.
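A minimal energy-normalized correlation detector in the spirit of the decision rule above; the toy preamble, the orthogonal test signal, and the threshold are assumptions for the example.

```python
import numpy as np

def detect_burst(x, preamble, threshold):
    """Energy-normalized correlation detector: declare a burst present when
    |<preamble, x>|^2 / (||preamble||^2 * ||x||^2) exceeds the threshold.
    The normalization makes the decision independent of signal amplitude."""
    c = np.vdot(preamble, x)
    norm = np.sum(np.abs(preamble) ** 2) * np.sum(np.abs(x) ** 2)
    return (abs(c) ** 2 / norm) > threshold

pre = np.exp(1j * np.pi / 2 * np.arange(16))      # tone at +fs/4
other = np.exp(-1j * np.pi / 2 * np.arange(16))   # orthogonal tone at -fs/4
print(detect_burst(pre, pre, 0.5), detect_burst(other, pre, 0.5))  # → True False
```

Because the metric is bounded in [0, 1] after normalization, a single threshold works for signals of any amplitude, which is the point made in the text.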
Use correlation results to find phase offset
The expression for data-aided phase offset estimation in [14] is based on the argument of the correlation between the received signal and the preamble. After frequency capture and signal detection, the residual frequency term in the correlation value is small enough to be ignored. Therefore, we can directly use the correlation value obtained during signal detection to estimate the phase offset: only the argument of the correlation value needs to be computed. No separate calculation is required, which reduces the computational complexity.
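Reusing the detection correlation value then amounts to a single argument computation, e.g. (toy preamble and rotation angle, for illustration only):

```python
import numpy as np

def phase_offset(x, preamble):
    """After frequency capture, the residual phase offset is simply the
    argument of the preamble correlation value."""
    return float(np.angle(np.vdot(preamble, x)))

pre = np.exp(1j * np.pi / 4 * np.array([1, 3, 5, 7, 1, 3, 5, 7]))
x = pre * np.exp(1j * 0.6)              # preamble rotated by 0.6 rad
print(round(phase_offset(x, pre), 3))   # → 0.6
```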
Frequency tracking
The adaptive-capture frequency estimation algorithm in this paper needs several traversals to capture the frequency before performing accurate estimation. It is assumed that the frequency does not change during the capture process, but the frequency will slowly change over time. Therefore, in practical applications the algorithm must be able to track the frequency after capture. The phase-locked loop is the core structure of closed-loop carrier synchronization algorithms, and its main function is to capture and track the signal frequency [15]. Therefore, on the basis of the algorithm in this paper, a phase-locked loop structure is applied in the frequency tracking stage: a phase detection module, a loop filter and a numerically controlled oscillator are added after the signal detection, and the numerically controlled oscillator is used to adjust the frequency of the Digital Down Converter module.
In this way, a closed-loop structure is formed as a whole to realize the tracking of the slowly changing frequency. The overall flow of the algorithm is shown in the figure below:
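The closed-loop tracking stage can be sketched as a PI-controlled NCO. This is a schematic sketch, not the paper's phase-locked loop implementation: the phase-detector model, the loop gains kp and ki, and the iteration count are all illustrative assumptions.

```python
def track_frequency(phase_detector, f_init, kp, ki, steps):
    """PI-controlled NCO: each step, the loop filter (proportional +
    accumulated integral of the phase error) adjusts the NCO frequency,
    driving it toward the slowly drifting input frequency."""
    f = f_init
    integ = 0.0
    for _ in range(steps):
        e = phase_detector(f)   # phase detector output for the current NCO frequency
        integ += ki * e         # integral branch of the loop filter
        f += kp * e + integ     # NCO frequency update
    return f

# Toy phase detector: error proportional to the remaining frequency offset
true_f = 0.031
detector = lambda f: true_f - f
print(round(track_frequency(detector, 0.0, 0.1, 0.01, 200), 4))  # → 0.031
```

With these gains the loop is stable (the error decays geometrically), so after the one-time adaptive capture the NCO frequency follows slow drift without re-running the full traversal, as described above.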
Performance evaluation
The simulation of the algorithm in this paper is mainly developed from two aspects: algorithm complexity and algorithm performance.
Algorithm complexity
The frequency estimation algorithm in this paper runs before signal detection. Although energy detection is used to find the beginning of the signal at this stage, the position of the valid data is not yet accurate. Whether using an FFT-based frequency estimation algorithm or the adaptive frequency capture algorithm of this paper, multiple sliding correlations are still needed to ensure accurate alignment between the preamble sequence and the valid data. In a hardware implementation, computing modules can be reused; therefore, we mainly compare the complexity of a single operation, analyzing it from the perspective of hardware resource consumption. According to the analysis in Table 1, the adaptive capture algorithm saves nearly 80% of the resources compared with the FFT frequency estimation algorithm when N = 128. However, the adaptive capture algorithm requires more data to capture than the FFT algorithm, which incurs a time cost. Since time is consumed only in the initial capture stage, this impact is negligible.
Comparison of operational complexity of signal detection module: When comparing the complexity with the signal detection algorithm against frequency offset in [11], the complexity of the frequency estimation module is taken as part of the calculation since the signal detection module of the adaptive acquisition algorithm performs signal detection after the frequency estimation module. Table 2 compares the operational complexity of the signal detection module.
Comparison of traversal counts of different capture algorithms: compared with the conventional stepping capture algorithm, the interpolation-based capture algorithm proposed in this paper reduces the number of steps. Conventional capture refers to setting a finer step value, traversing in both directions, and taking the frequency corresponding to the maximum value as the final frequency estimate. Table 3 compares the number of capture traversals of the two algorithms over different frequency offset ranges.
Simulation 1: Accuracy and range of frequency capture
The mean square error is used to measure the frequency estimation accuracy of the adaptive capture algorithm. The signal used is a single-carrier signal modulated by QPSK. The normalized frequency offset is set to 0.02 and 0.1 (normalized to the symbol rate). The simulation results are shown in Figures 4 and 5. They show that, for a normalized frequency offset of 0.02, the frequency estimation mean square error of the adaptive capture algorithm and of the L&R and Fitz algorithms approaches the MCRB, whereas the L&W algorithm has difficulty approaching the MCRB even at high signal-to-noise ratio. When the normalized frequency offset is 0.1, the L&R and Fitz algorithms can no longer perform frequency estimation because the offset exceeds their estimation range; although the estimation range of the L&W algorithm is larger, its estimation accuracy is poor. The mean square error of the adaptive capture algorithm remains close to the MCRB at a normalized frequency offset of 0.1. In theory, the frequency offset estimation range of the algorithm can reach ±25% of the sampling rate under four-times oversampling.
Simulation 2: The influence of pilot length on estimation accuracy
To verify the influence of the pilot length on the estimation accuracy of each algorithm, the pilot length is set to N = 100. The correlation function interval of the L&R and Fitz algorithms is taken as N/2. The number of simulations at each signal-to-noise ratio is 10,000.
Setting the normalized frequency offset to 0.02 (normalized to the symbol rate) gives the simulation results shown in Figure 6. Comparing Figure 6 with Figure 4, the adaptive capture algorithm and the L&R and Fitz algorithms still have the highest accuracy, and the overall trends of each algorithm remain unchanged, but the overall accuracy decreases slightly. This shows that increasing the pilot length improves the estimation accuracy of the algorithms.
Simulation 3: Signal detection probability
A burst signal modulated with QPSK is generated. The first 32 symbols are noise, followed by a 32-symbol preamble sequence and 936 symbols of valid data; three consecutive segments in this format are used for the signal detection performance simulation. The normalized frequency offset is set to 0.05, and the simulation is performed 10,000 times at each signal-to-noise ratio. The detection probability and false alarm probability are recorded.
It can be seen from Table 4 that, after the frequency offset is eliminated by the adaptive capture algorithm, the correlation algorithm, with its better noise robustness, can be used for burst signal detection, and the detection performance remains good even at low signal-to-noise ratio.
Simulation 4: Phase offset estimation accuracy
The mean square error is used to measure the phase offset estimation accuracy. The signal used is a single-carrier signal modulated by QPSK. To verify that the frequency estimation accuracy of the adaptive capture algorithm meets the requirements of phase offset estimation, and to assess the performance of the phase offset estimator, we use different frequency offset estimation algorithms to eliminate the frequency offset and then run the phase offset estimation simulation. The normalized frequency offset is set to 0.02 and the preamble length N to 128. The correlation function interval of the L&R algorithm is N/2, and 10,000 simulations are run at each SNR.
It can be seen from Figure 7 that using either the adaptive capture frequency offset estimate or the L&R algorithm to eliminate the frequency offset, and then using the intermediate correlation results for phase offset estimation, achieves high accuracy. This proves that the estimation accuracy of the adaptive capture frequency offset algorithm meets the requirements of phase offset estimation. The mean square error of the phase offset estimator is close to the MCRB.
Conclusion
This paper proposes a parameter estimation and signal detection algorithm based on adaptive capture for non-cooperative communication. The basic idea of the algorithm is to capture the frequency through repeated sliding correlations and take the frequency corresponding to the maximum correlation value as the rough estimate. An interpolation formula suited to this algorithm is then derived for accurate estimation. Once the frequency is found, the correlation result can be reused for signal detection and phase offset estimation with simple calculations. Finally, the entire parameter estimation and signal detection module forms a closed-loop structure based on the phase-locked loop principle to track the slowly varying frequency. The algorithm eliminates the frequency offset at the Digital Down Converter stage to prevent a large frequency offset from corrupting the signal. Simulations show that the estimation accuracy of the algorithm is close to the MCRB, its estimation range is wide, and it performs well under low SNR. The hardware implementation consumes few resources. Although multiple traversals are required to obtain the estimate, a complete frequency capture is only needed at the beginning; after an accurate estimate is obtained, the phase-locked structure is used for frequency tracking. Therefore, the algorithm in this paper is applicable to TDMA communication systems. This work was supported by the National Natural Science Foundation of China under Grant U1736107.
The impact of bank's diversity and inclusion index on profitability: evidence from Indonesia and Malaysia
This study aims to investigate the effects of the Diversity and Inclusion Rating (DIR) score on profitability, comparing conventional and Islamic banks. Employing the available DIR and ESG (Environmental, Social, and Governance) scores from the Refinitiv database, this study uses a dataset of 100 firm-year observations comprising both conventional and Islamic banks in Indonesia and Malaysia. We estimate a random-effects regression model with appropriate control variables as well as year and country dummies. The findings show a positive and significant association between DIR and both profitability ratios, ROA and ROE. For Islamic banks, however, DIR is negatively related to ROA and ROE for several reasons explained in this study, including the partial misalignment of the conventional Diversity & Inclusion proxy with Sharia principles.
Introduction
Islamic banks follow different principles: their banking activities must comply with Islamic teaching. Consequently, Islamic banks face more restrictions that limit their operations, prohibiting activities that are against Shariah principles, such as interest-based transactions and gambling. However, such constraints have not discouraged Islamic banks from developing strongly. According to ICD-Refinitiv (2022), Islamic banks have shown significant year-on-year growth: their worldwide assets rose from USD 1,603 billion in 2015 to USD 2,765 billion in 2021, a growth of 17% in 2021, and are projected to reach USD 4,025 billion in 2026 (ICD-Refinitiv, 2022).
Given the current development of Islamic banks, the question that emerges is what determines Islamic banking performance. Two general sets of determinants matter: financial and non-financial. For the former, many studies have examined the impact of financial factors on banking performance. Previous studies, including Zarrouk et al. (2016), reveal that the driving forces of Islamic banks' financial performance are the same as those of conventional banks, such as the level of capitalization, cost-effectiveness, and asset quality. Similar results are found by Khasawneh (2016), Saif-Alyousfi & Saha (2021), Ramlan & Adnan (2016), and Trad et al. (2017), who state that financial activities directly affect banking performance as reflected by either return on assets (ROA) or return on equity (ROE). These studies broadly agree that, empirically, financial performance matters for Islamic banking performance.
From the non-financial side, some studies argue that diversity and inclusion play a pivotal role in determining firms' performance (Cheong & Sinnakkannu, 2014; Bax, 2023). Focusing on the Islamic banking sector, Jabari & Muhamad (2020) highlight diversity on the board of directors (BOD) and the Shariah supervisory board (SSB), particularly the issue of women's representation on the board. Their study finds that Islamic banks with more diverse BODs and SSBs tend to perform better than less diverse Islamic banks. Additionally, bank size affects the level of diversity: larger banks tend to be more diverse, which ultimately results in better Islamic banking performance.
However, previous studies on firms' diversity and inclusion focus only on the BOD, including the Islamic banking study by Jabari & Muhamad (2020). Therefore, the question remains how diversity and inclusion in Islamic banking, beyond the BOD alone, affect Islamic banking performance. The question is important because, given the fast and significant development of Islamic banking, banks are becoming larger and their business activities may expand across many sectors and regions, which ultimately requires skilled human resources. In addition, Adams & Ferreira (2009) explain that diversity and inclusion have a positive impact on firm performance because they open the firm to potential talent regardless of background.
This study aims to examine the impact of diversity and inclusion on banking performance in Indonesia and Malaysia, while also observing its effect on Islamic banks in particular. These countries are selected for several reasons. Firstly, the region is one of the leading regions in the development of Islamic banking (ICD-Refinitiv, 2022). Secondly, Islamic banks in Indonesia and Malaysia have robust regulation and governance (Ibrahim & Law, 2019; Fakhrunnas et al., 2023). Thirdly, Indonesia and Malaysia are among the countries with the largest Muslim populations (Trinugroho et al., 2021), creating larger opportunities for Islamic banks to develop.
In more detail, the study intends to answer the following research questions (RQ):

RQ1: What is the influence of diversity and inclusion on a bank's return on assets?
RQ2: What is the influence of diversity and inclusion on a bank's return on equity?
RQ3: How do Islamic and conventional banks differ in the relationship between diversity and inclusion and financial performance?

Following this introduction, the study proceeds with a literature review, followed by the data and sample in the methodology section. The next section presents the results and discussion, and the paper ends with the conclusion and recommendations.
Resource Dependence Theory
According to Pfeffer & Salancik (1978), the key to organizational survival is the ability to acquire and maintain resources. The principle of resource dependence asserts that enterprises rely on resources in their surrounding environments to exist. These dependencies put businesses at risk. Businesses can cultivate links with the external bodies that govern those resources to lessen this reliance and its associated uncertainties. De Souza & Gama (2020) describe diversity and inclusion as two sides of the same coin: diversity is about the composition of a group or environment, while inclusion refers to the dynamics among the members of that group or environment. Some studies argue that diversity and inclusion play a pivotal role in determining firm performance (Bax, 2023; Cheong & Sinnakkannu, 2014). Understanding the influence of diversity and inclusion on firm performance has important implications for stakeholders. Resource dependence theory appreciates the strategic importance of stakeholders beyond the immediate shareholders in guaranteeing firms' access to resources through affiliation with various constituencies (Lawal, 2012).
In previous studies, resource dependence theory has been used to explain the influence of different types of diversity and inclusion on firm performance; however, these studies focus only on the BOD. The diversity of board members is believed to support resource dependence theory because diversity in skills, nationality, and gender brings varied experiences and perspectives on board, which together enhance the effectiveness of the board (Hillman et al., 2007). Diversity in board membership helps companies draw on diverse expertise in understanding and dealing with complex and uncertain external environments (Hillman et al., 2000). For example, gender diversity improves financial performance by giving access to a broader talent pool and expanding the variety of expertise available to the BOD, which in turn increases a firm's competitive advantage compared with less diversified firms (Kim & Starks, 2016). As for cultural diversity, Cox et al. (1991) state that cultural diversity in workforces brings value to organizations and ultimately improves their performance.
The Effect of Diversity & Inclusion Rating (DIR) on Return on Asset (ROA)
Studying the importance of gender and cultural diversity, social inclusion, and the personal and professional development of employees in companies headquartered in Europe, Suciu et al. (2020) found that the added value generated by employees, employee turnover, salary incentives, flexible work programs, employee satisfaction, and gender and cultural diversity are key factors with a significant positive impact on the financial performance of companies. Regarding gender diversity among Shariah Supervisory Board (SSB) members, Jabari & Muhamad (2020) found that women's presence and proportion on the SSB positively affect Islamic banks' return on average assets (ROAA). The results of that study provide evidence that women on the BOD bring a unique set of attitudes, perspectives, and values, which enhance Islamic banks' financial performance (Jabari & Muhamad, 2020). This empirical evidence is in line with the resource dependence view that diversity enhances firm financial performance. According to these findings, a higher DIR corresponds to increased ROA.

H1: Diversity and inclusion positively and significantly influence profitability as measured by ROA.
The Effect of Diversity & Inclusion Rating (DIR) on Return on Equity (ROE)
Return on equity (ROE) is a financial performance measure commonly used alongside return on assets (ROA). ROE describes how well a company uses its shareholders' equity to maximize their earnings. Kabir et al. (2023) examined the relationship between gender diversity and firm performance covering firms from 19 European countries from 2010 to 2020. The results show that gender diversity exerts a positive effect on firm performance (ROA and ROE). Research on gender diversity in Islamic banks by Jabari & Muhamad (2020) found that Islamic banks with a more gender-diverse BOD are expected to have better financial performance as measured by return on average equity (ROAE). Theoretical considerations based on the resource dependence perspective hold that gender diversity on the board generally improves the financial performance of corporations. According to these findings, a higher DIR corresponds to increased ROE.

H2: Diversity and inclusion positively and significantly influence profitability as measured by ROE.
Research Method
Data related to diversity, inclusion, and the financial performance of banks are extracted from the Refinitiv Eikon database. As Diversity and Inclusion Rating (DIR) data were only recently released by Refinitiv, this study selects a sample covering all banks in Indonesia and Malaysia with available DIR scores. This resulted in a total of 21 banks, of which 19 are conventional banks and 2 are full-fledged Islamic banks; of the 19 conventional banks, 11 provide Islamic window services. The data period spans 8 years (2015–2022). The details of the sample selection process are depicted in Table 1. The sample initially comprises 280 firm-year observations, which is then filtered for missing data on the financial and DIR variables, ultimately resulting in 100 final observations. This number is statistically acceptable as suggested by Harrell (2017), who recommends at least 10 observations per variable; here, we use only one independent variable, the Diversity and Inclusion Rating. Subsequently, to investigate the relationship between DIR and profitability, we conducted a panel regression based on the following equation:

PROFIT_it = α + β1 DIR_it + Σk γk Controls_k,it + ε_it (1)

Additionally, we also estimate the interaction between DIR and Shariah banks using the following equation:

PROFIT_it = α + β1 DIR_it + β2 SHARIAH_i + β3 (DIR_it × SHARIAH_i) + Σk γk Controls_k,it + ε_it (2)

where PROFIT is the dependent variable, measured by return on assets (ROA) and return on equity (ROE), and the independent variable is the bank's Diversity and Inclusion Rating (DIR).
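To illustrate the two specifications, the following sketch estimates them by ordinary least squares on synthetic data. All variable names, coefficients, and values are illustrative assumptions, not the study's actual data or estimates:

```python
import numpy as np

def ols(y, X):
    """Ordinary least squares coefficients via a least-squares solve."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

rng = np.random.default_rng(42)
n = 100                                        # firm-year observations, as in the study
dir_score = rng.uniform(20, 90, n)             # Diversity & Inclusion Rating (0-100 scale)
shariah = rng.integers(0, 2, n).astype(float)  # 1 = Islamic bank, 0 = conventional
controls = rng.normal(size=(n, 2))             # stand-ins for e.g. leverage, BOD size

# Synthetic data-generating process: DIR raises ROA overall, but the
# effect is weaker for Islamic banks (the paper's qualitative finding).
roa = (0.010 + 0.0004 * dir_score - 0.0006 * dir_score * shariah
       + controls @ np.array([0.002, -0.001]) + rng.normal(0, 0.002, n))

# Specification (1): PROFIT = a + b1*DIR + controls + e
X1 = np.column_stack([np.ones(n), dir_score, controls])
b_base = ols(roa, X1)

# Specification (2): adds the Shariah dummy and the DIR x SHARIAH interaction
X2 = np.column_stack([np.ones(n), dir_score, shariah, dir_score * shariah, controls])
b_int = ols(roa, X2)

print(f"DIR coefficient (baseline):  {b_base[1]:+.5f}")
print(f"DIR x Shariah interaction:   {b_int[3]:+.5f}")
```

On this synthetic panel, the interaction coefficient recovers the assumed negative Shariah-specific effect, mirroring how the second specification separates Islamic from conventional banks.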
We selected ROA and ROE as the dependent variables of this study because they are reliable accounting measures of a firm's financial performance and are widely used in past studies (Jabari & Muhamad, 2020; Moudud-Ul-Huq et al., 2018; Pathan, 2009). Other studies (Abdullah et al., 2016; Papangkorn et al., 2019) used market-based measures of financial performance, which do not apply to this study as only 2 of the 21 banks captured here are listed. For the independent variable, we employ the newly released DIR score from the Refinitiv database because it depicts a broader and more comprehensive range of factors beyond gender, such as cultural diversity, disability, and motherhood status. This study also employs several control variables, selected following prior literature primarily because of their likelihood of affecting banks' financial performance. First, we control for governance-specific characteristics, such as BOD size and BOD independence (Adams & Ferreira, 2009). Second, we control for firm-specific characteristics such as slack (Orazalin et al., 2023), capital intensity (Haque & Ntim, 2022), and the leverage ratio (Chadha & Sharma, 2015). Third, given the two-country data, we include country-level variables, comprising GDP (Boudawara et al., 2023), inflation (Jabari & Muhamad, 2020), and the country's governance quality (Orazalin et al., 2023). Detailed measurements and definitions of all variables can be found in Table 2.
In order to analyse the static panel data regression models of our research, we estimated both fixed effects and random effects models, including the appropriate control variables. We then tested the consistency of the random effects estimator using the standard Hausman test. The Hausman test statistic was insignificant, which implies that the random effects estimator is consistent and therefore more appropriate than the fixed effects estimator. Furthermore, to control for heteroscedasticity, we compute t-statistics using robust standard errors, as applied in previous studies (Alharasis et al., 2024; Orazalin et al., 2023). Moreover, our results are robust to alternative specifications of time- and geography-specific factors, such as year and country dummies (Elsayed & Paton, 2005; Haque & Ntim, 2022).
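The Hausman decision rule described above can be sketched as follows; the coefficient and covariance values here are purely illustrative, not the study's estimates:

```python
import numpy as np

def hausman_statistic(b_fe, b_re, V_fe, V_re):
    """Hausman test: H = d' (V_fe - V_re)^{-1} d with d = b_fe - b_re.
    Under H0 (random effects consistent and efficient), H follows a
    chi-square distribution with len(d) degrees of freedom."""
    d = np.asarray(b_fe) - np.asarray(b_re)
    return float(d @ np.linalg.inv(np.asarray(V_fe) - np.asarray(V_re)) @ d)

# Illustrative numbers only: FE and RE estimates that barely differ.
b_fe = np.array([0.0004, -0.0010])
b_re = np.array([0.0005, -0.0009])
V_fe = np.diag([4e-8, 4e-8])   # FE is less efficient: larger variance
V_re = np.diag([1e-8, 1e-8])

H = hausman_statistic(b_fe, b_re, V_fe, V_re)
print(f"Hausman statistic: {H:.3f}")  # compare with the chi2(2) 5% critical value, 5.99
```

A statistic below the critical value (as here) fails to reject the null, so the random effects estimator is retained, which is the conclusion the text reaches.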
Results and Discussion
Table 2 displays the descriptive statistics of all the variables included in the analysis, such as the Diversity and Inclusion Rating (DIR), return on assets (ROA), return on equity (ROE), gross domestic product (GDP), inflation, and the Worldwide Governance Index (WGI). As shown in Table 2, the mean value of DIR is relatively low at only 53.645 out of 100, meaning there is ample room for improvement in implementing diversity and inclusion values within the banking sectors of Malaysia and Indonesia. As for the profitability of the banking sector, ROE is relatively higher than ROA, with mean values of 0.1231 (12.31%) and 0.0153 (1.53%), respectively. Further, there are some negative values in both ROA and ROE, particularly in 2020, as an effect of the Covid-19 pandemic. Table 3 presents the pairwise correlation coefficients between the dependent, independent, and control variables; coefficients marked with * are significant at the 5% level. The correlation coefficients show that DIR is positively related to ROA and ROE. Further, except for the ROA-ROE pair, the absolute values of the coefficients range between 0.016 and 0.694, indicating no evidence of serious multicollinearity. To test the hypotheses, we run separate regressions for both measures of bank profitability (Table 4). Panels 1 and 2 show the regression results with ROA as the dependent variable, while Panels 3 and 4 show the results with ROE. In addition, to examine the effects specifically on Islamic banks, we follow Orazalin et al. (2023) and include a Shariah bank dummy variable, assigned a value of 0 for conventional banks and 1 for Islamic banks. The results for Islamic banks are depicted in Panels 2 and 4.
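The multicollinearity screen used above (checking whether any pairwise correlation becomes large) can be sketched as follows, on synthetic data with an illustrative flagging threshold of 0.8:

```python
import numpy as np

def flag_multicollinearity(data, names, threshold=0.8):
    """Return variable pairs whose absolute pairwise correlation exceeds
    the threshold, a common screen for serious multicollinearity."""
    corr = np.corrcoef(data, rowvar=False)
    flagged = []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            if abs(corr[i, j]) > threshold:
                flagged.append((names[i], names[j], round(corr[i, j], 3)))
    return flagged

rng = np.random.default_rng(1)
n = 100
dir_score = rng.normal(50, 15, n)
roa = 0.0003 * dir_score + rng.normal(0, 0.01, n)  # weakly related to DIR
leverage = rng.normal(0.8, 0.1, n)                 # unrelated control

data = np.column_stack([dir_score, roa, leverage])
print(flag_multicollinearity(data, ["DIR", "ROA", "LEV"]))  # no pair flagged
```

With moderate correlations like those reported in the text (maximum 0.694), no pair would be flagged at this threshold.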
Panels 1 and 3 show a positive and significant association between DIR and both profitability ratios, ROA and ROE. The results are statistically significant for these two models at p < 0.05. These findings support Hypotheses 1 and 2 and align with previous studies (Jabari et al., 2021; Cardillo et al., 2021; Aggarwal et al., 2019), which advocate that a greater level of diversity leads to better bank performance, as diverse members bring broader experience and expand the variety of expertise within firms. On this basis, we can also argue that the inclusion dimension added in this study could strengthen this positive effect. As suggested by resource dependence theory, gaining access to a larger talent pool, including people with disabilities, mothers with children, and others, would improve the financial performance of firms (Pfeffer & Salancik, 1978).
Meanwhile, for Islamic banks, DIR is negatively related to ROA and ROE (significant at the 10% and 5% levels, respectively). This inconsistent result may be caused by low-performing behaviour (Dankwano & Hassan, 2018) arising from potentially improper management and an initial culture of homogeneity in the developing economies where Islamic banks mainly operate. In this regard, group members who differ from the majority tend to have lower levels of psychological commitment and higher levels of turnover intent and absenteeism (Marimuthu & Kolandaisamy, 2009). Hence, several studies suggest that heterogeneity tends to lead to conflicts and reduces the effectiveness of communication within firms (Pelled et al., 1999; Amason, 1996; Carpenter, 2002). Furthermore, based on critical mass theory, the positive effect of diversity can only be realised if the number of women exceeds a certain threshold allowing them to lead change (Arena et al., 2015; Joecks et al., 2013). Given the relatively low implementation of diversity and inclusion in the banking sectors of Indonesia and Malaysia, we argue that the benefits of diversity have yet to materialise at this stage. Furthermore, some minor indicators within the Diversity and Inclusion rating, such as the US LGBT Equality Index, do not conform with Islamic principles. Hence, this study suggests that future scholars could construct a novel diversity and inclusion index that is fully aligned with Islamic principles, and hence reflects the true image of Islamic banks, and further investigate its effects on other observable variables.
Conclusion
This study examines the impact of DIR on banking performance as reflected by ROA and ROE. The findings reveal that DIR has a positive and significant relationship with banking performance, indicating that hypotheses H1 and H2 are accepted. However, Islamic banks do not benefit from DIR in terms of increased banking performance. This suggests that diversity and inclusion, as currently measured, are not essential in improving Islamic banks' financial performance, since the current DIR measurement does not fully align with Shariah principles. Thus, Islamic banks are advised not to blindly follow DIR as currently structured.
The policy implications are twofold. Firstly, from the regulator's viewpoint, financial authorities need not regulate diversity and inclusion as structured above in the Indonesian and Malaysian banking sectors, considering the presence of Islamic banks: regulating DIR in its current form could potentially reduce Islamic banking performance. Secondly, from the perspective of bank practitioners, diversity and inclusion are inseparable from the development of Islamic banks; however, in the case of Islamic banks, they must align with Shariah principles so that they are not counterproductive to banking performance.
Finally, we acknowledge that the study has limitations. First, the sample size is small due to the limited availability of the recently released DIR scores, which constrains historical analysis and a comprehensive understanding of long-term trends. Nevertheless, the sample passes the minimum statistical requirements, and we believe this timely observation provides an important basis for future studies to develop this research model with a larger sample, capturing a broader impact of DIR on the banking sector. With a larger sample, more advanced estimation can be used, especially dynamic panel models that can fully address certain econometric issues in the model. Second, this study focuses only on Indonesia and Malaysia; hence, its findings may not be generalisable to other countries or regions where data collection and reporting standards vary. Lastly, further research is needed to construct a completely novel DIR measurement that is fully in line with Shariah principles and/or to investigate the specific mechanisms through which diversity and inclusion influence Islamic bank performance and how these factors can be aligned with Shariah principles.
Table 1.
Sample selection
Table 2.
Variables Description and Sources
"year": 2024,
"sha1": "cf1167b6fe831c1543bbc4718b8e50006c18551c",
"oa_license": "CCBYSA",
"oa_url": "https://journal.uii.ac.id/JCA/article/download/32161/16705",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "41c197ac2f731eccb00fe63418354999803f688b",
"s2fieldsofstudy": [
"Business",
"Economics"
],
"extfieldsofstudy": []
} |
246635667 | pes2o/s2orc | v3-fos-license | Trafficking regulator of GLUT4-1 (TRARG1) is a GSK3 substrate
Trafficking regulator of GLUT4-1, TRARG1, positively regulates insulin-stimulated GLUT4 trafficking and insulin sensitivity. However, the mechanism(s) by which this occurs remain(s) unclear. Using biochemical and mass spectrometry analyses we found that TRARG1 is dephosphorylated in response to insulin in a PI3K/Akt-dependent manner and is a novel substrate for GSK3. Priming phosphorylation of murine TRARG1 at serine 84 allows for GSK3-directed phosphorylation at serines 72, 76 and 80. A similar pattern of phosphorylation was observed in human TRARG1, suggesting that our findings are translatable to human TRARG1. Pharmacological inhibition of GSK3 increased cell surface GLUT4 in cells stimulated with a submaximal insulin dose, and this was impaired following Trarg1 knockdown, suggesting that TRARG1 acts as a GSK3-mediated regulator in GLUT4 trafficking. These data place TRARG1 within the insulin signaling network and provide insights into how GSK3 regulates GLUT4 trafficking in adipocytes.
Introduction
Insulin maintains whole-body glucose homeostasis by regulating glucose transport into muscle and adipose tissue and by inhibiting glucose output from the liver. Defects in these processes cause insulin resistance, which is a precursor to metabolic disorders including Type 2 diabetes. Insulin signaling augments glucose transport into adipose tissue and muscle by promoting the translocation of the glucose transporter GLUT4 from a specialized intracellular storage compartment (GLUT4 storage vesicles; GSVs) to the plasma membrane (PM) [1]. Previous proteome studies of intracellular membranes highly enriched in GLUT4 have revealed major GSV resident proteins including TBC1D4 (AS160), Sortilin, IRAP, LRP1, VAMP2 [2-4] and Trafficking regulator of GLUT4-1 (TRARG1). TRARG1 positively regulates insulin-stimulated glucose transport and GLUT4 trafficking [5,6]. However, the mechanisms by which insulin signals to TRARG1, or by which TRARG1 promotes insulin sensitivity, remain unknown.
The most characterized signaling pathway relevant to GLUT4 trafficking is the PI3K/AKT axis, which is activated by a series of upstream signaling events triggered by the binding of insulin to its receptor at the cell surface. AKT is necessary and sufficient for insulin-stimulated GLUT4 trafficking [7], and is thought to act predominantly via phosphorylation of the Rab GTPase-activating protein (GAP), TBC1D4. Phosphorylation of TBC1D4 inhibits its GAP activity and permits its target Rab GTPases to mediate the translocation of GSVs to the PM [8]. However, loss of TBC1D4 or its cognate Rab proteins did not completely mimic or inhibit insulin-stimulated GLUT4 translocation [9-11], suggesting that there may be additional sites of insulin action within the GLUT4 trafficking pathway [12].
In addition to phosphorylating substrates such as TBC1D4, AKT also modulates cellular metabolism through the increased or diminished activity of other kinases. For example, AKT directly phosphorylates glycogen synthase kinase 3 (GSK3), which inhibits its kinase activity [13] and leads to dephosphorylation and activation of glycogen synthase (GS) and thereby glycogen synthesis. GSK3 is a unique kinase in that 1) it is constitutively active and inhibited in response to extracellular stimulation, and 2) its substrates usually need priming by another kinase [14]. Given the role of GSK3 in glycogen synthesis, it has generally been thought that GSK3 mainly participates in glucose metabolism via regulation of glycogen synthesis [15-19]. However, although there are conflicting reports [20-22], evidence from studies using small molecules to acutely inactivate GSK3 shows that GSK3 may also regulate glucose transport and GLUT4 translocation [21,22].
In the present study, we identified a range of post-translational modifications (PTMs) on TRARG1 and integrated TRARG1 into the insulin signaling pathway by showing that insulin promotes TRARG1 dephosphorylation via PI3K/AKT. Specifically, we report that TRARG1 is a novel direct substrate of the protein kinase GSK3, which phosphorylates murine TRARG1 at three sites within a highly phosphorylated patch between residues 70 and 90 in the cytosolic portion of TRARG1. Further, submaximal insulin-stimulated GLUT4 trafficking was potentiated by GSK3 inhibition, but this potentiation was impaired by simultaneous Trarg1 knockdown. Overall, our findings reveal new information on how TRARG1 is regulated by insulin signaling and suggest that TRARG1 may provide a link between GSK3 and GLUT4 trafficking.
TRARG1 phosphorylation alters its migration in SDS-PAGE
We previously reported that a proportion of TRARG1 exhibits reduced electrophoretic mobility following separation by SDS-PAGE. These apparent higher molecular weight species of TRARG1 were enriched in the PM but not the low or high density microsomal (LDM, HDM) fractions (Fig. 1A) [5]. Given the role of TRARG1 in GLUT4 trafficking, we sought to identify the cause of the altered electrophoretic mobility of TRARG1, with the aim of providing insight into how TRARG1 regulates GLUT4. We hypothesized that these apparent higher molecular weight TRARG1 bands are due to PTM of TRARG1 because: 1) they were all reduced in intensity upon knock-down of TRARG1 using different sets of siRNAs [5]; and 2) they are not splice variants, as the cDNA of TRARG1, which does not contain any introns, also generates multiple bands when transfected into HEK-293E cells (Fig. 1B).
We first used mass spectrometry (MS) analysis to identify PTMs on immunoprecipitated HA-tagged TRARG1 (HA-TRARG1) expressed in HEK-293E cells. This revealed multiple PTM types on murine TRARG1 including phosphorylation, methylation, oxidation, ubiquitination (or other ubiquitin-like modifications, as indicated by peptides containing di-Gly modifications) and acetylation (Fig. 1C, Supplemental table S1). This analysis revealed TRARG1 to be extensively post-translationally modified, with a particularly high number of phosphorylated sites within a short cytosolic region between residues 70 and 90. To test whether these specific PTMs affected TRARG1 mobility in SDS-PAGE, we generated a series of HA-TRARG1 mutants (Fig. 1D), including Ser/Thr to Ala or Glu mutants, where Ala prevents and Glu mimics phosphorylation [23], a Lys-null mutant (K→R) to prevent ubiquitination or acetylation of Lys, and a Cys-null mutant (C→S) to prevent palmitoylation of Cys. Of these mutants, only the Ser/Thr to Ala mutant expressed in HEK-293E cells or 3T3-L1 adipocytes exhibited a similar electrophoretic mobility to the lower band of HA-TRARG1, while the Ser/Thr to Glu mutant migrated to a similar position as the higher bands of HA-TRARG1 (Fig. 1E-F). These data suggest that the apparent higher molecular weight TRARG1 bands depend on PTMs of Ser/Thr, but not Lys or Cys.
Consistent with these data, treatment of lysates from 3T3-L1 adipocytes (Fig. 1G), HEK-293E cells transfected with HA-TRARG1 (Fig. 1H) or adipose tissue (Fig. 1I) with Lambda protein phosphatase (LPP) in vitro completely removed the apparent higher molecular weight TRARG1 bands. We note that the apparent higher molecular weight TRARG1 bands were only present in epididymal and subcutaneous white adipose depots, but not in brown adipose tissue, as previously observed [5] (Fig. 1I). Further, acute treatment of 3T3-L1 adipocytes with the phosphatase inhibitors Calyculin A or Okadaic Acid decreased TRARG1 migration in SDS-PAGE (Fig. 1J-K). Therefore, TRARG1 is extensively post-translationally modified, with phosphorylation reducing TRARG1 mobility by SDS-PAGE. Additionally, the ratio of apparent higher molecular weight TRARG1 signal to total TRARG1 signal by immunoblotting can serve as a proxy metric for TRARG1 phosphorylation (Fig. 1K).
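The proxy metric described above can be computed directly from immunoblot densitometry. The following sketch uses hypothetical band intensities (arbitrary units, not the study's measurements):

```python
def phospho_ratio(upper_band, lower_band):
    """Proxy for TRARG1 phosphorylation status from immunoblot
    densitometry: higher molecular weight (phosphorylated) signal
    over total TRARG1 signal."""
    total = upper_band + lower_band
    if total == 0:
        raise ValueError("no TRARG1 signal detected")
    return upper_band / total

# Hypothetical densitometry values (arbitrary units):
basal = phospho_ratio(upper_band=600, lower_band=400)    # 0.60
insulin = phospho_ratio(upper_band=200, lower_band=800)  # 0.20
print(f"basal {basal:.2f} -> insulin {insulin:.2f}: net dephosphorylation")
```

A fall in this ratio after a treatment (as in the hypothetical insulin condition here) is read as net dephosphorylation of TRARG1.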
TRARG1 is dephosphorylated with insulin in a PI3K/AKT-dependent manner
Given the role of TRARG1 in adipocyte insulin responses [5,6], we next tested whether TRARG1 phosphorylation was regulated by insulin signaling. Using the ratio of apparent higher molecular weight TRARG1 versus total TRARG1 as a proxy for TRARG1 phosphorylation, we detected a decrease in TRARG1 phosphorylation in 3T3-L1 adipocytes stimulated with insulin (Fig. 2A-B). This decrease was blocked by PI3K (wortmannin) or AKT inhibition (MK-2206), but not by mTORC1 (rapamycin) or ERK1/2 inhibition (GDC-0994) (Fig. 2A-B). As evidence for the efficacy of kinase inhibition, phosphorylation of AKT S473 and TBC1D4 T642 was impaired by wortmannin and MK-2206; 4EBP1 S65 by wortmannin, MK-2206 and rapamycin; and p90RSK T359/S363 by GDC-0994 (Fig. 2A). These data indicate that TRARG1 is dephosphorylated in response to insulin in a PI3K/AKT-dependent manner. To complement this, we mined data from a SILAC-based, global phosphoproteomic analysis of insulin-stimulated 3T3-L1 adipocytes previously published by our laboratory [24]. These data revealed five phosphosites on TRARG1 (Ser45, Ser76, Ser79, Ser80 and Ser90) downregulated with insulin treatment compared to basal conditions (log2FC ≤ -0.58) (Fig. 2C). The most insulin-sensitive sites were Ser76, Ser79 and Ser80 (Fig. 2C). AKT inhibition using MK-2206 before insulin stimulation increased phosphorylation at these sites compared to insulin stimulation alone (Fig. 2C). Together, these data indicate that phosphosites on TRARG1 are dephosphorylated in response to insulin in a PI3K/AKT-dependent manner, and that target sites S76 and S80 appear to be most affected.
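The site-level filtering used in this kind of phosphoproteomic mining can be sketched as below. The site table and all log2 fold-change values are mock numbers chosen only to reproduce the qualitative pattern described in the text, not the published dataset:

```python
# Mock phosphoproteomics output; all log2 fold changes are illustrative.
sites = {
    # site: (log2FC insulin vs basal, log2FC MK-2206+insulin vs insulin)
    "S45": (-0.7, 0.2),
    "S72": (-0.1, 0.0),   # not insulin-regulated in this mock table
    "S76": (-1.4, 0.9),
    "S79": (-1.1, 0.6),
    "S80": (-1.5, 1.0),
    "S90": (-0.6, 0.1),
}

THRESHOLD = -0.58   # cut-off used in the text for insulin-downregulated sites

insulin_down = {s for s, (ins, _) in sites.items() if ins <= THRESHOLD}
akt_dependent = {s for s, (ins, mk) in sites.items()
                 if ins <= THRESHOLD and mk > 0.5}   # rescued by AKT inhibition

print(sorted(insulin_down))   # sites dephosphorylated with insulin
print(sorted(akt_dependent))  # subset rescued by MK-2206 (AKT-dependent)
```

With these mock values, five sites pass the insulin-downregulation cut-off and the three most insulin-sensitive sites are flagged as AKT-dependent, mirroring the pattern in Fig. 2C.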
TRARG1 is a novel substrate of GSK3
To elucidate the chain of events by which insulin decreases TRARG1 phosphorylation, we next aimed to identify the kinase responsible for TRARG1 phosphorylation. Here, we considered PKA and GSK3, both kinases that are deactivated by insulin. Further, both kinases have consensus motifs consistent with phosphorylation of TRARG1 between residues 70 and 90 (PKA: R-R-X-S/T, residues 81-84; GSK3: S/T-X-X-X-S/T, residues 70-74, 72-76, 76-80 and 80-84). To test for the involvement of PKA in TRARG1 phosphorylation, we treated 3T3-L1 adipocytes with or without insulin and/or isoproterenol, a β-adrenergic receptor agonist that potently activates PKA. Insulin stimulation decreased phosphorylation of both TRARG1 and HSL, a PKA substrate (Fig. 3A-B). Isoproterenol increased HSL phosphorylation in the absence and presence of insulin, but not TRARG1 phosphorylation, suggesting that TRARG1 is not a PKA substrate. In contrast, the GSK3 inhibitors CHIR99021 and LY2090314, like insulin, decreased phosphorylation of glycogen synthase at S641, a GSK3 substrate, and of TRARG1 (Fig. 3C-D). The same effect was observed for endogenous and transfected TRARG1 in six other cell types or tissues. Endogenous TRARG1 in epididymal (EWAT) (Fig. 3E-F) and subcutaneous (SWAT) fat explants (Fig. S1A-B), and in cultured human SGBS adipocytes (Fig. S1C), was dephosphorylated in response to insulin or GSK3 inhibitors. Similarly, ectopic TRARG1 was dephosphorylated by insulin or GSK3 inhibitors in L6 myotubes (Fig. S1D), and by GSK3 inhibitors in HEK-293E and HeLa cells (Fig. S1E-F).
To complement this pharmacological approach, and to test for a GSK3 isoform-specific effect on TRARG1 phosphorylation, we knocked down Gsk3α and/or Gsk3β in 3T3-L1 adipocytes and assessed the phosphorylation status of TRARG1 (Fig. 3G-H). Phosphorylation of glycogen synthase was only significantly decreased upon knock-down of both Gsk3 isoforms, although the effect was subtle, perhaps due to residual GSK3 expression (Fig. 3G & I). Nevertheless, phosphorylation of TRARG1 was significantly decreased under conditions where Gsk3β was depleted (Fig. 3G-H), whereas knockdown of Gsk3α alone did not affect TRARG1 phosphorylation (Fig. 3G-H). These data suggest that TRARG1 is predominantly phosphorylated by GSK3β in 3T3-L1 adipocytes.
We next utilized an in vitro assay to test whether GSK3 phosphorylates TRARG1 directly. Endogenous TRARG1 was immunoprecipitated from serum-starved 3T3-L1 adipocytes treated with LY2090314, ensuring that the majority of TRARG1 was dephosphorylated while the priming site remained phosphorylated, and was then incubated with recombinant GSK3. Incubation with either GSK3 isoform decreased TRARG1 electrophoretic mobility (Fig. 3J), suggesting that TRARG1 is phosphorylated by GSK3 in vitro. Together, these data suggest that TRARG1 phosphorylation is regulated by insulin through the PI3K-AKT-GSK3 axis.
Identification of GSK3 target sites on TRARG1
In general, GSK3 has a substrate consensus motif in which it phosphorylates a serine or threonine residue when a pre-phosphorylated (primed by a different kinase) serine/threonine lies four residues C-terminal to the target site (Fig. 4A) [14]. Substrates of GSK3 often contain three or four adjacent S/T-X-X-X-pS/T motifs, allowing GSK3 to phosphorylate every fourth residue in a string of sequential sites as it creates its own primed site [14]. To identify the GSK3 sites on TRARG1, we first performed a phosphoproteomic study with the GSK3 inhibitor LY2090314. As expected, phosphorylation of the known GSK3 sites on glycogen synthase (S649, S645 and S641) was significantly decreased with GSK3 inhibition (Fig. 4B). On TRARG1, S72 was the only site significantly dephosphorylated following GSK3 inhibition among the sites detected (Fig. 4B). Although phosphorylation of S80 and S76 was not detected in this study using a GSK3 inhibitor (Fig. 4B), both sites were decreased by insulin and rescued by an AKT inhibitor (Fig. 2C), which matches the attributes of GSK3 target sites. Therefore, these datasets together revealed a putative string of sequential GSK3 target sites on TRARG1 at T88, S84, S80, S76 and S72. Of note, S72 was not regulated by insulin in the phosphoproteomic analysis described in Fig. 2C [24], although it was clearly downregulated in response to GSK3 inhibition (Fig. 4B).
Since phosphorylation of neither S84 nor T88 was significantly regulated by the GSK3 inhibitor, we hypothesized that S84 or T88 is the priming site (pre-phosphorylated by a different kinase). To test this, we expressed murine TRARG1 phospho-mutants with the Ser/Thr residues at positions 79/80, 84, 85, or 88/89/90 mutated to Ala in HEK-293E cells. Only TRARG1 with S84 mutated to Ala lost the apparent higher molecular weight bands (Fig. 4C), suggesting that S84 is the priming site on murine TRARG1 allowing its subsequent phosphorylation by GSK3 at multiple sites (S80, S76 and S72), which alters TRARG1 migration in SDS-PAGE. In support of this, murine TRARG1 with S72/76/80 mutated to Ala completely lost the apparent higher molecular weight bands, similar to the S84A mutant (Fig. 4D).
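The priming-then-walk logic of the GSK3 motif (each newly phosphorylated residue primes the Ser/Thr four residues N-terminal to it) can be expressed as a simple scan. The sequence below is hypothetical; only the Ser positions mirror the murine TRARG1 sites discussed in the text:

```python
def predict_gsk3_sites(seq, priming_pos):
    """Given a primed Ser/Thr (1-based position), walk N-terminally in
    steps of 4 residues (motif S/T-X-X-X-pS/T), collecting putative GSK3
    target sites while the chain of Ser/Thr residues is unbroken."""
    sites = []
    pos = priming_pos - 4
    while pos >= 1 and seq[pos - 1] in "ST":
        sites.append(pos)
        pos -= 4
    return sites

# Hypothetical 100-residue sequence: 'S' placed only at the murine TRARG1
# positions discussed in the text (72, 76, 80 and the priming site 84).
seq = "".join("S" if i in (72, 76, 80, 84) else "A" for i in range(1, 101))

print(predict_gsk3_sites(seq, priming_pos=84))  # [80, 76, 72]
```

With S84 as the primed residue, the scan recovers S80, S76 and S72 and stops at position 68, matching the string of sequential sites proposed for murine TRARG1.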
Since functionally important phosphosites are more likely to be conserved across species [25], we aligned TRARG1 sequences from multiple species in which TRARG1 homologues were identified. The phosphosite-rich region between residues 70 and 90 in the murine sequence is highly conserved across the vertebrate species examined (Fig. 4E; pink and red residues), indicating a likely important function for this region. Of the insulin/GSK3-regulated sites identified from phosphoproteomic analyses (Fig. 4E, orange residues), S76 (human S79) was conserved across all species analysed, although S80 appeared to be more specific to placental mammals. Analysis of TRARG1 polymorphisms across placental mammals (Fig. 4F, Supplemental table S2) revealed that murine S76, S79, S80, S84, S85, T88 and S90 (human S79, S82, S83, S87, S88, T91 and S93) were conserved within placental mammals, while murine S72 was only present in rodents (Supplemental table S2). Indeed, the Ala mutant of S87 on human TRARG1, which is equivalent to the S84 priming site on murine TRARG1, abolished its apparent higher molecular weight bands (Fig. 4G), suggesting that this conserved site acts as the equivalent priming site for human TRARG1 phosphorylation by GSK3.
Taken together, our data show that murine TRARG1 is primed at S84 for subsequent phosphorylation by GSK3 at S80, S76 and S72. Further, these sites are highly conserved across species, including human, with the exception of S72, which is only present in some rodents.
TRARG1 phosphorylation does not regulate its subcellular distribution
Since we initially observed that the apparent higher molecular weight bands of TRARG1 were highly enriched within the PM subcellular fraction (Fig. 1A), and TRARG1 undergoes insulin-stimulated translocation to the PM [5], we determined whether TRARG1 phosphorylation status altered its localization. To this end, we performed immunofluorescence microscopy with HA-TRARG1 or its 7A/7E phospho-mutants expressed in 3T3-L1 adipocytes. These mutants targeted the majority of the most highly conserved residues within the 70-90 phosphosite-rich region of murine TRARG1 to mitigate redundancy between phosphosites. Although S72 and S80 were not mutated in these constructs, the 7A and 7E mutants mimicked the lower (dephosphorylated) and higher (phosphorylated) bands of TRARG1, respectively (Fig. 1E), likely because phosphorylation of S84 is required for GSK3-mediated phosphorylation of S72 and S80 (Fig. 4C-D). HA-TRARG1 and GLUT4 colocalized at the PM and in the peri-nuclear region (Fig. 5A), consistent with previous reports [5]. The subcellular distributions of HA-TRARG1-7A and HA-TRARG1-7E were indistinguishable from that of HA-TRARG1, localizing to the PM and peri-nuclear regions (Fig. 5A). These data suggest that the phosphorylation status of TRARG1 does not alter its localization.
As part of our previous characterization of TRARG1 topology [26], we showed that the N-terminal cytosolic domain of TRARG1 (1-100; del_101-173) does not exhibit reduced migration by SDS-PAGE, suggesting that this truncation mutant is not phosphorylated (Fig. 5B-C). In contrast, membrane-associated truncations of TRARG1 are phosphorylated (Fig. 5B-C), suggesting that membrane localization is required for TRARG1 phosphorylation. This appears to be specific to PM-localized TRARG1 in adipocytes (Fig. 1A and 5D) [5]. In support of this, we detected GSK3 at the PM in adipocytes (Fig. 5D), and this pool of GSK3 was targeted by insulin signalling, since we also detected phospho-GSK3 in the PM fraction following insulin stimulation. These data place the kinase (GSK3) and its substrate (TRARG1) in the same subcellular location, and suggest that GSK3 at the PM is deactivated in response to insulin. Together, these data suggest that TRARG1 phosphorylation does not regulate its localization, but rather that TRARG1 phosphorylation is dependent on its membrane localization.
GSK3 signalling to TRARG1 may regulate GLUT4 translocation
Given the role of TRARG1 in insulin-stimulated GLUT4 trafficking, and previous reports that inhibition of GSK3 potentiates insulin-stimulated GLUT4 trafficking [21,22], we tested whether TRARG1 is required for the enhancement of insulin-stimulated GLUT4 trafficking to the PM that results from GSK3 inhibition. First, we confirmed that Trarg1 knockdown (87%; Fig. 5E) significantly decreased PM GLUT4 abundance in response to both submaximal and maximal insulin stimulation compared to cells treated with a non-targeting (NT) siRNA control (Fig. 5F), as previously reported [5,6]. In addition, inhibition of GSK3 with either CHIR99021 or LY2090314 increased cell surface GLUT4 under insulin-stimulated conditions compared to DMSO control cells (Fig. 5F), as suggested by previous studies using alternate GSK3 inhibitors [21,22]. To determine whether Trarg1 depletion blunted the effect of GSK3 inhibitors on GLUT4 trafficking, we calculated the difference in PM GLUT4 between CHIR99021- or LY2090314-treated cells and cells treated with DMSO under the same knockdown and insulin treatment conditions (Fig. 5G). The increase in PM GLUT4 in submaximal (0.5 nM) insulin-stimulated adipocytes treated with GSK3 inhibitors was attenuated by Trarg1 knockdown (Fig. 5G). However, GSK3 inhibition increased cell surface GLUT4 to a similar extent in both control and Trarg1 KD adipocytes in response to maximal insulin stimulation (Fig. 5G). These data suggest that GSK3 activity regulates GLUT4 trafficking, and that TRARG1 is involved in a pathway by which GSK3 inhibition increases GLUT4 trafficking to the PM at submaximal insulin doses.
Discussion
TRARG1 colocalizes with GLUT4 and positively regulates GLUT4 trafficking and insulin sensitivity in adipocytes. However, its mechanism of action remains unclear. Here, we have integrated TRARG1 into the insulin signaling pathway, reporting that TRARG1 is a novel substrate of GSK3 and that its phosphorylation is regulated through the PI3K/AKT/GSK3 axis. Our data indicate that murine TRARG1 is primed by a yet-to-be-identified kinase at S84, promoting GSK3 phosphorylation of TRARG1 at S80, S76 and S72. This phosphorylation does not influence TRARG1 subcellular localization. However, Trarg1 knockdown blunted the potentiation of submaximal insulin-stimulated GLUT4 trafficking induced by GSK3 inhibition. These data place TRARG1 within the insulin signalling pathway and suggest that GSK3 phosphorylation of TRARG1 may negatively regulate GLUT4 traffic.
The regulation of glucose transport by insulin occurs through the integration of signaling and trafficking processes that cooperate in bringing GLUT4 to the cell surface [27]. However, the precise points of intersection between upstream signaling via AKT and the distal GLUT4 trafficking machinery are not fully resolved. Our data support a role for insulin signaling via AKT to GSK3 in this process, since pharmacological inhibition of GSK3 potentiated insulin-stimulated GLUT4 translocation (Fig. 5F). This observation, made using two distinct small molecules, is largely consistent with other studies using GSK3 inhibitors in cultured adipocytes and isolated muscle that reported increased insulin-stimulated glucose uptake and/or GLUT4 translocation [18,20-22]. Consistent with a role for GSK3 in insulin-stimulated glucose transport, expression of a GSK3 S9A mutant impaired glucose uptake in cultured adipocytes [20], and muscle-specific GSK3 knock-out mice were more insulin-sensitive [19]. In contrast, isolated muscle from GSK3 knock-in mice showed normal insulin-stimulated glucose uptake, despite decreased glycogen synthase activity [28]. Discrepancies between data from knock-in mice and pharmacological targeting of GSK3 may result from the fact that the majority of in vitro studies have been performed in adipocytes. Since TRARG1 expression is limited to adipose tissue, it may be that GSK3 operates via distinct substrates to regulate GLUT4 in these tissues and has a greater role in adipocytes. Alternatively, there are substantial differences in the intervention times in these experimental approaches (acute small molecule treatment vs. chronic gene knock-in), and so there may be adaptations to persistently increased GSK3 activity in genetic models.
The potentiation of submaximal insulin-stimulated GLUT4 translocation following GSK3 inhibition was attenuated by Trarg1 knockdown (Fig. 5F & G). Since we have shown TRARG1 to be a direct GSK3 substrate, these data suggest that reduced TRARG1 phosphorylation may provide a link between GSK3 activity and GLUT4 trafficking. That this effect was observed at physiological insulin doses also increases our confidence that GSK3-TRARG1 signalling is relevant to insulin-stimulated glucose transport in adipose tissue in vivo. However, studies using TRARG1 phospho-mutants (such as those described in this study) will be needed to more definitively assign a role for GSK3-mediated TRARG1 phosphorylation in insulin-stimulated GLUT4 translocation. In addition, the fact that Trarg1 knockdown did not prevent the increase in PM GLUT4 by GSK3 inhibition in cells treated with a maximal dose of insulin (Fig. 5F & G) suggests that GSK3 may also regulate GLUT4 trafficking via TRARG1-independent mechanisms. A number of GSK3 substrates have been reported to regulate membrane trafficking [29], such as dynamin I [30] and kinesin light chains [31], so testing whether alternate GSK3 substrates also intersect with GLUT4 traffic is worthy of further study. Overall, our findings provide impetus for further work to understand the contribution of GSK3-mediated phosphorylation of TRARG1 in insulin-regulated GLUT4 traffic. We initially observed that endogenous phosphorylated TRARG1 was enriched at the PM but not in LDM or HDM fractions [5] (Fig. 1A), suggesting that GSK3 may phosphorylate TRARG1 at this location. These data also imply that TRARG1 (and its phosphorylation) may play a functional role at the PM. For example, TRARG1 may affect GLUT4 internalisation or regulate interactions between GLUT4-containing vesicles and docking machinery at the PM, as reported for the TRARG1 paralogue PRRT2 [32]. Our finding that mutation of TRARG1 phosphosites did not alter TRARG1 localization (Fig.
5A) suggests that localization may determine phosphorylation status, and not vice versa. GSK3 has been reported to localize to the PM at the early stage of WNT signaling activation [33], and our subcellular fractionation analysis showed that GSK3 was mainly localized to the LDM and cytosol, with a small proportion of GSK3 localized to the PM (Fig. 5D). Further supporting a role for TRARG1 localization in promoting its phosphorylation, the TRARG1 mutant (del_101-173), which abolishes its PM/membrane localization, was not phosphorylated in HEK-293E cells, as indicated by the lack of higher molecular weight bands (Fig. 5B-C). Of note, a TRARG1 mutant (del_129-173) localized to intracellular membranes was still phosphorylated in HEK-293E cells, suggesting that there may be differential control of TRARG1 phosphorylation between HEK-293E cells and adipocytes. Thus, our data suggest that TRARG1 phosphorylation on residues between 70 and 90 does not affect TRARG1 localization, but rather that TRARG1 phosphorylation is localization-dependent.
TRARG1 amino acid sequence alignment across multiple species revealed a highly conserved phosphosite-rich region between residues 70 and 90 in the murine TRARG1 N-terminal cytosolic region, with conservation extending to Callorhinchus milii (Australian ghostshark), the earliest extant species in which TRARG1 is found. Using changes in phosphosite abundance in response to insulin and GSK3 inhibition together with phosphosite mutagenesis, we identified a GSK3-priming site (murine S84) and GSK3 phosphorylation sites (murine S72, S76 and S80). The S76 and S80 GSK3 target sites were highly conserved, at least within placental mammals (Fig. 4F), suggesting an important role of regulated phosphorylation for TRARG1 function. One key difference within mammals is the presence of a third GSK3 site in rodents (murine S72), a position that is almost exclusively proline in human and other mammals (Supplemental table S2). Nevertheless, the loss of phosphorylation upon mutation of the S87 priming site in human TRARG1 (Fig. 4G) and the loss of endogenous human TRARG1 phosphorylation in SGBS adipocytes treated with a GSK3 inhibitor (Fig.
S1C) suggest that our findings using murine TRARG1 are translatable to human TRARG1. How TRARG1 regulates GLUT4 traffic remains unclear, but identifying GSK3 sites within TRARG1 may inform future studies into its mechanism of action. Our previous mapping of TRARG1 topology revealed a membrane-associated domain between residues 101 and 127. The proximity of this region to the GSK3 target sites raises the possibility that phosphorylation, and the resulting increased negative charge between residues 70 and 90, may alter the interaction between the TRARG1 membrane-associated region and negatively charged membrane phospholipids. TRARG1 belongs to the dispanin protein family, and other members of this family have been implicated in altering membrane fluidity [34] as well as in specific membrane trafficking processes such as synaptic vesicle fusion [32]. Indeed, the analogous membrane-associated region of the dispanin protein IFITM3 has been reported to be required for its antiviral activity [35], suggesting that GSK3-mediated phosphorylation may influence TRARG1 activity via this domain. Alternatively, hotspots of protein phosphorylation such as those we have identified in TRARG1 have been implicated in modulating protein-protein interactions [25], and may serve as an integration point for multiple signaling pathways [36]. For example, changes in TRARG1 phosphorylation status may regulate TRARG1 oligomerisation, which has been reported for other dispanin proteins [37], and/or release or recruit other regulators of GLUT4 traffic. Indeed, multiple apparent higher molecular weight immuno-reactive TRARG1 bands were observed in both HEK cells and adipocytes (Fig. 1A, 1H, 3A, 3C, 3E, S1A, S1E), suggesting distinct phospho-species. These TRARG1 phospho-species depend on GSK3, since inhibition or knockdown of GSK3 almost completely ablated all apparent higher molecular weight forms of TRARG1 (Fig.
3C-D, 3G-H). This suggests that the multiple apparent higher molecular weight bands result from phosphorylation at some of the GSK3 sites but not others (e.g. S80 and S76, but not S72), or that GSK3-mediated phosphorylation of TRARG1 leads to phosphorylation at alternate sites not targeted directly by GSK3 (Fig. 1C and 4B). Understanding the role that distinct TRARG1 phospho-species play in TRARG1 function will be the focus of future studies. In addition, our finding that TRARG1 is a new GSK3 substrate raises several questions, including: 1) the identity of the priming kinase required for subsequent GSK3 activity; 2) whether other kinases can also target this region of TRARG1 in addition to GSK3; and 3) which phosphatase dephosphorylates TRARG1.
In summary, we have integrated the trafficking regulator of GLUT4-1, TRARG1, into the insulin signaling network via GSK3 and provided evidence that TRARG1 may be involved in the mechanisms by which GSK3 inhibition promotes insulin-stimulated GLUT4 trafficking. This provides a strong basis for future work on this pathway.
Cell culture and transfection
3T3-L1 fibroblasts and HEK-293E/HeLa cells were maintained in Dulbecco's modified Eagle's medium (DMEM) supplemented with 10% fetal bovine serum (FBS) and GlutaMAX (Life Technologies, Inc.) (DMEM/FBS medium) at 37 °C with 10% CO2. Myoblasts were maintained in α-MEM supplemented with 10% FBS and GlutaMAX at 37 °C and 10% CO2. For 3T3-L1 stable cell lines, fibroblasts were transduced with pBABE-puro retrovirus (empty vector control), or retrovirus with the constructs of interest in the pBABE-puro vector. Puromycin (2 μg/mL) was used to select for transduced cells. 3T3-L1 fibroblasts were cultured and differentiated into adipocytes as described previously [5], and used for experiments between day 9 and 12 after the initiation of differentiation. HEK-293E/HeLa cells were transfected with the indicated constructs 2 days before experiments using Lipofectamine 2000 (Thermo Scientific), according to the manufacturer's instructions. Myoblasts at 80% confluence were differentiated into myotubes by replacing 10% FBS with 2% FBS. Myotubes were used in experiments between 6-8 days post-differentiation. SGBS cells were cultured and differentiated as previously reported [41,42], and used for experiments 12-14 days after initiation of differentiation. SGBS adipocytes were incubated in growth-factor-free media for 4 h prior to addition of insulin or GSK3 inhibitors.
Cell fractionation
3T3-L1 adipocytes were washed with ice-cold PBS and harvested in ice-cold HES-I buffer (20 mM HEPES, pH 7.4, 1 mM EDTA, 250 mM sucrose containing protease inhibitor mixture (Roche Applied Science)). All subsequent steps were carried out at 4 °C. Cells were fractionated as previously described [26]. Briefly, cells were homogenized by passing through a 22-gauge needle 5 times and a 27-gauge needle 10 times prior to centrifugation at 500 x g for 10 min to pellet cell debris. The supernatant was centrifuged at 13,500 x g for 12 min. The pellet contained the PM and mitochondria/nuclei (M/N). The supernatant consisted of cytosol, low density microsomes (LDM), and high density microsomes (HDM). This supernatant was centrifuged at 21,170 x g for 17 min to pellet the HDM fraction. The supernatant was again centrifuged at 235,200 x g for 75 min to obtain the LDM fraction (pellet). The pellet containing PM and M/N is referred to as the PM fraction in this study, as TRARG1 is not enriched in M/N fractions (data not shown).
Lambda protein phosphatase assay in 3T3-L1 adipocytes/HEK-293E and fat tissues
Cells were washed with PBS twice and lysed in 1% (v/v) Triton X-100 in PBS, then solubilized by passing through a 22-gauge needle 3 times and a 27-gauge needle 3 times prior to centrifugation at 12,000 x g for 15 min. In the case of fat tissues, subcutaneous/epididymal white adipose tissue or brown adipose tissue was excised from mice and lysed in 1% (v/v) Triton X-100 in PBS. Tissues were lysed and homogenized by sonication (Gallay Scientific) prior to centrifugation at 13,000 x g for 15 min. The supernatant was transferred to a clean tube and protein concentration determined by bicinchoninic acid (BCA) assay (Thermo Scientific). 100 μg of tissue or cell lysate was treated with or without 2 μL Lambda Protein Phosphatase (LPP) (New England Biolabs) at 30 °C for 15 min (30 min for HEK-293E lysate) in a reaction volume of 50 μL. Starting material without the 15 min (30 min for HEK-293E lysate) incubation at 30 °C was included as a control. The reaction was terminated by the addition of SDS (1% (w/v) final concentration), and the reaction mixture was incubated at 95 °C for a further 10 min. Loading sample buffer (LSB) with TCEP was added to the samples, which were then incubated at 65 °C for 5 min, separated by SDS-PAGE, and analyzed by immunoblotting.
Phosphatase and kinase inhibitor assays in 3T3-L1 adipocytes
3T3-L1 fibroblasts were seeded in 12-well plates and differentiated into adipocytes. Cells were used for experiments on day 10 post-differentiation. For the phosphatase inhibitor assay, adipocytes were treated with 100 nM calyculin A, 1 μM okadaic acid or DMSO in DMEM/FBS medium for 60 min. For the kinase inhibitor assay, cells were serum-starved in basal DMEM media (DMEM, GlutaMAX, 0.2% (w/v) bovine serum albumin (BSA, Bovostar, Bovogen)) for 2 h prior to treatment with kinase inhibitors (100 nM wortmannin, 10 μM MK-2206, 100 nM rapamycin or 1 μM GDC-0994) for 20 min, followed by 100 nM insulin treatment for 20 min. Cells were transferred onto ice and washed with ice-cold PBS twice, followed by lysis with 1% (w/v) SDS in PBS containing protease inhibitor mixture (Roche Applied Science) and phosphatase inhibitor mixture (1 mM sodium pyrophosphate, 2 mM sodium orthovanadate, 10 mM sodium fluoride). Cell lysates were sonicated at 90% intensity for 12 s and centrifuged at 13,000 x g. The protein concentration of the supernatant was determined by BCA assay. LSB was added to the samples, which were then incubated at 65 °C for 5 min, separated by SDS-PAGE, and analyzed by immunoblotting.
Studies in adipose tissue explants
Eight-week-old male C57BL/6J mice were obtained from Australian BioResources (Moss Vale, NSW, Australia) and were kept in a temperature-controlled environment (22 ± 1 °C) on a 12 h light/dark cycle with free access to food and water. All experiments were carried out at the University of Sydney with the approval of the University of Sydney Animal Ethics Committee (2014/694), following guidelines issued by the National Health and Medical Research Council of Australia. Mice were euthanized by cervical dislocation and epididymal or subcutaneous fat depots were excised, transferred immediately to warm DMEM/2% BSA/20 mM HEPES, pH 7.4, and minced into fine pieces. Explants were washed 3 times and incubated in DMEM/2% BSA/20 mM HEPES, pH 7.4 for 2 h. Explants were then aliquoted and treated with 10 nM insulin, 500 nM LY2090314 or DMSO for 30 min at 37 °C. Treatment was terminated with three rapid washes in ice-cold PBS, after which the cells were solubilized in RIPA buffer (50 mM Tris-HCl, pH 7.5, 150 mM NaCl, 1% (v/v) Triton X-100, 0.5% (w/v) sodium deoxycholate, 0.1% (w/v) SDS, 1 mM EDTA, protease inhibitor mixture). Samples were then subjected to sonication prior to centrifugation at 20,000 x g for 15 min at 4 °C. The supernatant was transferred to a clean tube and protein concentrations were determined using BCA assay. LSB with TCEP was added to the samples, which were then incubated at 65 °C for 5 min, separated by SDS-PAGE, and analyzed by immunoblotting.
esiRNA-mediated gene knockdown in 3T3-L1 adipocytes
EsiRNA (Merck, EMU170441, EMU059761) was added to 100 μL Opti-MEM medium to a final concentration of 900 nM with 7.5 μL transfection reagent TransIT-X2 (Mirus Bio), mixed and incubated at room temperature (RT) for 30 min. Adipocytes at 6 days post-differentiation were washed once with PBS, trypsinized with 5 x trypsin-EDTA (Life Technologies, Inc.)
at 37 °C, resuspended in DMEM/FBS medium and then centrifuged at 200 x g for 5 min. The supernatant was removed, and pelleted cells were resuspended in DMEM/FBS medium of the same volume as that of the media in which the cells were previously cultured (e.g. 1 mL of media for the cells from one well of a 12-well plate). 900 μL of cell suspension was then reseeded onto one well of a Matrigel-coated 12-well plate. 100 μL of esiRNA or luciferase control mixture was added to each well of the 12-well plate (esiRNA final concentration 90 nM). Cells were used in experiments 72 h following esiRNA knockdown. Method relevant to Fig. 3G-I.
Immunoprecipitation of TRARG1 and in vitro kinase assay
3T3-L1 adipocytes at day 10 post-differentiation were serum-starved in basal DMEM media containing 100 nM LY2090314 for 2 h. Cells were then transferred to ice, washed thrice with ice-cold PBS and harvested in lysis buffer (1% NP-40, 5% glycerol, 50 mM Tris-HCl, pH 7.4, 150 mM NaCl) containing protease inhibitor mixture and phosphatase inhibitor mixture. Cells were lysed by passing through a 22-gauge needle six times, followed by six times through a 27-gauge needle. Lysates were solubilized on ice for 20 min and then centrifuged at 20,000 x g for 20 min at 4 °C to remove insoluble material. The protein concentration of the supernatant was quantified by BCA assay following the manufacturer's instructions. Lysates were incubated with anti-TRARG1 mouse monoclonal antibody (sc-377025) or IgG control, and washed magnetic Dynabeads (Life Technologies, 10004D) were then added into each immunoprecipitation (IP) tube, followed by incubation with rotation at 4 °C for 1 h. Beads were then separated from the flow-through by magnetic capture, resuspended and washed twice with lysis buffer, followed by two washes with ice-cold PBS. The residual liquid was removed after the final wash. The TRARG1-IP tubes and IgG control tube were incubated with either kinase assay buffer (20 mM Tris-HCl, pH 7.4, 10 mM MgCl2, 1 mM DTT, 1 mM sodium pyrophosphate, 10 mM sodium fluoride, 10 mM glycerol-2-phosphate, 2 mM ATP) or kinase assay buffer containing recombinant GSK3 for 1 h. Following the incubation, reactions were terminated and proteins were eluted by the addition of 2X Laemmli sample buffer and incubation at 65 °C for 5 min. Samples were then centrifuged at 20,000 x g for 2 min at RT and stored at -20 °C.
Polymorphism analysis
TRARG1 sequences were extracted from Ensembl (release 99) and aligned by Multiple sequence Alignment using Fast Fourier Transform (MAFFT) version 7 [43]. The number of polymorphisms at each residue was counted across 64 unique placental mammalian species (represented by 69 genomes, including different subspecies/sexes of the same species). The results of this analysis for residues 67-91 of murine TRARG1 are presented in Fig. 4F, with the full analysis in Supplemental table S2.
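The per-residue polymorphism count described above can be sketched as follows. This is a hypothetical illustration, not the authors' pipeline: given a rectangular alignment (e.g. MAFFT output) as equal-length strings, it counts the number of distinct amino acids at each column, ignoring gap characters. The sequences below are invented.

```python
from collections import Counter

def polymorphisms_per_column(aligned_seqs):
    """Number of distinct residues (ignoring '-' gaps) at each alignment column."""
    length = len(aligned_seqs[0])
    assert all(len(s) == length for s in aligned_seqs), "alignment must be rectangular"
    counts = []
    for i in range(length):
        column = Counter(s[i] for s in aligned_seqs if s[i] != "-")
        counts.append(len(column))
    return counts

alignment = [
    "SPSSA",
    "SPSSA",
    "SPASA",   # polymorphic at the third column
    "SP-SA",   # gap is ignored
]
print(polymorphisms_per_column(alignment))  # [1, 1, 2, 1, 1]
```

A column count of 1 corresponds to a fully conserved residue; higher counts flag polymorphic positions such as murine S72, which is proline in most non-rodent mammals.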
Immunofluorescence confocal microscopy
3T3-L1 adipocytes stably expressing HA-TRARG1, HA-TRARG1-7A or HA-TRARG1-7E were replated onto Matrigel-coated glass coverslips (Matrigel from Becton Dickinson) on day 6 after initiation of differentiation and processed for immunofluorescence microscopy on day 10. Cells were washed with PBS and fixed with 4% (v/v) paraformaldehyde (PFA) for 20 min at RT. Cells were blocked and permeabilized with 2% (w/v) BSA and 0.1% (w/v) saponin in PBS, followed by incubation with a mixture of rabbit anti-HA antibody (CST, C29F4) and anti-GLUT4 1F8 antibody (generated in-house) at RT for 45 min. Cells were washed five times with 2% (w/v) BSA and 0.1% (w/v) saponin in PBS. Anti-mouse Alexa-555-conjugated secondary antibody (Life Technologies) and anti-rabbit Alexa-488-conjugated secondary antibody (Life Technologies) were used to detect the primary antibodies. DAPI was used to visualize nuclei. Samples were mounted in Immuno-Fluore Mounting Medium (MP Biomedicals). For HEK-293E cells expressing HA-TRARG1 truncation mutants, cells were re-plated onto Matrigel-coated glass coverslips in a 12-well plate 24 h after transfection and allowed to adhere overnight. Samples were prepared as described above, but without antibody incubations. Optical sections were obtained using a Nikon C2 confocal microscope with a Plan Apo VC 60X WI DIC N2 (NA = 1.2, WD = 270 μm) objective. Images were acquired using NIS Elements software. All images were processed using Fiji [44].
siRNA-mediated gene knockdown for GLUT4 translocation assays
47 μL of OptiMEM, 2 μL of TransIT-X2 and 1 μL of siRNA (25 μM, ON-TARGETplus siRNA pool (Dharmacon, L-057822-01 for Trarg1, D-001810-10 for non-targeting pool), final concentration 500 nM) were mixed and incubated for 30 min at RT. On day 6 post-differentiation, 3T3-L1 adipocytes grown in a 6-well plate (stably expressing HA-GLUT4-mRuby3 where appropriate) were washed once with PBS, trypsinized with 5X trypsin-EDTA at 37 °C, resuspended in DMEM/FBS medium and centrifuged at 200 x g for 5 min. The supernatant was removed, and pelleted cells were resuspended in 13.5 mL DMEM/FBS. 450 μL of cell suspension was added to the OptiMEM/TransIT-X2/siRNA mixture and mixed well (final siRNA concentration 50 nM). All 500 μL of the cell suspension plus OptiMEM/TransIT-X2/siRNA was added to one well of a Matrigel-coated 24-well plate, or 84 μL of the cell suspension plus OptiMEM/TransIT-X2/siRNA was aliquoted into each well of a Matrigel-coated 96-well plate. Cells were refed with DMEM/FBS medium 24 h post-transfection and used 72 h post-transfection. Method relevant to Fig. 5F-G.
Endogenous GLUT4 translocation assay
Cells were washed once with PBS and once with basal DMEM media prior to incubation in basal DMEM media in the presence or absence of DMSO (control) or GSK3 inhibitor. Cells were stimulated with 0.5 nM or 100 nM insulin for 20 min where indicated. After stimulation, cells were washed by gently dunking the 96-well plates 12 times in a beaker containing ice-cold PBS (all subsequent PBS washes were performed using this method). Plates were placed on ice and residual PBS was removed with a multichannel pipette. Cells were fixed with 4% PFA for 5 min on ice and 20 min at RT, then PFA was replaced with 50 mM glycine in PBS followed by incubation for 15 min. After dunking the plates 6 times in RT PBS, residual PBS was removed and cells were blocked with 5% Normal Swine Serum (NSS, Dako, X0901) in PBS for 20 min. After removing all the blocking media, cells were incubated with human anti-GLUT4 antibody (LM048; [45]) (kindly provided by Joe Rucker, Integral Molecular, PA, USA) in 2% NSS in PBS for 1 h to detect GLUT4 present at the PM. After 12 washes in RT PBS, residual PBS was removed and cells were incubated with anti-human Alexa-488 (Life Technologies) and Hoechst 33342 (Life Technologies) in 2% NSS in PBS for 1 h. Cells were washed 12 times in RT PBS and stored in PBS containing 2.5% DABCO, 10% glycerol, pH 8.5, and imaged on the Perkin Elmer Opera Phenix High Content Screening System. Imaging data were analyzed using Harmony software supplied with the imaging system. The GLUT4 signal was normalized to the number of nuclei per imaging region (as measured by Hoechst 33342 signal).
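The final normalization step above amounts to dividing each well's total surface-GLUT4 signal by its nuclei count. A minimal sketch, with invented well names and values (the actual analysis was performed in Harmony software):

```python
def pm_glut4_per_cell(wells):
    """Normalize each well's total GLUT4 signal to its nuclei count.

    `wells` maps a condition name to (total_glut4_signal, nuclei_count).
    """
    return {name: signal / nuclei for name, (signal, nuclei) in wells.items()}

# Hypothetical per-well readouts (arbitrary fluorescence units, nuclei counts):
wells = {
    "basal": (1.2e6, 400),
    "insulin_0.5nM": (2.6e6, 410),
    "insulin_100nM": (6.0e6, 390),
}
normalized = pm_glut4_per_cell(wells)
print(normalized["basal"])  # 3000.0
```

Normalizing per nucleus controls for differences in cell number between wells, so conditions with unequal seeding or knockdown-related cell loss remain comparable.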
Sample preparation and real-time quantitative-PCR assays
For Fig. 5E, total RNA was extracted from cells using QIAshredder and RNeasy kits (Qiagen). To remove any DNA, the extracts were incubated with DNAse buffer (Promega) and residual DNAse was subsequently inactivated with DNAse stop solution (Promega). cDNA synthesis was performed using the LunaScript RT SuperMix Kit (NEB). Polymerase chain reactions were carried out using TaqMan 2X Universal PCR Master Mix or SYBR Green PCR Master Mix (Thermo) on a QuantStudio 7 Flex Real-Time PCR System (Thermo). Acidic ribosomal phosphoprotein P0 (36B4), β-actin (b-act) and 18S ribosomal RNA (18s) were used as internal controls. The following primer sets were used: 36B4_F; AGATGCAGCAGATCCGCAT and 36B4_R; GTTCTTGCCCATCAGCACC, b-act_F; GCTCTGGCTCCTAGCACCAT and b-act_R; GCCACCGATCCACACAGAGT, and 18s_F; CGGCTACCACATCCAAGGAA and 18s_R; GCTGGAATTACCGCGGCT, with the corresponding 18s TaqMan probe GAGGGCAAGTCTGGTGCCAG. The TaqMan gene expression assay (premixed primer set and probes) was used for murine Trarg1 (Mm03992124_m1).
Sample preparation for MS-based analysis of TRARG1
HEK-293E cells transiently expressing HA-TRARG1 or TRARG1 were washed three times with ice-cold PBS, lysed in RIPA buffer and homogenized prior to centrifugation at 20,000 x g, 4 °C for 20 min to remove cell debris. Cleared lysates were incubated with anti-HA MicroBeads (Miltenyi Biotec) with rotation. MicroBeads were separated from the flow-through by running through μ Columns (Miltenyi Biotec, 130-042-701), followed by three washes with RIPA buffer and two washes with ice-cold PBS. Proteins were eluted in 2X Laemmli sample buffer and subjected to SDS-PAGE. Gels were stained with SyproRuby protein gel stain (Life Technologies) according to the manufacturer's instructions and imaged on a Typhoon FLA 9500 biomolecular imager (GE Healthcare). TRARG1 bands were excised for in-gel digestion.
Gel fractions were washed twice in 50% acetonitrile (ACN, Thermo Scientific) in 100 mM ammonium bicarbonate (NH4HCO3, Sigma) at RT for 5 min at 2,000 rpm in a ThermoMixer C (Eppendorf). Liquid was removed, then 10 mM TCEP and 40 mM 2-chloroacetamide (Sigma) in 100 mM NH4HCO3 were added to the gel fractions and incubated at RT for 30 min at 2,000 rpm. Liquid was removed and gel pieces were dehydrated in 100% ACN at RT for 5 min at 2,000 rpm, followed by rehydration in 100 mM NH4HCO3 containing 14 ng/μL trypsin (Promega) on ice for 1 h. Excess liquid was removed, 100 mM NH4HCO3 was added to the gel slices, and samples were incubated at 37 °C overnight. Peptides were solubilized by spiking in trifluoroacetic acid (TFA, Thermo Scientific) to 1% and incubating at 37 °C for 30 min. To extract peptides, gel pieces were dehydrated using 100% ACN and incubated at 37 °C for 15 min. Peptides were transferred to a new tube, dried in a vacuum concentrator (Eppendorf) and then resuspended and acidified in 1% TFA.
Mass spectrometry
Downloaded from http://portlandpress.com/biochemj/article-pdf/doi/10.1042/BCJ20220153/933026/bcj-2022-0153.pdf by University College London (UCL) user on 31 May 2022. Biochemical Journal. This is an Accepted Manuscript; the most up-to-date version is available at https://doi.org/10.1042/BCJ20220153

For analysis of immunoprecipitated TRARG1, an Easy nLC-1000 UHPLC was connected to a Q Exactive mass spectrometer. Peptides were separated on an in-house packed column (ReproSil Pur C18) and eluted with a gradient of 5-[...]% ACN. MS1 scans were acquired at a resolution of 70,000 with a 3e6 AGC target and a maximum injection time of [...], followed by data-dependent MS/MS scans with HCD at a resolution of 17,500 and an AGC target of 5e5. For phosphoproteomics analysis in 3T3-L1 adipocytes treated with GSK3 inhibitor, label-free quantification was applied and samples were measured on a Q Exactive HF-X mass spectrometer (Thermo Fisher Scientific) [48] as previously described [49].
Processing of spectral data
Raw mass spectrometry data were processed using the Andromeda algorithm integrated into MaxQuant (v1.6.6.0 or v1.6.1.0) [50], searching against the mouse UniProt database (June 2019 release) concatenated with known contaminants. Default settings were used for peptide modifications, with the addition of Phospho(STY) for the phosphoproteomics study, or the addition of Phospho(STY), GlyGly(K), Oxidation(M), Acetyl(K), Deamidation(NQ) and Methyl(KR) in the variable modifications for immunoprecipitated TRARG1. Match between runs was turned on with a match time window of 0.7 min and an alignment time window of 20 min for transfer of identifications between adjacent fractions, only for samples analyzed using the same nanospray conditions. For immunoprecipitated TRARG1, only the murine TRARG1 sequence was used for searching. Protein, peptide and site FDRs were each filtered to 1%.
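Downstream of this search, the usual first filtering step is to discard entries that MaxQuant flags as reverse-database (decoy) hits or potential contaminants. A minimal sketch using a toy table of dicts rather than real MaxQuant output (MaxQuant marks such rows with "+" in the `Reverse` and `Potential contaminant` columns):

```python
def filter_maxquant(rows):
    """Drop reverse-database hits and potential contaminants.

    rows: list of dicts mimicking a MaxQuant output table, where decoy
    and contaminant entries are flagged with "+" (as MaxQuant does).
    """
    return [r for r in rows
            if r.get("Reverse") != "+" and r.get("Potential contaminant") != "+"]

# Toy example with one genuine entry, one decoy and one contaminant
table = [
    {"Protein IDs": "Q8R1E9", "Reverse": "", "Potential contaminant": ""},
    {"Protein IDs": "REV__P123", "Reverse": "+", "Potential contaminant": ""},
    {"Protein IDs": "CON__K1C10", "Reverse": "", "Potential contaminant": "+"},
]
print([r["Protein IDs"] for r in filter_maxquant(table)])  # ['Q8R1E9']
```

In practice this is done on the tab-separated MaxQuant output (e.g. with pandas); the toy accessions here are illustrative only.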
GSK3 inhibitor phosphoproteomic study
This study was performed with four biological replicates. Phosphopeptide groups matching reverse sequences or potential contaminants, or quantified in two or fewer of the eight samples, were filtered out. LFQ intensities were log2-transformed and median normalized. Each group was imputed as previously described in [51]. For remaining missing values, a second step of imputation was performed if at least three replicates had quantified values in one condition and the values were completely missing or had only one quantified replicate in the other condition, using a method previously described in [52]. For the phosphopeptides with no missing values after imputation, two-sample t-tests were performed. Treatment was compared to control and p-values were corrected for multiple hypothesis testing using the Benjamini and Hochberg method [53].
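The Benjamini–Hochberg correction mentioned above can be sketched as a generic implementation of the step-up procedure (not the authors' code):

```python
def benjamini_hochberg(pvals):
    """Benjamini-Hochberg adjusted p-values (step-up procedure).

    Each raw p-value p_(k) (k = rank when sorted ascending) is scaled
    by m/k, and monotonicity is enforced from the largest rank down.
    """
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_min = 1.0
    # Walk from the largest p-value down, enforcing monotonicity.
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        running_min = min(running_min, pvals[i] * m / rank)
        adjusted[i] = running_min
    return adjusted

print(benjamini_hochberg([0.01, 0.04, 0.03, 0.005]))
```

For the four example p-values this yields adjusted values of approximately [0.02, 0.04, 0.04, 0.02], matching what `statsmodels.stats.multitest.multipletests(..., method="fdr_bh")` would report.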
Western blotting intensities
In phosphatase inhibitor assays (Fig. 1K), differences between control and treated intensities were tested with unpaired two-sided Student's t-tests with Welch's correction. In kinase inhibitor assays (Fig. 2B) and in experiments using isoproterenol (Fig. 3B), differences were tested by one-way ANOVA with Dunnett's multiple comparisons test. In GSK3 inhibitor experiments (Fig. 3D), differences from basal cells were tested using one-sided t-tests. In tissue explant experiments (Fig. 3F & Supplemental Figure S1B), differences were tested with paired two-sided t-tests. In GSK3 knockdown experiments (Fig. 3H-I), differences were tested by RM one-way ANOVA with Holm-Sidak's multiple comparisons test. Analyses were performed using GraphPad Prism version 8.0 for macOS (GraphPad Software, CA USA). Error bars represent the standard error of the mean (SEM) unless otherwise stated. Significance is represented with a p-value < 0.05 by *, < 0.01 by **, < 0.001 by *** and < 0.0001 by ****; non-significant comparisons are indicated by n.s.

Endogenous GLUT4 translocation assays

For the endogenous GLUT4 translocation assay, experimental values were normalized to the average value across all conditions in each experiment. Differences in surface GLUT4 relative to nuclear DNA were tested using two-way ANOVA with correction for multiple comparisons (Fig. 5F). Differences in GSK3 inhibitor-mediated increases in surface GLUT4 in Trarg1 knockdown cells compared to NT control cells with the same insulin treatment were tested using paired two-sided t-tests (Fig. 5G). Error bars are SEM; significance is represented as stated in figure legends.
Data Access Statement: Raw and MaxQuant processed data of MS-based proteomics (except for the phosphoproteomic analysis of GSK3 inhibitor treated cells, which will be uploaded to accompany a future publication) have been deposited in the PRIDE ProteomeXchange Consortium (http://proteomecentral.proteomexchange.org/cgi/GetDataset) [54] and can be accessed with the identifier PXD022765.

Figure 3. (A) [...] where the image was cut to remove lanes. (B) Quantification of (A). The ratio of apparent higher molecular weight TRARG1 signal to total TRARG1 signal was quantified (n=2, mean±S.D., n.s = non-significant comparisons with basal condition). (C) 3T3-L1 adipocytes were serum-starved (except for the FBS lane), followed by treatment with insulin (100 nM), CHIR99021 (CHIR) (left panel) or LY2090314 (LY) (right panel) at indicated doses for 20 min. Samples were analyzed by immunoblotting (Glycogen synthase; GS). (D) Quantification of (C). The ratio of apparent higher molecular weight TRARG1 signal to total TRARG1 signal was quantified (n=3, mean±SEM, *p <0.05; **p <0.01; comparisons with basal condition). (E) Epididymal white adipose tissue (EWAT) was excised from mice and minced. Explants were serum-starved in DMEM/2% BSA/20 mM HEPES, pH 7.4 for 2 h followed by treatment with insulin (10 nM) or LY2090314 (GSK3i) (500 nM) for 30 min at 37 °C. Treatment was terminated and tissues were solubilized in RIPA buffer and subjected to analysis by immunoblotting. (F) Quantification of (E). The ratio of apparent higher molecular weight (HMW) TRARG1 signal to total TRARG1 signal was quantified (n=4, mean±SEM, *p <0.05; **p <0.01; ***p <0.001, comparisons with basal condition). (G) 3T3-L1 adipocytes on day 6 post-differentiation were [...]; esiRNA targeting luciferase (Luc) was used as control. 72 h post-transfection, cells were serum-starved for 2 h and harvested for analysis by immunoblotting. (H-I) Quantification of (G). TRARG1 phosphorylation (HMW/total) (H) and glycogen synthase (GS) phosphorylation (I) were quantified (n=3, mean±SEM, *p <0.05; **p <0.01, comparisons with cells transfected with esiRNA targeting luciferase). (J) 3T3-L1 adipocytes were serum-starved in basal DMEM media containing 100 nM LY2090314 for 2 h. Cells were lysed and homogenized and cell lysates were immunoprecipitated with anti-TRARG1 antibody or IgG as a control. Immunoprecipitated TRARG1 was treated with [...]. All samples were analyzed by immunoblotting. For panels A, C, E, G, and J, the migration positions of molecular mass markers (kilodaltons) are shown to the right. Higher molecular weight (HMW) TRARG1 bands indicated by arrow.
Figure 5. (A) 3T3-L1 adipocytes stably expressing HA-TRARG1, HA-TRARG1-7A or HA-TRARG1-7E were serum-starved for 2 h. Cells were fixed and stained for nuclei (DAPI, blue), GLUT4 (red) and HA (green). Immunofluorescence imaging was performed by confocal microscopy. Instances of colocalization between GLUT4 and HA-TRARG1, HA-TRARG1-7A or HA-TRARG1-7E [...]. (B) Immunoblotting analysis of wild type TRARG1 (T1) or TRARG1 truncation mutants expressed in HEK-293E cells. del_101-127 and del_129-173 mutants were phosphorylated as indicated by the apparent higher molecular weight bands; the del_101-173 mutant was not phosphorylated as indicated by the lack of an apparent higher molecular weight band. (C) N-terminally HA-tagged and C-terminally eGFP-fused TRARG1 constructs (HA-TRARG1-eGFP) and truncation mutants were expressed in HEK-293E cells. Subcellular localization of these constructs was determined by confocal microscopy. Full-length TRARG1 and the del_101-127 mutant were localized to the PM; the del_129-173 mutant was localized to intracellular membranes; the del_101-173 mutant was cytosolic. (D) Serum-starved or insulin-stimulated 3T3-L1 adipocytes were subjected to subcellular fractionation. Subcellular localization of GSK3 was determined by immunoblotting analysis. Tubulin and caveolin1 were immunoblotted to control for loading of cytosolic and PM proteins, respectively. (E) Knockdown efficiency of Trarg1 as assessed by qPCR (****p <0.0001, comparison with non-targeting control siRNA). (F) 3T3-L1 adipocytes were serum-starved in the absence (DMSO) or presence of GSK3 inhibitors (10 μM CHIR99021; CHIR or 500 nM LY2090314; LY) before treatment with or without 0.5 nM or 100 nM insulin for 20 min. Surface GLUT4 was quantified by immuno-labelling and expressed relative to cell number as measured by nuclei number (n=5, mean±SEM, #p <0.05, ##p <0.01, ###p <0.001, ####p <0.0001, compared to the DMSO condition with the same insulin treatment and gene knockdown; ‡p <0.05, ‡‡‡‡p <0.0001, compared to the non-targeting (NT) knockdown with the same insulin and drug treatment). (G) Differences in PM GLUT4 between CHIR- or LY-treated and DMSO control conditions with the same insulin treatment and gene knockdown as shown in (F) were calculated (mean±SEM, *p <0.05, compared to NT knockdown with the same insulin and drug treatment). For panels B and D, the migration positions of molecular mass markers (kilodaltons) are shown to the right. Higher molecular weight (HMW) TRARG1 bands indicated by arrow.
[...] for future studies into the exact mechanisms by which GSK3-to-TRARG1 signaling regulates GLUT4 trafficking and insulin sensitivity in adipocytes.
Figure 1.
Figure 1. TRARG1 phosphorylation causes apparent higher molecular weight bands by immunoblotting. (A) Subcellular fractionation of 3T3-L1 adipocytes. A longer exposure time for the TRARG1 blot is presented to better visualize higher molecular weight bands (TRARG1 (long)). Apparent higher molecular weight TRARG1 bands are enriched in PM fractions (WCL; whole cell lysate, PM; plasma membrane, LDM; low density microsomes, HDM; high density microsomes). (B) HA-tagged murine TRARG1 (HA-TRARG1) expressed in HEK-293E cells shows multiple bands by immunoblotting (HA-T1; N-terminally tagged HA-TRARG1, EV; empty vector control). (C) Schematic of the domains in TRARG1 and post-translational modifications detected by mass spectrometry of HA-TRARG1 (murine) overexpressed in HEK-293E cells. (D) Table of TRARG1 Ser/Thr/Tyr, Lys and Cys mutants used to study TRARG1 post-translational modifications (PTMs) in Fig. 1. Murine TRARG1 residue positions are indicated. (E) HA-TRARG1 phosphomutants with Ser/Thr mutated to Ala or Glu (7A/7E) expressed in HEK-293E cells exhibited a molecular weight similar to the apparent lower or higher molecular weight of HA-TRARG1, respectively. Lys-Arg (K-R) and Cys-Ser mutation had no effect on TRARG1 band patterning. Dashed line indicates where lanes have been excluded. (F) Immunoblotting analysis of TRARG1 phosphomutants (12A, 11E) expressed in 3T3-L1 adipocytes. (G) In vitro Lambda protein phosphatase (LPP) treatment of 3T3-L1 adipocyte lysates removed apparent higher molecular weight TRARG1 bands. (H) Apparent higher molecular weight TRARG1 bands were removed by LPP treatment of TRARG1 expressed in HEK-293E cells. (I) Apparent higher molecular weight TRARG1 bands were present in white adipose tissue (epididymal white adipose tissue; EWAT, subcutaneous white adipose tissue; SWAT) lysates, but not brown adipose tissue (BAT) lysates, as analyzed by immunoblotting. The apparent higher molecular weight TRARG1 bands were removed by LPP treatment. (J) Apparent higher molecular weight TRARG1 bands were increased in intensity in 3T3-L1 adipocytes treated with the phosphatase inhibitors calyculin A (CalyA) or okadaic acid (Oka). (K) Quantification of (J). The ratio of apparent higher molecular weight (HMW) TRARG1 (as indicated by the bands in the pink box in (J)) signal to total TRARG1 (as indicated by the bands in the blue box in (J)) signal was quantified as a metric of TRARG1 phosphorylation (n=3, mean±SEM, *p <0.05; ***p <0.001, comparisons with cells under DMSO condition). For panels A, B, D, E, F, G, H and I, the migration positions of molecular mass markers (kilodaltons) are shown to the right. Higher molecular weight (HMW) TRARG1 bands indicated by arrow.
Figure 2.
Figure 2. TRARG1 is dephosphorylated with insulin in a PI3K/AKT-dependent manner. (A) 3T3-L1 adipocytes were serum-starved prior to insulin (INS) stimulation (100 nM, 20 min). Where indicated, a PI3K inhibitor (wortmannin (WM), 100 nM), Akt inhibitor (MK-2206 (MK)), mTOR inhibitor (rapamycin (rapa), 100 nM), or MAPK inhibitor (GDC-0994 (GDC), 1 μM) was administered 20 min prior to insulin treatment. Samples were analyzed by immunoblotting. A longer exposure time for the TRARG1 blot is presented to better visualize higher molecular weight bands (TRARG1 (long)). (B) Quantification of (A). The ratio of apparent higher molecular weight (HMW) TRARG1 (as indicated by the arrow and pink box in (A)) signal to total TRARG1 (as indicated by the blue box in (A)) signal was quantified as a metric of TRARG1 phosphorylation (n=3, mean±SEM, *p <0.05; **p <0.01; ns, non-significant, comparisons with cells without insulin or drug treatment). (C) Bar plot indicating the log2-transformed median fold change (FC) in phosphorylation of insulin versus basal or insulin+MK versus insulin alone at Class I TRARG1 phosphosites reported by Humphrey et al. [24]. SILAC-labelled adipocytes were serum-starved prior to insulin stimulation (100 nM, 20 min). The Akt inhibitor, MK-2206, was administered 30 min prior to insulin treatment where indicated. Only sites downregulated (log2 FC ≤ -0.58) following insulin stimulation are shown. Dashed lines indicate where log2 FC = 0.58 or -0.58. For panel A, the migration positions of molecular mass markers (kilodaltons) are shown to the right. Higher molecular weight (HMW) TRARG1 bands indicated by arrow.
Figure 4
Figure 4. Murine TRARG1 is primed at S84 for subsequent phosphorylation by GSK3 within a highly conserved region.(A) GSK3 substrate consensus motif.Pre-phosphorylated (primed) site is labeled in blue and GSK3 target site is labeled in orange.(B) 3T3-L1 adipocytes were serum-starved followed by treatment with GSK3 inhibitor LY2090314 (100 nM, 20 min) or DMSO as a control.Cell lysates were subjected to phosphoproteomic analysis.Bar plot of log2-transformed mean FC of all detected TRARG1 sites and S641, S645 and S649 on glycogen synthase from this analysis is shown.Numbers following underscore indicate the number of phosphorylation sites detected on that peptide (significance is indicated by *adj.p <0.05; **adj.p <0.01; ***adj.p <0.001, t test).(C) Murine HA-TRARG1 phosphomutants generated by mutating T88, T89 and S90 to Ala (88-90A), S85 to Ala (85A), S84 to Ala (84A) or S79 and S80 to Ala (79-80A), or wild type HA-TRARG1 were transfected into HEK-293E cells.Cells were lysed 24 h post-transfection and samples were analyzed by immunoblotting.(D) Mouse HA-TRARG1 phosphomutants generated by mutating S72 to Ala (72A), S72 and S76 to Ala (72,76A), S72, S76 and S80 to Ala (72,76,80A) or S84 to Ala (84A), or wild type HA-TRARG1 were transfected in HEK-293E cells.Cells were lysed 24 h post-transfection and samples were analyzed by immunoblotting.A longer exposure time for the TRARG1 blot is presented to better visualize higher molecular weight bands (TRARG1 (long)).(E) The phosphosite-rich region between residue 69 and 91 on murine TRARG1 is highly conserved across vertebrate species.Insulin/GSK3 regulated sites are labeled in blue.Other conserved Ser/Thr residues within this region are labeled in orange.(F) Polymorphism of TRARG1 residues in 64 placental mammals.Ser/Thr residues conserved across murine and human TRARG1 are colored in orange; Bars colored in dark grey indicate Ser/Thr residues present in murine but not human TRARG1 sequence.Equivalent residue numbers for 
mouse (Mus musculus, Mm) and human (Homo sapiens, Hs) TRARG1 are shown below the bars. A full list of species included in the analysis is provided in Supplemental Table S2. (G) Wild type murine TRARG1, murine TRARG1 with S84 mutated to Ala (84A), wild type human TRARG1, and human TRARG1 with S87 (equivalent to S84 in murine TRARG1) mutated to Ala (87A) were expressed in HEK-293E cells. Cells were lysed 24 h post-transfection and samples were analyzed by immunoblotting. For panels C, D and G, the migration positions of molecular mass markers (kilodaltons) are shown to the right. Higher molecular weight (HMW) TRARG1 bands indicated by arrow.
Customer Satisfaction through Management Accounting Practices in the Hotel Industry
The role of management accounting practices in enhancing customer satisfaction has often been overlooked. Therefore, this study aims to examine the antecedent role of management accounting practices and their influence on customer satisfaction. A non-probability purposive sampling technique was used to identify respondents among hotel accounting staff in Malaysia (N = 200) to examine customer satisfaction through management accounting practices in the hotel industry. The data were analyzed with Partial Least Squares Structural Equation Modeling (PLS-SEM) using the SmartPLS version 3.0 application. The results indicate that management accounting practices had significant effects on customer satisfaction. The findings provide a better understanding of the antecedent role of management accounting practices and their influence on hotels' customer satisfaction. Limitations and contributions are also discussed to justify the significance of this research.
Introduction
The hotel industry is one of the most dynamic sectors in the world. This sector places its customers as the heartbeat of its operations to deliver the best memorable experience. However, with the changing of times and demands of the new trends in travelling, hotels need to keep up and stay relevant despite the fierce competition in the industry.
Hotels manage their business in a competitive environment that demands a robust managerial approach considering both the philosophical and technical aspects of the business process. In running hotel operations, many traditional approaches and conventional concepts have been replaced with technology-based platforms. Thus, to be successful in the market, it is not sufficient to focus on attracting new customers only. Managers must also concentrate on retaining existing customers and implementing effective practices to improve customer satisfaction and loyalty. In the hotel industry, customer satisfaction hinges primarily on the quality of service. Hence, exploring the influence of Management Accounting Practices (MAPs) on customer satisfaction in hotel selection is indispensable.
Alternatively, management accounting practices are introduced to companies because they can provide relevant and useful information to hotel managers, especially for maintaining a company's sustainability in the competitive global market (Sunarni, 2013). Moreover, Pavlatos and Kostakis (2015) argued that the new economic environment, mainly triggered by the global economic crisis, imposed the need to adapt MAPs to improve an organization's performance and profitability (Sunarni, 2013) and to address the dynamics of the market. Furthermore, Abdel-Kader and McLellan (2013) claimed that management accounting practices have a significant effect on the performance of an organization. The adoption of management accounting practices is believed to cover different aspects of the organization, such as strategic planning, resources, cost control, and operational activities, and has become the dominant approach to managing different aspects of a firm's performance.
Hence, Management Accounting (MA) is one of the operational aspects that receives more attention in handling the new economic environment. In service industries such as hotel management, less tangible operational elements, such as overhead and labor components, are more dominant, making MA's managerial potential more appealing than in other industries. Management accounting concerns the provision of information within the organization to managers at various organizational levels so they can make better decisions and improve operational efficiency and effectiveness (Drury, 2015).
Moreover, research on contextual factors as antecedents of the uptake of MAPs has been scarce (Pavlatos & Paggios, 2009), especially in the Malaysian context. Besides, the influence of management accounting practices on customer satisfaction has been overlooked by many. In particular, research on how the adoption of management accounting practices translates into guest satisfaction, and in turn customer loyalty, is pivotal to the success of the hospitality business.
In sum, recent developments in cost accounting have allowed for a stronger focus on customer satisfaction, but few studies concentrate explicitly on the relationship between cost and customer satisfaction. Previous literature investigating the relationship between costs and clients (McNair, Polutnik & Silvi, 2001; Van Raaij, Vernooij & Triest, 2003) has focused only on industrial companies, not service industries. Nevertheless, the relationship between costs and customer satisfaction is far more important in service industries because customer satisfaction is determined by customer behavior (Krakhmal, 2006). Therefore, practitioners and researchers are curious about the attributes of MAPs among hoteliers and the causes and consequences of their adoption. Thus, this paper presents the antecedents of MAPs on customer satisfaction in the hotel industry.
Literature Review
Management Accounting Practices
Management Accounting is concerned with providing information to people within the organization to help them make better decisions and improve the efficiency and effectiveness of existing business operations (Drury, 2015). Hilton and Platt (2011) assert that MA is the process of identifying, measuring, analyzing, interpreting, and communicating information in pursuit of organizational goals. Besides, Macinati and Anessi-Pessina (2014) defined MA as a collection of cost-related managerial practices such as budgeting or product costing.
One of the crucial issues in management accounting is the adoption of management accounting practices in the organization. Islam and Kantor (2005) defined MAPs as practices that make use of existing techniques and tools to assist management accountants in providing management accounting information to managers for their managerial functions. In that sense, management accounting can be seen as a highly complex activity in a strategic context and as a critical part of an organization's operations, reflecting a broader functionality than the 'bean-counting' and 'back-door function' labels attached to its traditional roles.
Thus, management accounting refers to transaction processing that gathers and aggregates data in a meaningful manner. Changes in the business environment affect how businesses should be operated, traded, and managed. These changes indirectly affect management accounting's function and task since management accountants have traditionally provided information that facilitates and supports effective and efficient operations and management. Numerous factors have driven business managers to seek new information with consequences for the roles of management accountants. The driving forces to the changing roles of management accountants are globalization, technology, accounting scandals, and corporate trends.
A study by Horngren (1995) found that the focus of cost management should be on decisions and on the various cost management techniques, systems, and measurements that spur and help managers to make wiser economic decisions. Burns and Scapens (2000) argued that the competitive environment, primarily global competition, was the factor most frequently cited as fostering change in management accounting. However, past studies have also presented criticisms of management accounting. Kaplan (1986) argued that management accounting lagged in its development, and Kaplan and Johnson (1987) pointed out that management accounting had failed to innovate. Moreover, Noordin, Zainuddin, Fuad, and Mail (2014) added that traditional management accounting is seen as 'slow' and 'ill-defined', finding that traditional management information was less relevant and too short-termist to determine the direction of an organization.
Therefore, over the past three decades, there has been a tremendous increase in the number of innovative management accounting practices developed across a wide range of industries (Pavlatos, 2014; Abdel-Kader & Luther, 2008). Thus, terms such as 'strategic management accounting' and 'advanced management accounting' have become more popular (Nordin et al., 2014). Additionally, contemporary management accounting practices generally refer to advanced management accounting techniques (Chenhall & Langfield-Smith, 1998).
Notably, many studies in the literature have characterized traditional management accounting as a static and limited organizational process. Some claimed that it did not consider the organization's strategic needs, neglecting non-monetary information and leaving it out of decision making. Even so, traditional management accounting techniques were predominantly adopted in most parts of the world for years (Emily et al., 2007); from the late 1980s, management accounting techniques started to take up more strategic roles (Simmonds, 1981). This phenomenon is consistent with the development of strategic thinking in business, which took place over roughly the same period. As proclaimed by many, this move is in line with the increasing complexity of how business is conducted, primarily because of global competition and progress in management accounting theory (Zarowin, 1997; Siegel & Sorensen, 1999; Burns & Scapens, 2000). They advocate that it enables MA to meet the changing needs of managers facing contemporary challenges and competitive environments (Allot, 2000).
To some researchers, the failure of management accounting to deliver its intended outcome was due not to environmental factors or weaknesses in the techniques but to practitioners' failure to utilize management accounting tools appropriately (Nandan, 2010). Researchers such as Drury (1992) argue that traditional management accounting systems have failed to report information on the elements that form competitive advantage, such as quality, reliability, lead times, flexibility, and customer satisfaction, even though these represent the strategic goals of world-class manufacturing companies.
Waweru, Houge, and Uliana (2005) stated that the effects of the market economy, intensified competition, globalization, scarce resources, changes in the business environment, and accelerating technological change have driven companies to realize the need for objective information and more detailed cost information. In today's business, managers must continuously ensure that their companies can sustain themselves in the global market; a company must be able to compete both nationally and internationally to survive.
Managing today's enterprises has become more challenging due to rapid economic change. Within a rapidly changing global economy, managers are required to be more responsive to changes in the market environment. Such changes, accelerated by factors such as technological advancement, information and communication technology, political turmoil, economic crises, the intensity of market competition, and cultural change, have altered the way businesses are carried out and enterprises managed. Gary, Ghosh, Hudick, and Nowacki (2003) stressed that increasing market competition and uncertainty in the business environment have put significant pressure on managers to make timely and informed business decisions. Managers therefore need to be equipped with multiple skills and should seek rigorous support from every aspect of organizational management to deal with the changes in today's economy.
Consequently, several studies have been conducted in management accounting, especially on its contribution to enhancing customer satisfaction. Although there is plenty of research in management accounting, most of it focuses on traditional ways of cost control rather than on more advanced techniques for measuring customer satisfaction objectives. The role of management accounting in the hospitality sector is considered one of the under-researched areas (Pellinen, 2003). Sevim and Korkmaz (2014) also agreed that only a small number of studies have investigated the use of management accounting systems among hotels.
Therefore, management accounting practices have progressed to address the need to respond to the changes in the industrial and sectoral environment. The theoretical and practical aspects of MAP have been developed to equip the needs of organizational management in making timely and accurate decision making in enhancing customer satisfaction. As such, this study investigates customer satisfaction through management accounting practices to provide a better understanding of the form of current practices and the related antecedent factors associated with the hotel industry in Malaysia.
Customer Satisfaction
One of the biggest challenges for the hotel industry is to sustain customer satisfaction. Customer requirements for quality products and services in the tourism industry have become increasingly evident to professionals (Lam & Zhang, 1999; Yen & Su, 2004). Many have claimed that customer satisfaction is the starting point for defining business objectives because it affects business profitability (Anderson, Fornell, & Lehmann, 1994; Yeung, Ging & Ennew, 2002; Luo & Homburg, 2007). In this context, positive relationships exist between higher customer commitment and increases in profit and return rates.
In order to achieve customer satisfaction, it is vital to recognize and anticipate customers' needs and to be able to satisfy them. Enterprises that can rapidly understand and fulfil customers' needs make greater profits than those that fail to do so (Barsky and Nash, 2003). Moreover, the cost of attracting new customers is higher than the cost of retaining existing ones. Therefore, to be successful, managers must concentrate on retaining existing customers by implementing effective customer satisfaction and loyalty policies.
Indeed, hotels cannot survive without satisfied customers (Chi & Gursoy, 2009) because customers are the vital driver of a hotel property's financial performance (Kim et al., 2013). Moreover, the studies by McNair et al. (2001) and Van Raaij et al. (2003) that investigated the relationship between cost and clients focused only on industrial companies, not service industries. In fact, the relationship between costs and customer satisfaction is far more important in service industries, since the costs of providing the service are usually determined by customer behavior (Krakhmal, 2006).
While the literature on management accounting practices notes the importance of customer satisfaction, this study fills a gap in a critical and under-researched area by investigating the relationship between the adoption of management accounting practices and customer satisfaction. This research may help promote greater diffusion of management accounting practices in the Malaysian hotel industry.
Antecedent Contingency Factors
Contingency-based research has a long tradition in the field of management accounting (Chapman, 1997; Chenhall, 2003; Gerding & Greve, 2004). This theory suggests that the particular features of an appropriate accounting system depend on the specific circumstances in which an organization finds itself (Otley, 1980). Additionally, contingency theory supports the idea that no universally appropriate accounting system applies equally to all organizations in all circumstances (Otley, 1980; Emmanuel et al., 1990).
Furthermore, contingency theory explains how an appropriate accounting information system is designed to match the organization structure, technology, strategy, and environment. Hopwood (1976) had pointed out that the design of a management accounting system and the design of an organizational structure are inseparable and interdependent, although this vital observation has been neglected over the years.
Thus, the contingency-based approach assumes that management accounting systems are adopted to help managers achieve desired company outcomes or goals. As noted above, contingency theory explains how an appropriate accounting information system is designed to match the organization's structure, technology, strategy, and environment. Organizations are therefore presumed to operate as open systems while also attending to their goals and to how they respond to external and internal pressures. Haldma and Laats (2002) categorized the contingencies into two general groups: external and internal factors. External factors refer to features of the external environment at the level of business and accounting. The major external factors examined at the company level in management accounting and control (including cost accounting) research are the external environment (Emmanuel et al., 1990; Khandwalla, 1977; Chapman, 1997; Hartmann, 2000) and national culture.
Therefore, to explain the diversity of management accounting practices, organizations' adoption can be examined through contingency theory, which demonstrates how specific aspects of an accounting system are associated with various contextual variables such as size, competition, and cost structure (Emmanuel et al., 1990).
Based on the discussion above, three antecedent contingency factors were examined in this study, by adopting the underlying theory to explain the current research. The three antecedent contingency factors are the intensity of market competition (IMC), technology (TECH), and hotel size (HS). These factors are believed to contribute to the adoption of management accounting practices in the Malaysian hotel industry.
Framework And Hypotheses Development
The theoretical framework shown below was developed based on the discussed literature reviews and adopted from the previous study. The research framework is shown in Figure 1, followed by four hypotheses of the study.
Research Methodology
A non-probability purposive sampling technique was adopted to verify that the collected data were valid and to ensure that the sample characteristics corresponded to the nature of the study. A questionnaire was used as the instrument to gather relevant information from the respondents. The scaling technique required respondents to indicate their degree of agreement or disagreement with each series of statements, measured on a 5-point Likert scale. The target population for this study was pooled from hotels listed under the Ministry of Tourism, Arts, and Culture of Malaysia (hereafter MOTAC), regardless of their star rating. The MOTAC directory was used as the sampling frame because it is important to ensure that the sample adequately represents the intended target population to which the hypothesis-testing results are generalized (Van der Stede & Merchant, 2007).
Sample size estimation was determined using G*Power 3.0 analysis (Faul et al., 2007) with an effect size f² of 0.15, an α error probability of 0.05, and a power of 0.95 with three tested predictors, which yielded a minimum of 119 respondents for this study. Data were collected using a mail and field survey. Three hundred and eighty-four (384) questionnaires were sent out, and within six months 217 hotels had replied, a 56.5 percent response rate. Of the 217 questionnaires returned, 17 were rejected, leaving 200 usable for the analysis (a 52 percent usable response rate). According to Smith (2003), a response rate above 25 percent in accounting research is considered sufficient for statistical analysis and inference. Figure 1 illustrates the research framework containing the three investigated antecedent variables. The variables were examined using multiple items (Hayduk & Littvay, 2012), and the data were analyzed using SmartPLS 3.3.2 (Ringle et al., 2015) to assess the hypotheses. Table 1 presents the profile of the participating hotels. As can be seen from Table 1, the majority of the respondents are 3-star hotels, privately managed, and city hotels. Half of the participating hotels are from the west region (Sabah, Sarawak, and Federal Territory of Labuan). Moreover, the majority of the participating hotels have between 1 and 100 beds and fewer than 50 rooms. Hotel size was measured using the number of rooms, following Kasimu, Zaiton, and Hassan (2012), in which hotels with more than 100 rooms are considered large and those with fewer rooms are considered small. The current study reveals that 62.5 percent of the hotels are small, while the remaining 75 hotels (37.5 percent) are large. Table 2 demonstrates the findings of the construct reliability (CR) and convergent validity testing.
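The a-priori sample-size figure quoted above can be reproduced with a short power computation. The sketch below is an illustration, not the authors' code: it searches for the smallest sample size whose overall F-test power reaches 0.95 given f² = 0.15, α = 0.05, and three predictors, using SciPy's noncentral F distribution with noncentrality λ = f²·n, as G*Power does for linear multiple regression.

```python
# Reproduce the a-priori sample-size search for a regression F test
# (f^2 = 0.15, alpha = 0.05, power = 0.95, 3 predictors).
from scipy import stats

def power_f_test(n, n_predictors=3, f2=0.15, alpha=0.05):
    """Power of the overall F test with noncentrality lambda = f^2 * n."""
    df1 = n_predictors
    df2 = n - n_predictors - 1
    f_crit = stats.f.ppf(1 - alpha, df1, df2)       # central-F critical value
    return 1 - stats.ncf.cdf(f_crit, df1, df2, f2 * n)

def min_sample_size(target_power=0.95, n_predictors=3, f2=0.15, alpha=0.05):
    n = n_predictors + 2                            # smallest n with df2 >= 1
    while power_f_test(n, n_predictors, f2, alpha) < target_power:
        n += 1
    return n

n_min = min_sample_size()   # G*Power reports a minimum of 119 for these inputs
```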
The results validate that the constructs (the variables under investigation) have high internal consistency (Roldán & Sánchez-Franco, 2012) and sufficient average variance extracted (AVE) to corroborate convergent validity (Hair et al., 2017). All indicators measuring each construct achieve loadings higher than the threshold value of 0.708, as advocated by Hair et al. (2017).
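For readers unfamiliar with these criteria, both quantities can be computed directly from standardized indicator loadings. The loadings below are invented for demonstration (they are not the study's values): CR is (Σλ)² / ((Σλ)² + Σ(1 − λ²)) and AVE is the mean squared loading.

```python
# Composite reliability and AVE from standardized outer loadings (illustrative).

def composite_reliability(loadings):
    num = sum(loadings) ** 2
    err = sum(1 - l ** 2 for l in loadings)   # indicator error variances
    return num / (num + err)

def ave(loadings):
    return sum(l ** 2 for l in loadings) / len(loadings)

loadings = [0.82, 0.79, 0.88, 0.91]           # hypothetical indicator loadings
cr = composite_reliability(loadings)          # should exceed the 0.7 threshold
avg_var = ave(loadings)                       # should exceed the 0.5 threshold
```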
Measurement Model
The composite reliability (CR) values of 0.973 (CS), 1.00 (HS), 0.934 (IMC), 0.972 (MAPs), and 0.823 (TECH) imply that these constructs possess high internal consistency. In a similar vein, these constructs also demonstrate convergent validity, with the average variance extracted (AVE) value for each construct higher than the threshold of 0.5, showing that the indicators explain more than 50% of the constructs' variances. (Items BUE3, BUR3, and BUR4 were deleted due to poor loadings.) Table 3 displays the HTMT criterion used to evaluate discriminant validity (Ringle, Wende and Will, 2020). In assessing discriminant validity, this study applies Henseler's (2015) heterotrait-monotrait ratio of correlations (HTMT) criterion. The results show that discriminant validity is well established at the HTMT.85 level (Diamantopoulos & Siguaw, 2006), implying that discriminant validity is of no concern. The findings indicate that it is appropriate to proceed with the structural model assessment to test the study's hypotheses, as there is no multicollinearity between items loaded on different constructs in the outer model. Table 4 demonstrates the assessment of the path coefficients, represented by beta values for each path relationship. A 5000-resample bootstrap was conducted to examine the hypotheses (Hair et al., 2017). The results for direct effects indicate that the intensity of market competition (IMC) and technology (TECH) have a positive influence on the uptake of MAPs. By contrast, hotel size (HS) showed no significant effect on the adoption of MAPs. The results indicate that three out of the four proposed relationships are significant.
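The HTMT criterion mentioned above is the ratio of the mean between-construct (heterotrait) indicator correlations to the geometric mean of the within-construct (monotrait) mean correlations; values below 0.85 indicate discriminant validity. A minimal sketch on simulated indicator data (not the study's data):

```python
import numpy as np

def htmt(X_i, X_j):
    """Heterotrait-monotrait ratio for two constructs' indicator blocks
    (rows = observations, columns = indicators of one construct)."""
    k_i, k_j = X_i.shape[1], X_j.shape[1]
    R = np.corrcoef(np.hstack([X_i, X_j]), rowvar=False)
    hetero = R[:k_i, k_i:].mean()                          # between-construct
    mono_i = R[:k_i, :k_i][np.triu_indices(k_i, 1)].mean() # within construct i
    mono_j = R[k_i:, k_i:][np.triu_indices(k_j, 1)].mean() # within construct j
    return hetero / np.sqrt(mono_i * mono_j)

# Simulated data: two distinct latent factors, three noisy indicators each,
# plus a third block that shares construct A's factor.
rng = np.random.default_rng(0)
f1, f2 = rng.normal(size=(2, 500))
A = np.column_stack([f1 + 0.5 * rng.normal(size=500) for _ in range(3)])
B = np.column_stack([f2 + 0.5 * rng.normal(size=500) for _ in range(3)])
C = np.column_stack([f1 + 0.5 * rng.normal(size=500) for _ in range(3)])
```

With these data, htmt(A, B) stays well below 0.85 (distinct constructs), whereas htmt(A, C) approaches 1 because both blocks measure the same factor.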
Specifically, the study found support for H1 (IMC → MAPs, β = 0.380, p < 0.001, LLCI = 0.232, ULCI = 0.509), H2 (TECH → MAPs, β = 0.335, p < 0.001, LLCI = 0.183, ULCI = 0.461), and H4 (MAPs → CS, β = 0.487, p < 0.001, LLCI = 0.366, ULCI = 0.596). These findings support past studies by Nair (2017) on the relationship between IMC and MAPs, Azudin and Mansor (2018) on the relationship between TECH and MAPs, and Allot (2000) on the relationship between MAPs and CS, respectively. Nonetheless, this study did not find support for H3. This finding corroborates Duncan and Malini (2016), who also did not find a significant relationship between HS and MAPs. Table 5 displays the quality of the model. H1 (IMC) and H2 (TECH) were shown to have a moderate effect size f² on MAPs (0.201 and 0.153, respectively), while H4 (MAPs) was found to have a substantial effect size f² on customer satisfaction (CS) (0.310) (Cohen, 1988). The coefficient of determination R², which indicates whether the intensity of market competition (IMC) and technology (TECH) can explain management accounting practices, shows a moderate effect (Chin, 1998): the R² value of 0.237 suggests that the antecedents (IMC and TECH) explain MAPs moderately. Meanwhile, the R² value for CS is 0.384, signifying that MAPs explain CS substantially.
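Cohen's f² reported above measures how much R² drops when a predictor is removed, scaled by the unexplained variance of the full model. In the sketch below the reduced-model R² of 0.084 is a back-computed hypothetical value, not one reported in the study; combined with the full-model R² of 0.237 it reproduces an f² near the 0.201 reported for IMC.

```python
# f^2 = (R2_full - R2_reduced) / (1 - R2_full)

def f_squared(r2_full, r2_reduced):
    return (r2_full - r2_reduced) / (1 - r2_full)

# Hypothetical: dropping IMC lowers R^2 for MAPs from 0.237 to 0.084
f2 = f_squared(0.237, 0.084)   # ~0.20, a moderate effect size
```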
Structural Model Assessment
Furthermore, multicollinearity between indicators was also assessed. All indicators satisfy the VIF criterion, being consistently below the threshold values of 5.0 (Hair et al., 2014) and 3.3 (Diamantopoulos & Siguaw, 2006). Therefore, it can be concluded that collinearity does not reach critical levels in any of the variables, and there is no issue in estimating the PLS path model. The predictive relevance values (Q², obtained using the blindfolding procedure) of all exogenous (independent) variables towards the endogenous (dependent) variables were larger than 0, indicating that the antecedent variables (IMC, TECH, HS) can predict MAPs and CS (Hair et al., 2017).
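The VIF screening described above can be illustrated with a small computation: each indicator is regressed on the remaining indicators, and VIF = 1/(1 − R²) flags collinearity when it exceeds 5.0 (or the stricter 3.3). A sketch on simulated data:

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of X (rows = observations)."""
    X = np.asarray(X, dtype=float)
    factors = []
    for j in range(X.shape[1]):
        y, others = X[:, j], np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(len(y)), others])  # intercept + others
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        r2 = 1 - ((y - A @ beta) ** 2).sum() / ((y - y.mean()) ** 2).sum()
        factors.append(1.0 / (1.0 - r2))
    return factors

rng = np.random.default_rng(1)
clean = rng.normal(size=(300, 3))         # independent predictors: VIF near 1
collinear = np.column_stack(              # third column nearly copies the first
    [clean[:, :2], clean[:, 0] + 0.01 * rng.normal(size=300)])
```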
Conclusion
This study highlights the roles of antecedent contingency factors in the use of Management Accounting Practices (MAPs), which are shown to have a significant effect on hotels' customer satisfaction. The current research reveals that the intensity of market competition and technology have a moderate effect on the adoption of Management Accounting Practices. Surprisingly, hotel size shows an insignificant relationship with the adoption of management accounting practices. This suggests that the hotel industry in Malaysia should adopt management accounting practices regardless of hotel scale (large or small).
The analysis involved survey data from 200 hotel accounting staff in Malaysia, which revealed that the adoption of management accounting practices is highly significant in influencing hotels' customer satisfaction. This is consistent with Mubiri's (2016) finding that benchmarking, one of the MAPs, is among hotels' key techniques for improving customer loyalty. The hierarchical cost level helps managers to consider cost causation and improve decision making, and activity-based costing, another MAP, helps managers understand which customers are profitable and which do not contribute to profitability (Dalci et al., 2010).
In addition, by using management accounting practices, a hotel can monitor customer demands and the actions other hotels have taken on those demands. Estimates made under management accounting practices may show that certain customer groups are unprofitable. Barsky and Nash (2003) have shown that businesses that readily understand and meet the needs of consumers make higher profits than those that struggle to understand and satisfy their customers.
The main limitation of this study concerns the survey questions related to hotel ownership and star rating. Each hotel has its own privacy and confidentiality policy: hotel information, for instance annual sales, must be kept strictly private and confidential by employees and may not be reported except where mandatory reporting is required. Some hotels adhere to the same policy because of common ownership, resulting in secrecy about quality assurance and performance.
This study contributes to the understanding of management accounting practices in service industries by developing a model that establishes a direct relationship between the antecedents of management accounting practices and the influence of MAPs in generating customer satisfaction. By assessing this model in the Malaysian hotel industry, it responds to calls for more studies on the hospitality business (Collini, 2006; Krakhmal, 2006). The findings may also inform the Malaysian Association of Hotels (MAH) about MAPs. Moreover, hotel managers can introduce various advertising schemes or promotions to draw more profitable customers during different seasons: a hotel can attract low-profit customer groups during the low season and high-income groups at peak times (Dalci et al., 2010).
Furthermore, recent developments in cost accounting have brought a stronger focus on the customer, but studies concentrating on the relationship between cost and customer satisfaction have been scarce, particularly in the Malaysian context. In fact, this relationship is far more important in service industries, because the costs of providing the service are determined by customer behavior (Krakhmal, 2006). This issue is therefore suggested for future research.
One problem worth further investigation is to examine other antecedents of MAPs, such as booking and payment systems in the hotel industry; MAPs as a mediating influence on the firm performance of hotels are also recommended for future research. Given the findings and discussions, further investigation applying multi-group analysis is also recommended to establish differences in effects between groups, such as hotel type, hotel size, or regions.
Nevertheless, the hotel industry will continue to be affected by disruptors such as Airbnb, Home Cation, and homestays, besides fierce competition among hotels themselves due to the influx of new hotels in Malaysia. At the same time, competition among hospitality industry players is becoming more intense, creative, and innovative in attracting new and existing travelers. Therefore, the hotel industry needs to identify and utilize its success factors and work to improve its business model with effective management accounting practices, to ensure the retention and loyalty of existing customers while also attracting new ones, and to stand out from the endless fierce competition. | 2021-05-13T00:03:34.430Z | 2020-12-29T00:00:00.000 | {
"year": 2020,
"sha1": "7002de8294a10a070930790008a82a25564648f5",
"oa_license": "CCBY",
"oa_url": "https://hrmars.com/papers_submitted/7752/customer-satisfaction-through-management-accounting-practices-in-the-hotel-industry.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "db84eb74589d90529ba61c9907240faf67ee9b60",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Business"
]
} |
258505317 | pes2o/s2orc | v3-fos-license | How to Decide Oxygen Therapy in Childhood Carbon Monoxide Poisoning?
Objective: Carbon monoxide poisoning is an important cause of morbidity and mortality all over the world. In our study, it was aimed to determine the clinical and laboratory parameters that may be effective in deciding the need for hyperbaric oxygen therapy in the management of cases. Materials and Methods: From January 2012 to the end of December 2019, 83 patients who applied to a university hospital pediatric emergency department in İstanbul with the diagnosis of carbon monoxide poisoning were included. Demographic characteristics, carbon monoxide source, exposure duration, treatment approach, physical examination findings, Glasgow Coma Score, laboratory results, electrocardiogram, cranial imaging, and chest x-ray were evaluated from the records. Results: The median age of the patients was 56 (37.0-100.0) months and 48 (57.8%) of them were male. The median time of exposure to carbon monoxide was 5.0 (0.5-3.0) hours in those who received hyperbaric oxygen therapy and was significantly higher than those who received normobaric oxygen therapy (P < .001). Myocardial ischemia, chest pain, pulmonary edema, and renal failure were not detected in any of the cases. The median lactate level was detected as 1.5 (1.0-2.15) mmol/L in those who received normobaric oxygen therapy and 3.7 (3.17-4.62) mmol/L in those who received hyperbaric oxygen therapy, and the difference between them was statistically significant (P < .001). Conclusions: A guideline containing precise clinical and laboratory parameters for hyperbaric oxygen therapy in children has not been developed yet. In our study, carbon monoxide exposure duration, carboxyhemoglobin levels, neurological symptoms, and lactate levels were found to be guiding parameters in determining the need for hyperbaric oxygen therapy.
INTRODUCTION
Carbon monoxide (CO) poisoning is an important cause of morbidity and mortality all over the world. 1 Epidemiological data are necessary to develop organized efforts to reduce the cumulative effect of CO poisoning.
Evaluation of clinical findings and laboratory results in order to make a diagnosis and provide appropriate treatment is still controversial and unclear. 2 It is very important to decide immediately on the appropriate treatment by evaluating current conditions in order to prevent the dramatic consequences of CO poisoning, which may be severe enough to cause multiple deaths. Uysalol et al.
Carbon monoxide (CO) can cause cellular hypoxia followed by oxidative stress and inflammation, and neurological, cerebrovascular, or cardiovascular disorders, including encephalopathy, ischemia, and peripheral nerve damage. 3 The first symptoms are usually seen in the neurological and cardiovascular systems because of these systems' high oxygen requirements. Although the risk is higher in patients with cardiac disease, tachycardia, cardiac enzyme elevation, myocardial damage, and arrhythmias can often be seen after exposure to CO. 4 On the other hand, in children, unlike adults, signs and symptoms differ, they do not correlate with laboratory values, and the first treatment of choice is not clearly known.
The basis of treatment for CO poisoning is to remove patients from the source and to ensure adequate oxygenation. 5 Normobaric oxygen therapy (NBOT) can be achieved by delivering 100% oxygen at a rate of 10-15 L/min with a non-rebreather reservoir mask. It is recommended to continue the treatment until the symptoms regress or the blood carboxyhemoglobin (COHb) level decreases below 5% in patients in good general condition, with no unconsciousness and no additional complaints or only mild symptoms. 6 Hyperbaric oxygen therapy (HBOT) is a medical treatment in which patients breathe 100% oxygen intermittently in a hyperbaric chamber at a pressure higher than the atmospheric pressure at sea level (1 ATA = 1 atmosphere absolute = 1 bar = 760 mmHg). Hyperbaric oxygen therapy increases dissolved oxygen in the blood regardless of the oxygen carried by hemoglobin, thus decreasing tissue hypoxia and leading to regression of intoxication symptoms. Although studies comparing HBOT and NBOT in the literature show no clear difference in results, some indicate that HBOT reduces the risk of cognitive sequelae. 3,7 Retrospective observational evidence shows that HBOT is associated with reduced short- and long-term mortality in cases of severe CO poisoning, especially in those with acute respiratory failure and in patients under 20 years of age. The indication for HBOT in CO poisoning is determined according to the clinical signs, symptoms, and laboratory findings of the case. Although controversial, HBOT is recommended for neurological symptoms such as syncope, coma, seizure, or mental status change; resistant metabolic acidosis; pregnant women with a COHb level above 15%; a history of ischemic heart disease with a COHb level above 20%; patients with a COHb level above 40%; signs of cardiac ischemia or arrhythmia; and proven end-organ damage. 8,9 The treatment of CO poisoning in children is a race against time, and the main discussion is deciding whether and when to use HBOT or NBOT.
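The criteria listed above lend themselves to a simple screening checklist. The function below is only an illustrative restatement of those published criteria, not a validated clinical decision rule; the parameter names are our own, and real triage requires full clinical judgement.

```python
# Illustrative restatement of the HBOT criteria cited in the text.
# Not a validated decision rule; thresholds mirror the narrative above.

def suggests_hbot(cohb_pct, neuro_symptoms=False, resistant_acidosis=False,
                  pregnant=False, ischemic_heart_disease=False,
                  cardiac_ischemia_or_arrhythmia=False, end_organ_damage=False):
    if neuro_symptoms or resistant_acidosis:          # syncope, coma, seizure, ...
        return True
    if cardiac_ischemia_or_arrhythmia or end_organ_damage:
        return True
    if pregnant and cohb_pct > 15:                    # pregnancy threshold
        return True
    if ischemic_heart_disease and cohb_pct > 20:      # cardiac-history threshold
        return True
    return cohb_pct > 40                              # general COHb threshold
```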
Patients in need of HBOT should be carefully selected and closely observed within the first few hours. Late signs and symptoms such as neurological sequelae can be prevented with early diagnosis and rapid decision for HBOT administration. 10 Although the number of HBOT centers is relatively high in our country, it is limited in the world, and transportation difficulties may be experienced in patient referral. For this reason, it is important to determine the appropriate predictor factors in pediatric population for HBOT need and evaluate the parameters that can help in making a referral decision.
This study was designed to determine the clinical and laboratory parameters that can be effective in deciding on the need for HBOT during the management of cases admitted with suspected CO poisoning to the pediatric emergency department of a university hospital. In addition, this study aimed to describe demographic, clinical, and laboratory characteristics, to examine systemic involvement, to determine the degree and prognosis of intoxication, and to investigate the effects of these data on the treatment process.
Patients
From January 2012 to the end of December 2019, 83 patients admitted to a university hospital pediatric emergency department in İstanbul with a diagnosis of CO poisoning were included in our study. The data of the patients were extracted and analyzed retrospectively from the patient records in the division's archive. Patients with incomplete records were excluded from the study. Demographic characteristics at the time of admission, CO source, duration of exposure to carbon monoxide, treatment approach, physical examination findings, Glasgow Coma Score (GCS), laboratory results, electrocardiogram (ECG), cranial imaging, and chest x-ray were evaluated from the records.
Hyperbaric Oxygen Therapy Method
Hyperbaric oxygen therapy was applied in a multiplace hyperbaric chamber at a treatment pressure of 2.4 ATA, in sessions consisting of three 25-minute oxygen periods separated by 5-minute air breaks, with a total session duration of 120 minutes including compression and decompression time. The total number of HBOT sessions varied according to the clinical condition of each case.
Statistical Analysis
Statistical Package for the Social Sciences (IBM Corp.; Armonk, NY, USA) Windows 21.0 was used for the analysis of the data. The variables were investigated using visual (histograms, probability plots) and analytical methods (Kolmogorov-Smirnov/Shapiro-Wilk test) to determine whether they were normally distributed. Categorical variables were given as numbers (n) and percentages (%) and were evaluated using the Pearson chi-square, Fisher, or Freeman-Halton test. The Mann-Whitney U test was performed to assess the significance of pairwise differences. Correlation coefficients and their significance were calculated using the Spearman test. Diagnostic decision-making variables in predicting treatment were analyzed using receiver operating characteristic (ROC) curve analysis. Logistic regression analysis was used to determine independent predictors. Hosmer-Lemeshow goodness-of-fit statistics were used to assess model fit. A 5% type-I error level was used to accept a statistically significant predictive value of the test variables.
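As a rough illustration of the ROC analysis described above, the AUC of a single continuous predictor can be computed directly from rank statistics, since the AUC equals the Mann-Whitney U statistic divided by the product of the two group sizes. The sketch below uses invented lactate values, not the study's data:

```python
# Hedged sketch: AUC of one predictor (e.g., lactate vs. HBOT need)
# via the identity AUC = U / (n_pos * n_neg), where U is the
# Mann-Whitney U statistic. All values below are illustrative.

def roc_auc(scores_pos, scores_neg):
    """AUC = P(positive score > negative score), ties counted as 0.5."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Illustrative lactate values (mmol/L): HBOT group vs. NBOT group
hbot = [3.2, 3.7, 4.1, 4.6, 3.9]
nbot = [1.0, 1.5, 2.1, 3.5, 2.4]
print(roc_auc(hbot, nbot))  # → 0.96
```

An AUC of 1.0 would mean every HBOT-group value exceeds every NBOT-group value; 0.5 means no discrimination.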
Ethics Statement
This study was approved by the ethics committee of İstanbul University (approval date: 26.01.2018 and number 02) and was conducted according to the guidelines of the Declaration of Helsinki.
RESULTS
From January 2012 to the end of December 2019, 83 patients admitted to a university hospital pediatric emergency department in İstanbul with a diagnosis of CO poisoning were included in our study. The median age of the patients was 56 (37.0-100.0) months, and 48 (57.8%) of them were male. The CO source was coal stoves in 43.9% and gas in 46.3% of those who received NBOT, and coal stoves in 59.5% and gas in 16.7% of those who received HBOT, with no significant difference between the treatment groups. The median duration of exposure to CO was 5.0 (0.5-3.0) hours in those who received HBOT and was significantly higher than in those who received NBOT (P < .001). The median COHb level was 2.6 (1.2-10.75) in those who received NBOT and 28.95 (13.88-33.25) in those who received HBOT, and the difference between them was significant (P < .001). Demographic and clinical characteristics and laboratory data of the patients are shown in Table 1. Myocardial ischemia, chest pain, pulmonary edema, and renal failure were not detected in any of the cases. Chest x-ray was evaluated as normal in all patients. There was no significant difference between the NBOT and HBOT groups in terms of complaints of acute gastroenteritis, nausea, vomiting, and weakness. A statistically significant association was found between restlessness and HBOT (P = .012). The median lactate level was 1.5 (1.0-2.15) mmol/L in those who received NBOT and 3.7 (3.17-4.62) mmol/L in those who received HBOT, and the difference between them was statistically significant (P < .001).
In the presence of any of the Babinski sign, hyperactive deep tendon reflexes, or signs of meningeal irritation, the neurological examination was considered positive, and a significant relationship was found with HBOT (P = .004) (Figure 1).
The comparison between neurological and cardiological symptoms and treatment is shown in Table 3. Patients with symptoms such as blurred vision, syncope, seizures, altered consciousness, weakness, and confusion were significantly more likely to receive HBOT (P = .039, P = .001, P = .006, P = .001, P = .028, P = .022, respectively). In the comparison of headache and treatment, the patient group with headache more often received NBOT, and this difference was statistically significant (P = .041).
No patients had chest pain or myocardial ischemia among the cardiological symptoms. A significant relationship was found between hypotension and HBOT (P = .012).
Correlation analysis was performed between frequently used blood gas parameters and COHb levels (Table 4). Although the association between pH and COHb level was statistically significant, the correlation was very weak (r = 0.020, P < .001). Similarly, although the association between pCO2 and COHb level was significant, the correlation was weak (r = 0.236, P = .03). The correlation between lactate level and COHb level was both significant and very strong (r = 0.803, P < .001).
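For illustration, the Spearman coefficient reported in Table 4 can be reproduced on any pair of measurement series by ranking both series and taking the Pearson correlation of the ranks. The COHb/lactate pairs below are invented for the example, not the study's data:

```python
# Hedged sketch of the Spearman rank correlation used in Table 4,
# computed on illustrative (not the study's) COHb/lactate pairs.

def ranks(xs):
    """Average ranks (1-based); tied values share the mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of ranks i+1 .. j+1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

cohb    = [2.6, 10.8, 28.9, 13.9, 33.3, 5.0]  # % COHb (illustrative)
lactate = [1.0, 2.1, 3.2, 3.7, 4.6, 1.5]      # mmol/L (illustrative)
print(round(spearman(cohb, lactate), 3))  # → 0.943
```

Because only ranks are used, the coefficient is insensitive to the units or skew of the raw laboratory values.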
DISCUSSION
Carbon monoxide poisoning is an important cause of morbidity and mortality all over the world and has a significant place among poisoning cases in emergency admissions of children.
Since HBOT centers are limited worldwide, access difficulties can be encountered and patients may experience problems during referral. 10 Therefore, it is important to determine relevant clinical and laboratory predictor factors that can help in making the HBOT decision and to evaluate patients according to these parameters. Thus, it will be possible to identify, as soon as possible, the patients with CO poisoning admitted to the pediatric emergency department who need HBOT, taking clinical and laboratory predictive findings into account.
Carbon monoxide poisoning is more common in winter. The increased use of CO sources during this period, the inadequate quality of fuels used for heating, insufficient ventilation of heating systems, and deficiencies in maintenance are among the possible reasons. 2 In our study, coal stoves and gas were the main sources of poisoning, and there was no significant difference between CO source and treatment approach. Mendoza and Hampson 11 found motor vehicle exhaust fumes and coal to be the most common causes of poisoning. In our country, gas cylinders or gas-fired water heaters used in bathrooms and coal stoves are reported to be among the main causes of accidental CO poisoning. 12,13 In line with our results, raising awareness through periodic warnings and training should be emphasized in our country, where the use of wood and coal stoves is common.
Carbon monoxide may cause systemic effects that can lead to lactate production by mechanisms such as seizure, hyperventilation, and cardiac dysfunction. 14 In our study, CO exposure duration and COHb and lactate levels were significantly higher in the group that received HBOT. It was thought that as the CO exposure duration increased, COHb values rose, and that high COHb levels caused an increase in neurological symptoms and therefore in the need for HBOT. We found that the blood lactate level could provide more accurate information about the duration and degree of hypoxia, that lactate could be considered an indicator of severe intoxication, and that it could help in deciding on the need for HBOT. In evaluating the severity of intoxication, the duration of treatment, and patients' follow-up, the blood lactate level was more informative than the COHb level. Benaissa et al 15 reported that plasma lactate level was significantly associated with the initial severity of neurological impairment and the COHb level at presentation. Damlapinar et al 16 reported high lactate levels in most of their patients and no significant correlation between the patients' COHb levels and their clinical conditions; lactate levels were more significant than COHb levels with respect to loss of consciousness and convulsions, and they concluded that lactate levels may be important in evaluating the severity of intoxication and treatment. Other studies have reported no correlation between the initial degree of intoxication and the clinical outcome of patients. 9,14 Factors such as contact with normal atmospheric oxygen after leaving the source of poisoning, administration of 100% oxygen in the ambulance before hospital admission, and delay in admission after exposure may cause the initial COHb level to be measured lower than expected.
Sokal and Kralkowska 17 reported a poor correlation between COHb and lactate levels in their study and suggested that this could be explained by their different half-lives under oxygen treatment and by lactate formation mechanisms beyond the tissue hypoxia caused by COHb formation. When the results of our study are evaluated together with the literature, lactate level may be a more useful prognostic factor than COHb level and may be effective in determining the treatment process. Evaluating the lactate level together with the parameters specified among the indications for HBOT will be useful in identifying the patients who need this treatment.
Carbon monoxide poisoning can present with a diverse spectrum of neurological signs. Although some studies report no relationship between patients' clinical status at admission and COHb levels, 8,18 others report that the COHb level is related to the severity of clinical signs. 12,14 In our study, a significant association was found between HBOT and the presence of blurred vision, syncope, seizure, altered consciousness, weakness, confusion, restlessness, or a pathological neurological examination. Moon et al 14 evaluated the presence of neurological symptoms as an indicator of severe CO intoxication and reported significantly higher COHb levels in patients who presented with lethargy and confusion. Tissue hypoxia and metabolic acidosis developing as a result of prolonged exposure to CO were thought to cause cerebral ischemia leading to these clinical effects.
Keleş et al 9 reported a correlation between the severity of neurological symptoms and the COHb level. Some studies state that administering HBOT as soon as possible in patients presenting with moderate and severe CO poisoning may be beneficial in preventing neuropsychiatric sequelae. 7,19 In our study, a significant association was found between a low GCS and HBOT, and a low GCS was evaluated as one of the indicators of severe neurological impairment. Grieb et al 19 found a significant negative correlation between GCS at admission and the severity of intoxication and stated that GCS should be evaluated together with other parameters in determining the severity of poisoning. Serious neurological symptoms should be considered a sign of severe intoxication, and the HBOT decision should not be delayed. None of our patients had chest pain or myocardial ischemia among the cardiological symptoms, whereas a significant association was found between hypotension and HBOT. Carbon monoxide can cause vasodilation and hypotension through the activation of guanylate cyclase and the release of nitric oxide from platelets. The retrospective study of Huysal et al 20 reported that troponin levels increased significantly in patients with high COHb levels. Seçilmiş and Öztürk 12 reported no significant difference between the clinical condition of the patient, the need for intensive care, and the COHb level; they stated that as the duration of CO exposure increased, the COHb level, cardiotoxicity, and neurological symptoms increased.
The main limitation of our study is its small sample size. However, although this was a single-center study, our unit is a referral center that accepts these patients.
CONCLUSION
A guideline containing precise clinical and laboratory parameters for HBOT in children has not been developed yet. Although many studies agree on the use of HBOT in severe poisoning, there are differences in treatment of mild and moderate cases. However, considering the unpredictability of delayed neurological sequelae, it is also suggested that the use of HBOT only in severe cases is too limiting. In our study, the duration of CO exposure, COHb levels, neurological symptoms, and lactate levels were found to be guiding parameters in determining the need for HBOT. More research is needed to develop guidelines to determine which pediatric patients admitted with CO poisoning should be referred to an HBOT center.
Ethics Committee Approval: This study was approved by the ethics committee of İstanbul University (Approval date: 26.01.2018 and number 02).
Informed Consent: Informed consent was not required because of the retrospective nature of the study and the anonymized clinical data used in the analysis.
"year": 2023,
"sha1": "9aad8d404c2b8713c95dedebf698e542b113e6aa",
"oa_license": "CCBYNC",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "a80d68fc0f0bb93fd7cbfbb8b66e6e350a832971",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Urinary Biomarkers for the Noninvasive Detection of Gastric Cancer
Gastric cancer (GC) is associated with high morbidity and mortality rates. Thus, early diagnosis is important to improve disease prognosis. Endoscopic assessment represents the most reliable imaging method for GC diagnosis; however, it is semi-invasive and costly and heavily depends on the skills of the endoscopist, which limit its clinical applicability. Therefore, the search for new sensitive biomarkers for the early detection of GC using noninvasive sampling methods has attracted much attention among scientists. Urine is considered an ideal biofluid, as it is readily accessible, less complex, and more stable than plasma and serum. Over the years, substantial progress has been made in screening for potential urinary biomarkers for GC. This review explores the possible applications and limitations of urinary biomarkers in GC detection and diagnosis.
INTRODUCTION
Gastric cancer (GC) is a malignant tumor originating from the gastric mucosa and associated with high morbidity and mortality [1,2]. Surgical resection is still considered the best treatment approach for GC. However, patients with early-stage cancer are often asymptomatic and thus lose their chance to undergo surgery. Therefore, early diagnosis is crucial for improving clinical outcomes and prognosis [3,4]. Endoscopic assessment is the most reliable imaging method for GC diagnosis, which allows clinicians to collect tissue biopsies and perform endoscopic ultrasound to determine the depth of invasion (tumor or T stage). However, it is semi-invasive and costly and heavily depends on the skills of the endoscopist, which limits its clinical applicability [5]. Other common diagnostic approaches include magnetic resonance imaging, X-ray, and pepsinogen I and pepsinogen II testing. These approaches offer lower sensitivity and specificity and are costly. Thus, the search for novel noninvasive biomarkers, especially for early-stage GC, has become a hot topic among scientists.
Urine, an ideal biofluid, has gained increasing attention in biomarker discovery. Urine is a highly desirable biospecimen for biomarker analysis; it can be obtained more easily than plasma and serum [6,7]. The application of urinary biomarkers in cancers of the excretory or genitourinary system, such as bladder cancer, prostate cancer, and upper urinary tract urothelial carcinoma, has gradually matured, and some urinary biomarkers have already completed the confirmatory stages of clinical use [8,9]. Over the years, substantial progress has been made in screening for potential urinary biomarkers for GC, especially early-stage tumors. However, urine may be affected by age, sex, diet, hormonal status, and physical activity [10]. Therefore, the universal applicability of potential biomarkers requires further verification, and experimental protocols must be standardized. Given the different components present in urine, this paper summarizes the research field, possible applications, and limitations of urinary biomarkers for GC detection.
DETECTION TECHNIQUES OF BIOMARKERS IN THE URINE
Continuous improvements in urine testing technologies have enabled the identification of many substances in urine, especially low-abundance substances, thus further promoting the discovery of new biomarkers. Over the last two decades, urine RNomics, proteomics, and metabolomics have developed rapidly in parallel with advanced omics and medical tests [11]. Microarray technologies, quantitative real-time polymerase chain reaction (PCR), and next-generation RNA sequencing have prompted the discovery of many urinary microRNAs (miRNAs) in cancer [12]. Additionally, breakthroughs in analytical technologies have supported metabolic profiling, making it one of the most rapidly expanding disciplines in cancer research. Significant progress has been made in acquiring metabolomic data, sampling techniques, experimental techniques, and data characterization [13,14]. Furthermore, urinary metabolomics has been advanced by recent technological developments in mass spectrometry (MS), nuclear magnetic resonance (NMR), gas and liquid chromatography (LC), and capillary electrophoresis (CE), thus improving reproducibility and metabolome coverage [15]. Meanwhile, several techniques, including tandem MS (MS/MS), LC-MS, CE-MS, surface-enhanced laser desorption ionization MS, and array technology, have been implemented for proteomic analysis of urine and biomarker discovery [16]. Fig. 1 summarizes the applications of urine detection technologies for GC urinary biomarker research.
MICRORNAs IN URINE
miRNAs are a class of 21-28 nucleotide noncoding RNAs that mediate gene expression posttranscriptionally and are involved in carcinogenesis [17,18]. To date, a number of miRNAs have been discovered, some of which are candidate biomarkers for early diagnosis [19] and are highly stable in tissues and body fluids, including urine [20]. Moreover, studies have shown that urinary miRNAs remained unchanged even after seven cycles of freezing and thawing or incubation at room temperature for 72 hours [21]. Various technologies such as microarray, quantitative real-time PCR, and next-generation RNA sequencing have been widely used to analyze miRNA expression profiles in both biofluids and tissues [22][23][24]. Iwasaki et al. demonstrated higher levels of miR-6807-5p and miR-6856-5p in the urine of patients with GC than in control subjects. A combination of miR-6807-5p and miR-6856-5p achieved an area under the curve (AUC) of 0.748, suggesting that these miRNAs could be used to diagnose early-stage GC [25]. Another study showed that urinary miR-376c was also significantly increased in 20 patients with GC when compared with that of 11 healthy individuals, and it displayed 64% specificity and 60% sensitivity, with an AUC of 0.70
for GC diagnosis [26]. Moreover, Kao et al. [27] performed a quantitative stem-loop PCR assay of miR-21-5p in urinary samples from healthy individuals, preoperative patients, and postoperative patients with GC. Compared with healthy controls, patients with GC had significantly upregulated miR-21-5p, and urinary miR-21-5p levels showed a clear downward trend after tumor tissue resection. Interestingly, another study reported no difference in urinary miR-21-5p between patients with GC and healthy controls [25]. The different results may be explained as follows: 1) The sample sizes were different and could significantly affect the results. Therefore, large-scale multicenter studies are warranted to validate these biomarkers. 2) Cancer biomarkers vary across stages of disease progression, and studies involving patients at different stages may report different results. 3) GC is a multifactorial disease, and environmental and genetic factors may affect its etiology. There are differences in the incidence of GC among different regions and races. Whether or not biomarkers reflect disease status across diverse ethnic groups remains unknown. 4) Biomarkers may exhibit different expression levels in different subtypes.
In summary, all these data suggest that miRNA in urine may be a promising noninvasive diagnostic biomarker of the disease; however, their significance needs to be validated in further independent large-scale cohorts. Table 1 summarizes the literature on urinary miRNAs in GC, focusing on the main aspects of the studies presented (i.e., study design, biological function, and results).
OXIDATIVE DNA AND RNA DAMAGE MARKERS IN URINE

Roszkowski et al. [29] investigated the daily urinary excretion of 8-oxoGua and 8-oxodG in a large cohort of 222 patients with malignant cancer, including gastrointestinal cancer, and found that the urinary levels of 8-oxoGua and 8-oxodG were significantly higher in the GC group than in the healthy control group. Furthermore, Borrego et al. [32] confirmed that urinary 8-oxo-2′-deoxyguanosine (8-oxo-dG) levels were significantly elevated in patients with GC and progressively declined after gastrectomy. The latest research successfully quantified 8-OHdG and 8-OHG in urine using robust solid-phase extraction (SPE) combined with ultraperformance LC-MS/MS in 70 healthy individuals and 60 patients with GC and found that the concentrations of urinary 8-OHdG and 8-OHG were dramatically increased in patients with GC, with AUCs of 0.777 and 0.841, respectively [33]. Table 2 summarizes urinary DNA and RNA oxidative damage markers for GC.
ENDOGENOUS METABOLITES IN URINE
Metabolites are small substrates and products of metabolism, with masses below 2,000 Da, that drive essential cellular functions [34]. Metabolites represent the integrated outputs of the genome, transcriptome, and proteome. Moreover, they reflect the upstream input from various external factors, including the environment, diet, lifestyle, and drug exposure [35]. Metabolic alterations can be used to detect variations in the biology and morphology of cancers to guide clinical management decisions [15]. Urine is commonly used for metabolic profiling and clinical biomarker screening [36,37]. To date, various endogenous metabolites involved in multiple metabolic pathways have been detected in urine (Fig. 2). For example, metabolomics has been used to analyze urine for GC biomarkers [38]. Identification of a distinct urinary metabolomic profile of GC could provide an efficient, noninvasive diagnostic modality.
Several studies have examined urinary metabolites for GC detection. Amino acids, bile acids, and oxidative nucleic acid metabolites may be used as diagnostic biomarkers for GC. A previous study analyzed metabolites in 293 urine samples by gas chromatography coupled to mass spectrometry (GC-MS) and found that the urine levels of 10 amino acids (namely, valine, alanine, proline, tryptophan, isoleucine, serine, threonine, tyrosine, methionine, and glycine) were significantly higher in patients with GC and showed diagnostic ability with AUCs from 0.693 to 0.823 [39]. Moreover, Chan et al. detected increased urinary alanine concentrations in patients with GC when compared with those in healthy individuals. They also established a diagnostic model using alanine, 2-hydroxyisobutyrate (2-HIB), and 3-indoxylsulfate (3-IS) for GC, with an AUC of 0.95, a specificity of 80%, and a sensitivity of 95% for predicting GC [40]. A study comparing the concentrations of 44 metabolites in the urine of 50 patients with GC and 50 healthy individuals revealed that alanine, tyrosine, glycolate, glycine, methionine, phenylalanine, and arginine levels were significantly increased in patients with GC; moreover, the combination of alanine, acetate, 4-hydroxyphenylacetate, and phenylacetyl glycine showed high sensitivity and specificity (sensitivity: 86%, specificity: 92%) for GC prediction [41]. A further CE-MS metabolomics study found increased lactic acid, valine, leucine, arginine, and isoleucine levels in patients with GC when compared with control subjects. However, histidine, aspartate, citric acid, succinate, malic acid, methionine, and serine were markedly decreased in patients with GC [42]. Kwon et al. found that the levels of several urinary metabolites were
significantly different between healthy individuals and patients with GC, with AUCs ranging from 0.632 to 0.936. Furthermore, their early-stage GC diagnostic model exhibited a specificity of 97% and a sensitivity of 94.7%. They also found that urinary metabolomics had a higher diagnostic value than CEA, CA19-9, and CA72-4 levels. A more recent study demonstrated that the levels of D-serine (D-Ser) and D-isoleucine (D-Ile) were significantly higher in the GC group than in the healthy control group, while the levels of β-(pyrazol-1-yl)-L-alanine (L-PA) were lower in the GC group. Univariable analysis of age, L-PA, D-Ser, and D-Ile showed AUC values ranging from 0.760 to 0.895, while a multivariate model combining these indicators achieved an AUC of 0.977, showing great potential for diagnosing GC [38].
Lyu et al. [44] used an SPE column containing a covalent organic framework material coupled to LC-MS/MS to quantitatively analyze samples from patients with GC and healthy control subjects. They found that the levels of hyodeoxycholic acid, cholic acid, and chenodeoxycholic acid were significantly higher in patients with GC, while the glycochenodeoxycholic acid level in patients with GC was significantly lower than that in control subjects. These bile acids achieved favorable diagnostic performance, with AUCs of 0.854, 0.851, 0.753, and 0.769, respectively.

EXTRACELLULAR VESICLES (EVs) AND EXOSOMES IN URINE

EVs are membrane-bound vesicles that transfer bioactive cargo to recipient cells [45]. EVs secreted from cancer cells participate in fibrosis, angiogenesis, metastasis, and evasion of immune surveillance [46,47]. EVs can be found in various body fluids such as plasma, urine, breast milk, saliva, semen, lymphatic fluid, cerebrospinal fluid, sputum, amniotic fluid, and synovial fluid [48]. Urinary EVs appear to be particularly promising for the early diagnosis of GC. A prospective study performed metagenome analysis using body fluid samples (gastric juice, urine, and blood) to examine the distinct microbial composition of bacteria-derived EVs from patients with GC. Among the four sample types of prediction models, the model using urine samples showed the highest AUC of 0.823, with 67.7% sensitivity, 84.9% specificity, and 76.1% accuracy [49].
Exosomes are EVs of 30-150 nm in diameter that are present in almost all body fluids and contain miRNAs, mRNAs, lncRNAs, and proteins [50,51]. Exosomes can regulate the expression of target genes, signaling pathways, and cell transformation of recipient cells by mediating information transmission between tumor cells and the tumor microenvironment; they have become important mediators of tumorigenesis, tumor growth, angiogenesis, and metastasis [52] and have been identified as prognostic and diagnostic biomarkers for cancer (Fig. 3). Qian et al. [53] applied next-generation sequencing technology to identify exosomal miRNAs in the serum and urine of patients with GC and healthy individuals and found urinary exosomal hsa-miR-1246 upregulation and hsa-miR-139-5p and hsa-miR-345-5p downregulation in GC.
PROTEINS IN URINE
Urinary proteins may be used for the early diagnosis of GC. Dong et al. [54] found that the protein expression levels of endothelial lipase (EL) in the GC group were significantly lower than those in the normal groups, and EL was proposed as a promising diagnostic marker of GC, achieving an AUC of 0.967 (95% confidence interval [CI], 0.942-0.993). A study based on a computational method for the prediction of excretory proteins confirmed that urinary EL was substantially reduced in patients with GC, obtaining an AUC greater than 0.9, with true positive and false positive rates of 85% and 9.5%, respectively [55].
Metalloproteinases, a group of zinc-dependent proteinases, activate a water molecule that performs a nucleophilic attack on the scissile peptide bond [56]. Matrix metalloproteinases (MMPs) belong to the family M10 of metalloproteinases [57], which degrade various proteins in the extracellular matrix and regulate growth factors, cytokines, chemokines, and cytoskeletal proteins [58]. MMPs are involved in a wide range of biological processes such as cellular differentiation, tissue repair, morphogenesis, embryogenesis, cell mobility, angiogenesis, cell proliferation, migration, wound healing, apoptosis, and main reproductive events, such as ovulation and endometrial proliferation [59]. MMPs are recognized as promoters of tumorigenesis [60]. ADAMs (a disintegrin and metalloproteinases), a family related to MMPs, are involved in cell adhesion, cell signaling, and proteolytic processing of numerous transmembrane proteins and play important roles in tumor progression and metastasis [61]. A previous study found increased MMP-9/NGAL (neutrophil gelatinase-associated lipocalin) complex and ADAM12 in the urine of patients with GC compared to healthy control subjects, and a combination of MMP-9/NGAL complex and ADAM12 showed 77.1% sensitivity and 82.9% specificity, with an AUC of 0.825 for the diagnosis of GC [62].
Many proteomics-based biomarkers that rely on single proteins are currently being used for clinical diagnosis. However, because of the lack of specificity of single biomarkers, a step has been made toward identifying and validating panels of biomarkers rather than attempting to identify a unique ideal diagnostic candidate that might not exist [63]. Urinary proteomics used to search for early markers has gained increasing attention because the complexity of the urinary proteome is lower than that of the plasma proteome, making it easier to detect low-abundance protein changes [64]. A proteomics study screening urinary diagnostic markers of GC revealed that the urinary levels of TFF1 (trefoil factor 1), ADAM12 (a disintegrin and metalloproteinase domain-containing protein 12), PGA3 (pepsinogen 3), and BARD1 (BRCA1-associated RING domain 1) were significantly higher in the GC group than in the healthy control group. Moreover, uTFF1 and uADAM12 appeared to be significant independent proteins for GC diagnosis. In addition, these combination biomarkers displayed an important diagnostic value for GC (AUC of uTFF1 + uADAM12, 0.815; 95% CI, 0.754-0.877; AUC of uTFF1 + uADAM12 + Helicobacter pylori, 0.832; 95% CI, 0.773-0.892). These proteins display sex-specific effects: for male GC, the panel of uTFF1/uADAM12/H. pylori demonstrated good performance with an AUC of 0.858, whereas for female GC, another combination of uTFF1/uBARD1/H. pylori achieved an AUC of 0.893 [65].
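The rationale for panels over single markers can also be illustrated numerically: when two markers carry partly independent signal, even a simple combined score separates cases from controls better than either marker alone. The toy simulation below uses synthetic marker values, not data from the cited studies; the group sizes and effect sizes are invented for illustration:

```python
# Hedged sketch: why a biomarker panel can outperform a single marker.
# Two synthetic "urinary marker" values per subject; the panel score is
# an unweighted sum (a stand-in for a fitted logistic-regression score).
import random

def auc(pos, neg):
    """AUC = P(case score > control score), ties counted as 0.5."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

random.seed(0)
# Each marker is only weakly discriminative on its own (shifted Gaussians).
cases    = [(random.gauss(1.0, 1.0), random.gauss(1.0, 1.0)) for _ in range(200)]
controls = [(random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)) for _ in range(200)]

auc_marker1 = auc([a for a, _ in cases], [a for a, _ in controls])
auc_panel   = auc([a + b for a, b in cases], [a + b for a, b in controls])
print(auc_marker1, auc_panel)  # the panel AUC exceeds the single-marker AUC
```

The gain shrinks as the two markers become correlated, which is why panel studies also need to verify that the candidate markers are not redundant.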
Despite some progress, urinary proteomics research and clinical translation remain in their infancy, as some major problems have not yet been resolved. Specimen collection, processing, and fractionation schemas, as well as analytical platforms and data analysis, vary widely across studies [66]. Hence, standardization processes and applicable data normalization methods are required. Defining urinary protein levels in healthy individuals remains an important and challenging problem. Age, sex, diet, exercise, diurnal variation, and hormone status contribute to differences in the proteomics of normal urine [67]. Large-scale longitudinal studies of individuals are needed to establish a reference interval for urinary proteomics.
FUTURE PERSPECTIVE
Urine is an ideal biofluid for biomarker discovery in GC. Urinary miRNAs, proteins, and metabolites have all been reported as possible biomarkers of GC. The current large research output and financial investment in this area reflect the high expectations for the potential of urinary analysis. Nevertheless, owing to a lack of robust validation, the evidence is insufficient to support their clinical use. Most studies on urinary biomarkers for GC diagnosis have been small-scale; therefore, further research with larger sample sizes is required. Recruiting more patients, including low-prevalence populations and those with premalignant conditions such as intestinal metaplasia and atrophic gastritis, would better represent the real screening population. With rapid developments in computer technology and medicine, using artificial intelligence to combine "signals" from multiple patterns will facilitate the process of discovery and verification. In addition, combining different biomarker values, clinical evidence, and biochemical parameters will be a powerful strategy for increasing diagnostic accuracy.
Quantifying entrance skin dose for early diagnose of lung cancer
Lung cancer is a disease that can cause death in a short time, but it is almost never detected at an early stage because its symptoms, such as coughing and lack of appetite, are not specific. In patients who have a smoking habit, a tumor or lung cancer is more readily suspected. To detect a tumor or lung cancer, a chest X-ray examination is generally performed as an early diagnosis. As is well known, exposing the body to X-ray radiation can have negative effects, so the radiation dose must be minimized. In this study, entrance skin dose (ESD) was measured during chest X-ray examination in patients suspected of having a tumor or lung cancer. The results showed that patients who had symptoms similar to those of a tumor or lung cancer received a higher ESD than those who did not. Although this radiation dose is higher than the dose in routine chest X-ray examination, the resulting image only shows radiological abnormalities and does not accurately indicate the presence of a tumor or cancer.
Introduction
The chest is a part of the human body that is often examined to detect disturbances or diseases. The flat shape of the chest makes it easy to place the tool or detector. Furthermore, the chest cavity contains only two organs, the heart and the lungs, in contrast to the more complex abdomen.
Of the two organs in the chest cavity, the lungs receive more attention than the heart, because the lungs are part of the respiratory system, which is directly connected to the outside environment. The lungs receive oxygen from outside the body through inhalation, and the inhaled air can carry bacteria and viruses that disrupt the body. That is why disorders of the respiratory system are often associated with disease; for example, coughing and difficulty breathing are often associated with the presence of Tuberculosis (TB) bacteria.
One way to find out whether there is a problem with the lungs is a chest X-ray examination. Abnormal radiological photographs from chest X-ray examinations can reveal abscesses in the lungs; fluid (blood or other) in the lungs, indicating pleural effusion or pulmonary edema; an enlarged heart, or cardiomegaly [1]; cavities in the lungs, indicating tuberculosis [2,3]; widening of the aorta, or aortic aneurysm; fractures; and even lung tumors [4,5,6]. Chest X-ray examination is an easy examination and, as described above, can detect many disturbances and infections. It is therefore the preliminary and routine examination performed both on healthy patients (for medical check-ups) and on sick patients who suffer from cough (normal or coughing up blood), chest pain, or difficulty breathing, because it is simple and inexpensive [7,8]. A lung tumor, in particular, appears in the radiological photograph as an irregular, abnormal shadow in the lung area. The image formed is only a shadow because the ability of a material to absorb X-ray radiation depends on its density: dense bone appears white, while the less dense lungs appear black (Figure 1a), and blood or other fluid in the lungs due to infection appears gray. Therefore, in chest X-ray photographs the doctor looks only for gray, shadowy masses. Because lung tumors produce only obscure or unclear images (Figure 1b), they can be detected on X-ray examination only when the tumor diameter is at least 2.5 cm, i.e., at stage T1b [9,10]. That is why chest X-ray examination cannot detect lung tumors at an early stage. Moreover, a tumor or lung cancer cannot be confirmed with a chest X-ray examination alone, because the radiological photograph only shows the presence of a mass.
So, to determine whether the mass is a tumor or cancer, further testing, such as a biopsy, is needed. In this study, entrance skin dose (ESD) measurements were performed on patients who had chest X-ray examinations with suspected lung tumors as an initial diagnosis. These patients experienced chest pain, coughing, and difficulty breathing. As a comparison, we also measured the ESD for healthy patients who underwent medical check-ups.
Material and Method
The research was conducted in the Radiology Department of the Teaching Hospital of Hasanuddin University, Makassar, Indonesia. The samples were patients who experienced chest pain, coughing, and difficulty breathing, and healthy patients who underwent medical check-ups. Supporting data, such as body weight, height, and age, were taken from the patients' medical records. Radiological photographs were taken using a Siemens digital radiography unit.
Entrance skin dose is calculated as ESD = IN AK × BSF [11,12], where the focus-to-skin distance (FSD) is 130 cm for the posterior-to-anterior (PA) chest X-ray position, the backscattering factor (BSF) equals 1.4 [12,13], and the incident air kerma (IN AK) is calculated from the specific yield of the X-ray tube, scaled by the tube loading (mAs) and corrected from the 100 cm reference distance to the FSD by the inverse-square law, with the yield determined from the constants a and b taken from the graph of the specific yield of the X-ray tube at a distance of 100 cm (Figure 2).
Figure 2. Graph of specific yield of the X-ray tube at a distance of 100 cm.
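A worked numerical sketch of this ESD calculation may help: ESD is the incident air kerma times the backscatter factor, with the tube output at the 100 cm reference distance corrected to the focus-skin distance by the inverse-square law. The tube yield and mAs values below are hypothetical, and the yield Y is supplied directly rather than reconstructed from the constants a and b of Figure 2.

```python
# Minimal sketch of the standard ESD calculation described in the text.
# FSD (130 cm) and BSF (1.4) follow the values given for the PA chest
# position; the exposure numbers are hypothetical.

def entrance_skin_dose(yield_mgy_per_mas_100cm, mas, fsd_cm=130.0, bsf=1.4):
    # Incident air kerma at the skin: inverse-square scaling of the tube
    # output from the 100 cm reference distance to the focus-skin distance.
    inak = yield_mgy_per_mas_100cm * mas * (100.0 / fsd_cm) ** 2
    # ESD adds the radiation scattered back from the patient (BSF).
    return inak * bsf

# Hypothetical chest PA exposure: tube yield 0.05 mGy/mAs at 100 cm, 4 mAs.
print(round(entrance_skin_dose(0.05, 4), 4))  # → 0.1657 mGy
```

With these assumed inputs the result (about 0.17 mGy) falls in the same range as the measured ESD values reported below, and well under the 1.500 mGy limit cited in the text.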
Result and Discussion
Medical record data show that the patients suspected of having lung tumors were mostly of productive age, with a body mass index in the underweight category. This is in accordance with the symptoms of a tumor or lung cancer: chest pain, chronic cough, difficulty breathing, smoking more than 10 packs a day, and lack of appetite resulting in weight loss [14]. In contrast, the patients with a body mass index in the normal category were older; they had no specific symptoms, suffering only from cough and chest pain (Table 1). Figure 3 shows that the lowest ESD for a patient in the underweight category was 0.092 mGy, the highest was 0.389 mGy, and the average was 0.202 mGy. For patients in the normal body mass index category, the lowest ESD was 0.129 mGy and the highest was 0.285 mGy, with an average of 0.187 mGy. The ESD for both groups of patients was still lower than the ESD limit set by the Indonesia Nuclear Energy Regulator Agency (INERA) for the posterior-to-anterior (PA) position, which is 1.500 mGy [15]. These data show that the radiation doses received by underweight patients were higher than those received by normal-weight patients. The reason for the difference in ESD lies in the purpose of the chest X-ray examination. Patients with a normal body mass index undergo chest X-ray examination just to check the condition of their lungs, with nothing specific to look for [5], so low radiation doses are used. On the other hand, underweight patients usually have a specific history, such as smoking and chest pain, so the purpose of the examination is to detect specific findings such as a mass, cavity, or fluid. A higher radiation dose is therefore used in order to obtain good image quality.
Conclusions
Chest X-ray examination is a routine examination used to support the diagnosis of disorders or diseases in the chest, especially of the lungs and heart. In some cases, chest X-ray examination can detect the presence of a lung tumor or cancer, although it is difficult to determine its size, because lung tumors or cancers appear in radiological photographs only as shadowy masses. So, for lung cancer, where the cancer cells develop very quickly, chest X-ray examination is not a good tool, even when high radiation doses are used to obtain a clearer image. This can be seen
Dexmedetomidine Protects against Ischemia and Reperfusion-Induced Kidney Injury in Rats
Acute kidney injury (AKI), a clinical syndrome, is a sudden onset of kidney failure that severely affects the kidney tubules. One potential treatment is dexmedetomidine (DEX), a highly selective α2-adrenoreceptor agonist that is used as an anesthetic adjuvant. It also has anti-inflammatory, neuroprotective, and sympatholytic qualities. The aim of this study was to establish whether DEX also offers protection against ischemia and reperfusion- (I/R-) induced AKI in rats. An intraperitoneal injection of DEX (25 μg/kg) was administered 30 min prior to the induction of I/R. The results indicate that in the I/R rats, DEX played a protective role by reducing the damage to the tubules and maintaining renal function. Furthermore, in response to I/R, the DEX treatment reduced the mRNA expression of TNF-α, IL-1β, IL-6, and MCP-1 in the kidney tissues and the serum levels of TNF-α, IL-1β, IL-6, and MCP-1. DEX also reduced the levels of oxidative stress and apoptosis in the tubular cells. These results indicate that in response to I/R kidney injury, DEX plays a protective role by inhibiting inflammation and tubular cell apoptosis, reducing the production of reactive oxygen species, and promoting renal function.
Introduction
Acute kidney injury (AKI) can be a consequence of major surgery. It significantly increases the risks for morbidity and mortality [1]. AKI can result in arrhythmia, cerebral edema, hyperkalemia, renal insufficiency, and water intoxication. All of these conditions pose a risk of death [2]. Few studies have reported on acute stress-induced AKI; thus, research into the underlying mechanisms and effective treatments is needed. AKI can be initiated by ischemia and reperfusion (I/R). During I/R, the renal tubular epithelial cells experience disturbed cell polarity and cytoskeletal integrity, disrupted cell-cell and cell-matrix interactions, mitochondrial damage, and increased reactive oxygen species (ROS) synthesis [3][4][5]. Oxidative stress resulting from acute restraint stress has been determined to lead to hippocampal and hepatic damage [6]. Because oxidative stress can promote apoptosis, it also has the capacity to cause AKI. Indeed, a number of pathological kidney injuries have been attributed to apoptosis [6]. This finding has stimulated inquiries into the role of oxidative stress and apoptosis in the pathological process of I/R-induced kidney injury.
Dexmedetomidine (DEX) possesses several properties that are potentially beneficial for treating AKI. It is a potent and highly selective α 2 adrenergic agonist with analgesic, sedative, and antisympatholytic characteristics [7,8]. The distal and proximal tubules of the kidney, as well as the peritubular vasculature, are rich in α 2 -adrenoceptors [7,8]. Using animal models, studies have explored the effects of DEX following I/R injury. As well as being beneficial for tubular architecture and function and tubular epithelial cell apoptosis, DEX reduces the synthesis of ROS and inhibits the secretion of proinflammatory cytokines [9]. Despite DEX's capacity to alleviate renal I/R injury following surgery, the mechanism by which it acts in AKI has yet to be elucidated.
The purpose of this study was to explore DEX's protective effects against renal I/R injury in rats. In addition, it is aimed at identifying the mechanisms by which DEX modulates apoptosis, inflammatory cytokines, and ROS.
Animals and Experimental Design.

The study observed the China Medical University's guidelines for the use of laboratory animals. Approval was sought from and granted by the Ethics Committee on Animal Experiments of The First Hospital of China Medical University.
Eighteen healthy 7-8-week-old male Sprague-Dawley rats weighing 220-270 g were used. They were accommodated in a laboratory (22°C ± 1°C, 12-12 h light-dark cycle) in pathogen-free housing for one week prior to the experiment. Food and water were provided ad libitum until 12 h before the experiment. The food was then withdrawn; however, the water was available.
The rats were sedated and analgesized with 10% chloral hydrate (0.3 ml/100 g) and 10 μg/kg sufentanil citrate (Renfu Pharmaceutical, China, Lot 81A06031) injected intraperitoneally (i.p.). To improve perioperative analgesia, 0.2% ropivacaine (AstraZeneca AB, Sweden) was used for local infiltration before surgical incision and after suture. The rats' temperatures were measured, and they were sustained at 37°C (±1°C) with a heat pad. The heart rates were monitored with subcutaneous electrodes. Arterial pressure was invasively monitored by a 24 G trocar placed in the left femoral artery. Rats with heart rates slower than 200 beats per min (bpm) for more than 5 min or mean arterial pressures (MAP) less than 55 mmHg were excluded. To induce renal I/R injury, the right renal pedicle was clamped for 45 min, and the left kidney was surgically removed.
After the removal of the arterial clip, the color of the remaining kidney was observed. A change from dark purple to reddish brown within 5 min signaled the successful restoration of blood perfusion. Each rat was administered a 0.5 ml saline i.p. injection every 2 h until it awoke or the specimen was collected. The animals were euthanized at 24 h after the I/R, and abdominal aortic blood samples were collected immediately. To remove the cellular elements, the blood samples were left on ice for 2 h and then centrifuged for 15 min (3,000 g, 4°C). The serum was stored at −80°C. The kidneys were harvested following transcardial perfusion with ice-cold heparinized saline. Part of the renal tissue was fixed in paraformaldehyde and embedded in paraffin wax; 4 μm sections were cut and stained with hematoxylin and eosin. In addition, a terminal deoxynucleotidyl transferase deoxyuridine triphosphate nick-end labeling (TUNEL) assay was performed. The remainder of the renal tissue was maintained at −80°C until further analysis.
The rats were randomly assigned to one of three groups (n = 6 per group):

Group 1. Sham control (Sham): The rats received 0.5 ml saline i.p. injections prior to sham surgery. The renal vessels were not clamped.

Group 2. Renal I/R group (I/R): The surgery was performed as previously described.

Group 3. DEX+I/R group (DEX+I/R): At 30 min prior to the initiation of renal ischemia, 25 μg/kg of DEX was administered by i.p. injection.
Renal Function.
Serum creatinine (CREA) and serum urea nitrogen were quantified with a UniCel DxC800 Synchron device (Beckman, USA). Standard enzyme immunoassay kits (BioPorto Diagnostics, Gentofte, Denmark) were used to establish the serum concentrations of the neutrophil gelatinase-associated lipocalin (NGAL) and cystatin C.
Reverse Transcription Polymerase Chain Reaction.
Reverse transcription polymerase chain reaction (RT-PCR) was used to determine the expression levels of TNF-α, IL-1β, IL-6, and MCP-1 in the kidneys. The total cellular RNA was extracted with TRIzol (Invitrogen), and the cDNA was synthesized with a Moloney murine leukemia virus (M-MLV) reverse transcription kit (Promega). Quantitative real-time PCR was performed using an IQ SYBR Green Supermix reagent (Bio-Rad, USA) with a Bio-Rad real-time PCR machine. The manufacturer's instructions were followed. The data were analyzed using the 2^−ΔΔCt method. Glyceraldehyde 3-phosphate dehydrogenase (GAPDH) was used as a housekeeping gene against which the expression levels of the target genes were normalized. Table 1 presents the primer sequences.
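The 2^−ΔΔCt relative-expression calculation used for the RT-PCR data can be written out directly. The sketch below normalizes a target-gene Ct value to GAPDH and then to the sham control; the Ct values themselves are hypothetical, not the study's measurements.

```python
# Minimal sketch of the 2^(-ΔΔCt) method, with GAPDH as housekeeping gene.
# Ct values below are hypothetical.

def fold_change(ct_target, ct_gapdh, ct_target_ctrl, ct_gapdh_ctrl):
    d_ct_sample = ct_target - ct_gapdh            # normalise to GAPDH
    d_ct_control = ct_target_ctrl - ct_gapdh_ctrl
    dd_ct = d_ct_sample - d_ct_control            # normalise to sham control
    return 2 ** (-dd_ct)                          # fold change vs control

# Hypothetical TNF-alpha Ct values: I/R kidney vs sham kidney.
print(fold_change(24.0, 18.0, 27.0, 18.0))  # → 8.0 (3 cycles earlier = 8-fold up)
```

A lower Ct means the transcript crosses the detection threshold earlier, so each cycle of difference corresponds to a two-fold change in expression.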
Kidney Oxidative Stress Analysis.
A homogenate of the kidney tissue was made to measure the concentrations of glutathione (GSH) and malondialdehyde (MDA). The homogenate was also used to determine superoxide dismutase (SOD) antioxidant enzyme activity. The manufacturer's instructions for the corresponding assay kit (Nanjing Jiancheng Bioengineering Institute, Nanjing, China) were followed.

Mediators of Inflammation

Serum Inflammatory Cytokine Analysis.

The serum inflammatory cytokines were measured using a commercial enzyme-linked immunosorbent assay (ELISA) kit from Nanjing Jiancheng Bioinstitute (Nanjing, Jiangsu, China) and a microplate reader at 450 nm (Thermo Scientific, Waltham, MA). The manufacturer's guidelines were followed.
Terminal Deoxynucleotidyl Transferase Deoxyuridine Triphosphate Nick-End Labeling (TUNEL) Assay.

To detect the apoptotic tubular epithelial cells, an in situ TUNEL assay was used in accordance with the manufacturer's instructions (In Situ Cell Death Detection Kit, POD, Roche, Switzerland). The paraffin-embedded sections were dewaxed with xylene and rehydrated through a graded series of ethanol to double-distilled water. The sections were treated with proteinase K (20 μg/ml) at room temperature for 15 min before being rinsed twice with phosphate-buffered saline (PBS). After the slide was dried, 50 μl of TUNEL reaction mixture was applied. The slides were incubated at 37°C in a humidified chamber for 60 min and then rinsed three times with PBS. Converter-POD (50 μl) was added to the specimen, and a coverslip was applied before incubation at 37°C in a humidified chamber for 30 min. The samples were again rinsed three times with PBS and then stained with a diaminobenzidine (DAB) substrate (Zhongshan, Beijing, China). After 10 min, the DAB was removed by rinsing three times with PBS. Hematoxylin or methyl green stains were applied and then rinsed with tap water almost immediately. ImageJ software (version 1.38; National Institutes of Health, Rockville, MD) was used to count the TUNEL-positive cells in an objective grid in 10 randomly selected sections. A 40x objective lens was used for the counts, which were performed by a researcher who was blinded to the experiment aims. The average of these 10 counts was analyzed.
Results
To confirm the effects of DEX on I/R-induced renal injury, 25 μg/kg of DEX was administered by i.p. injection 30 min before I/R. Figure 1 presents the histological images through which kidney injury was determined. The evidence provided by the hematoxylin and eosin (H&E) stains indicates that the histological injuries to the I/R rats' kidneys included focal renal hemorrhage, focal tubular necrosis, neutrophil infiltration, and vacuolar degeneration of the renal tubular epithelial cells (p < 0.05). The histological damage in the DEX-treated rats was less severe (p < 0.05) than that in the I/R rats. As is shown in Figure 2, the marked increase in serum CREA (p < 0.05) and urea nitrogen (p < 0.05) was indicative of I/R-induced renal dysfunction. However, these serum markers were significantly lower (p < 0.05) in the DEX-treated rats than in the I/R rats. This suggests that the renal function in the DEX-treated rats was maintained to some extent after I/R induction. Furthermore, the serum NGAL and cystatin C levels in the I/R-induced rats were significantly higher (p < 0.05) than those in the sham group. However, they were significantly lower (p < 0.05) in the DEX-treated rats than in the I/R-induced rats (Figure 2).
The effects of DEX on oxidative stress were also evaluated. As is illustrated in Figure 3, the MDA concentrations in the I/R-induced rats were significantly higher (p < 0.05) than those in the sham group. However, in the I/R rats treated with DEX, the levels of oxidative stress were lower (p < 0.05) than those in the I/R rats. The GSH and SOD activity levels in the I/R group were significantly lower (p < 0.05) than those in the sham controls. Indeed, the GSH concentrations and SOD activity levels in the I/R group treated with DEX were significantly higher (p < 0.05) than those in the I/R group.
The effects of DEX on the expression of proinflammatory cytokines were evaluated. The RT-PCR results for the kidney samples indicated that the expression levels of IL-1β, IL-6, MCP-1, and TNF-α mRNA were significantly higher (p < 0.05) in the I/R-induced rats than in the controls (Figure 4). However, DEX exerted a modulatory effect. The mRNA expression of IL-1β, IL-6, MCP-1, and TNF-α (p < 0.05) in the kidney tissues of the rats that received DEX treatment was lower than that in the I/R-induced rats. Similarly, the serum levels of IL-1β, IL-6, MCP-1, and TNF-α in the DEX-treated I/R rats were considerably lower than those in the untreated I/R rats (p < 0.05). This suggests that in the case of I/R injury, DEX exerts an inhibitory effect on the expression of inflammatory cytokines.
The available evidence suggests that the pathogenesis of I/R injury is aggravated by the apoptosis of the tubular cells. Thus, the effect of DEX on tubular epithelial cell apoptosis in the I/R-induced rats was explored. The results of the TUNEL assay revealed that I/R injury was associated with a significant increase (p < 0.05) in the level of apoptosis in these epithelial cells. In contrast, there were fewer apoptotic cells in the kidney samples from the DEX-treated rats (p < 0.05) than in those from the I/R group (Figure 5).
Discussion
Renal I/R injury, a sequela of cardiovascular and transplant surgery or of shock, causes AKI. This results in longer hospital stays and even death [11]. The present study has demonstrated that pretreatment with DEX helps to maintain renal morphology and function in cases of I/R injury. When kidney I/R occurred, the levels of inflammatory mediators circulating in the blood were reduced, and the kidneys experienced less oxidative stress as a consequence of the administration of DEX. These findings suggest that DEX might be an appropriate pharmaceutical intervention to reduce acute injury-induced kidney damage. Among the qualities of DEX are its anesthetic-sparing effect and facilitation of hemodynamic stability. Consequently, DEX is frequently administered as a sedative in perioperative and intensive care medicine. It is also applied as an anesthetic adjuvant [12]. DEX has wide clinical applications because its analgesic, hemodynamic, sedative, and sympatholytic qualities make it particularly useful for perioperative patients.
Other features are its anti-inflammatory, antioxidant, and antiapoptotic effects on the brain, heart, lungs, and kidneys [13,14]. The results of several animal model studies indicate that DEX also offers protection against I/R injury to the kidney [12,15]. Other studies have highlighted the beneficial role of DEX in preventing renal injury in patients undergoing cardiovascular and other major surgical procedures [16]. These findings supplement those of earlier studies that reported significant increases in the CREA and BUN levels following I/R injury, suggesting impaired renal function. An examination of the histopathological evidence indicates that I/R causes kidney injury; however, DEX pretreatment can mitigate it.
A surfeit of oxygen free radicals is produced by organisms in a state of stress. This disrupts the delicate balance between the oxidation and antioxidant systems [17]. The results of this study indicate that the MDA levels, an important biomarker of oxidative damage, are reduced and the GSH and SOD levels are increased by DEX. MDA is a useful indirect indicator of the extent of free radical-induced damage [18]. In contrast, GSH and SOD are important antioxidants [19]. The results suggest that I/R diminished the antioxidant defense system by increasing the MDA levels and reducing the GSH levels and SOD enzyme activity. This suggests that I/R could induce oxidative stress, which might be significant in the pathogenesis of AKI. The DEX treatment helped to protect against oxidative stress, suggesting that it has antioxidative effects [20]. Therefore, DEX could provide protection against I/R-induced AKI.
Another manifestation of I/R injury is a heightened inflammatory response [21,22]. This response aggravates the conditions because the macrophages, T cells, and other inflammatory mediators are recruited to the tissues injured by I/R [21,22]. As the results of the tissue samples obtained from the kidneys of I/R-injured rats in this study have demonstrated, the expression of IL-1β, IL-6, MCP-1, and TNF-α mRNA was elevated. In contrast, the renal tissue samples from the rats that had received DEX following I/R exhibited lower IL-1β, IL-6, MCP-1, and TNF-α mRNA expression. Relationships among renal morphology, the serum levels of inflammatory mediators, and the state of cell apoptosis were found. The pathogenesis and progression of I/R injury are influenced by the proinflammatory cytokines, including IL-1β, IL-6, MCP-1, and TNF-α [23,24]. DEX not only acts locally on the kidney's α 2 -adrenoceptors but also influences the anti-inflammatory reactions, thereby mitigating the damaging effects of I/R on the kidney.
Apoptosis, also known as programmed cell death, is a critical mechanism for maintaining cell stability; however, too much can damage the body [25]. Usually, apoptosis, an early event in kidney I/R injury, interacts with the subsequent inflammation and kidney injury [26]. In the present study, a significantly higher number of TUNEL-positive cells were retrieved from the rats with the I/R-injured kidneys than from those that received the DEX intervention. Studies have found that TNF-α causes injury to the kidneys [27,28]. DEX might protect against AKI following I/R by stimulating an antiapoptotic effect.
The data obtained from this study demonstrate that DEX exerts a protective effect against I/R injury in rats by inhibiting tubular cell apoptosis and inflammation, lowering ROS production, and promoting renal function. These results suggest that DEX may play a role in the treatment of I/R-initiated AKI. However, DEX is not without side effects; through parasympathetic activation, it can contribute to bradycardia. How DEX can be best exploited to protect vital organs has yet to be established. Further research into its applications is justified.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.

Figure 5: Apoptosis was detected in the rat kidneys via terminal deoxynucleotidyl transferase deoxyuridine triphosphate nick-end labeling (TUNEL) staining (40x) in rats subjected to the sham procedure, untreated ischemia-reperfusion (I/R), or I/R with dexmedetomidine (DEX) treatment. Representative microphotographs were taken from (a) the sham control group, (b) the I/R group, and (c) the DEX (25 μg/kg)+I/R group; (d) the apoptosis rate following I/R in the rats. The data are the mean ± SD (n = 6). *p < 0.05.
A genome-wide screen for modifiers of transgene variegation identifies genes with critical roles in development
An extended ENU screen for modifiers of transgene variegation identified four new modifiers, MommeD7-D10.
Background
Random mutagenesis screens for modifiers of position effect variegation were initiated in Drosophila in the 1960s [1,2]. The screens used a fly strain, called w^v, that displays variegated expression of the white (w) locus resulting in red and white patches in the eye. Variegation refers to the 'salt and pepper' expression of some genes due to the stochastic establishment of an epigenetic state at their promoters. The best characterized example of variegation in mammals is the coat color of mice carrying the A^vy allele [3,4]. In this case there is a correlation between DNA methylation at the promoter and silencing of the gene [5,6]. Alleles of this type provide us with an opportunity to study epigenetic gene silencing at a molecular level in a whole organism.
The extent of the variegation at the w^v locus, that is, the ratio of red to white patches in the eye, was known to be sensitive to strain background, suggesting the existence of genetic modifiers. Offspring of mutagenized flies were screened for changes in the degree of variegation. These screens have been continued to saturation and the results suggest that there are between 100 and 150 such genes [7,8]. Approximately one-third of these have been identified and, as expected, most turn out to play critical roles in epigenetic gene silencing [1,9]. These include genes encoding proteins involved in heterochromatin formation, for example, HP1 and histone methyltransferases [8].
Recently, we established a similar screen in the mouse using a transgenic line that expresses green fluorescent protein (GFP) in a variegated manner in erythrocytes [10]. We anticipated that the screen would produce mutant lines that would help clarify the role of epigenetic gene silencing in mammals. Offspring of N-ethyl-N-nitrosourea (ENU)-treated males were screened for changes in the percentage of erythrocytes expressing GFP (measured by flow cytometry). In those individuals in which the percentage of expressing cells was higher or lower than the wild-type mean by more than two standard deviations, heritability was tested. A preliminary description of the first six heritable mutations, which we refer to as Modifiers of murine metastable epialleles or Mommes, following the screening of 608 G1 offspring, has been published [10]. We reported that all six were dosage-effect genes and five of the six were homozygous lethal, with loss of homozygotes apparent at weaning, but no knowledge of when death occurred. At the time of publication in 2005, none of the underlying genes had been identified. Since then we have identified the underlying mutation in three cases, MommeD1, MommeD2 and MommeD4. MommeD1 is a mutation in SMC hinge domain containing 1 (SmcHD1), encoding a previously uncharacterized protein containing a domain normally found in SMC proteins, and we have gone on to show that this protein has a critical role in X inactivation [11]. MommeD2 is a mutation in DNA methyltransferase 1, Dnmt1, encoding a DNA methyltransferase, and MommeD4 is a mutation in Smarca5, encoding Snf2h, a chromatin-remodeling enzyme [12]. The finding of Dnmt1 and Smarca5, both known to be involved in epigenetic reprogramming, validates the screen. Here we report an update of the screen, adding four new MommeDs, identifying the underlying point mutation in two more cases, and further characterizing the phenotypes associated with hetero- and homozygosity.
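The two-standard-deviation screening criterion described above can be sketched as a simple classifier. The wild-type percentages below are invented for illustration and are not the published screen data.

```python
# Sketch of the screening criterion: a G1 offspring is flagged as a candidate
# modifier when its percentage of GFP-expressing erythrocytes deviates from
# the wild-type mean by more than two standard deviations.
from statistics import mean, stdev

wild_type_pct = [53.0, 56.0, 55.0, 54.0, 57.0, 55.5, 54.5, 55.0]  # hypothetical
mu, sd = mean(wild_type_pct), stdev(wild_type_pct)

def classify(pct_expressing):
    if pct_expressing > mu + 2 * sd:
        return "suppressor of variegation (test heritability)"
    if pct_expressing < mu - 2 * sd:
        return "enhancer of variegation (test heritability)"
    return "within wild-type range"

for pct in (70.0, 55.0, 30.0):
    print(pct, "->", classify(pct))
```

Individuals above the upper threshold would correspond to suppressor candidates such as MommeD7, and those below the lower threshold to enhancer candidates such as MommeD8-D10; both are then bred to test heritability.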
Integration site of the GFP transgene
We have previously reported that the GFP transgene used in this screen has integrated as an approximately 11 copy array on chromosome 1 [10]. We were keen to further characterize the integration site. PCR using primers specific to the 3' end of the transgene construct in combination with degenerate random tagging primers (genome walking) revealed that the transgene had integrated into chromosome 1 at 47,747,486 bp (UCSC web browser, July 2007 assembly). This site of integration is neither centromeric nor telomeric, and so presumably the silencing is related to the multicopy nature of the transgene array [13,14]. The integration site does not disrupt any annotated genes, and is approximately 1 Mb from the closest annotated transcript.
The identification of MommeD7-D10
We have now screened an additional 400 G1 offspring and recovered four more Mommes, MommeD7-D10 (Figure 1, Table 1). The fluorescence activated cell sorting (FACS)-based screening is carried out on a drop of blood taken at weaning, using a gate set to exclude 99.9% of autofluorescing cells. Under these conditions, wild-type mice homozygous for the transgene express GFP in 55% of erythrocytes. MommeD7 is a suppressor of variegation, that is, the percentage of cells expressing the transgene was significantly higher than it was in wild-type individuals (Table 1). MommeD8, D9 and D10 are enhancers of variegation, that is, the percentages of expressing cells were significantly lower than they were in wild-type individuals (Table 1). The mean fluorescence level in expressing cells also changed. We have reported previously that as the percentage of expressing cells increases, the mean fluorescence of the expressing cells increases [10]. We presume that since the mice are homozygous for the GFP transgene, this is mainly due to an increase in the proportion of expressing cells with two active GFP alleles. However, in the case of MommeD7 the level was more than double that seen in the wild-type littermates. We hypothesized that this was likely to be the consequence of an increase in the percentage of reticulocytes in the peripheral blood of this mutant, as mature red cells, with no ability to produce new proteins, would have, on average, less GFP than reticulocytes (see below). In the cases of MommeD8, D9 and D10 the mean fluorescence levels were slightly lower than that seen in the wild-type littermates, consistent with a presumed reduction in the proportion of cells with two active GFP alleles.
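The gating approach described above, setting a GFP+ threshold that excludes 99.9% of wild-type autofluorescing cells and scoring the fraction of sample cells above it, can be sketched as follows. The distributions and cell numbers are simulated purely for illustration, not real cytometry data.

```python
import numpy as np

def gfp_positive_fraction(wild_type_fluor, sample_fluor):
    """Set a GFP+ gate that excludes 99.9% of wild-type autofluorescing
    cells, then score the fraction of sample cells above that gate."""
    gate = np.percentile(wild_type_fluor, 99.9)
    return float(np.mean(sample_fluor > gate))

# Simulated log-fluorescence values (illustrative only, not real data):
rng = np.random.default_rng(0)
wt_background = rng.normal(1.0, 0.3, 100_000)   # autofluorescence only
expressing = rng.normal(3.0, 0.4, 55_000)       # GFP-on erythrocytes
silenced = rng.normal(1.0, 0.3, 45_000)         # GFP-off (silenced)
sample = np.concatenate([expressing, silenced])

print(round(gfp_positive_fraction(wt_background, sample), 2))  # ~0.55
```

A percentile-based gate of this kind mirrors the screen's design: by construction at most 0.1% of non-expressing cells score as GFP+, so shifts in the expressing fraction reflect changes in silencing rather than gate placement.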
For each MommeD, the heritability of the mutation has been tested and confirmed over at least 5 generations by using at least 30 litters. During the breeding of each mutant line, the expression profiles remained constant, consistent with the idea that we were following a single mutation in each case. The frequency with which we found these mutations, 1 in 100 G1 offspring, was similar to our previous results [10].
Homozygous lethality
Following heterozygous intercrosses, the proportion of expression types observed in the offspring at weaning was consistent with a semidominant homozygous lethal mutation in the cases of MommeD7 and MommeD9, since only two GFP expression profiles were observed (Figure 1, Tables 1 and 2) and there was a significant litter size reduction in both cases (Figure 2). In the cases of MommeD8 and MommeD10, three expression profiles were observed, suggesting viability of some homozygotes (Figure 1, Tables 1 and 2) and in the case of MommeD10 this was later confirmed by genotyping for the point mutation. In both cases, fewer individuals with the severe phenotypes were observed than predicted by Mendelian inheritance, suggesting reduced viability of the homozygotes (Table 2). This conclusion is supported by significant litter size reductions in both cases (Figure 2). There is also a suggestion of some heterozygous death in the case of MommeD8 and, to a lesser extent, MommeD10 but this is not statistically significant.
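Whether homozygotes are underrepresented at weaning is a goodness-of-fit question against the 1:2:1 Mendelian expectation from a heterozygous intercross. A minimal sketch with hypothetical counts (not the actual data from Table 2):

```python
def mendelian_chi2(observed, expected_ratio=(1, 2, 1)):
    """Chi-square goodness-of-fit statistic for observed genotype counts
    (WT, heterozygous, homozygous) from a heterozygous intercross."""
    total = sum(observed)
    ratio_total = sum(expected_ratio)
    expected = [total * r / ratio_total for r in expected_ratio]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical weaning counts with a deficit of homozygotes:
chi2 = mendelian_chi2((30, 58, 8))
print(round(chi2, 2), chi2 > 5.99)  # 5.99 = critical value for df=2, alpha=0.05
```

For lines where homozygotes are never seen, the same test against a 1:2 ratio of wild-type to heterozygous survivors applies.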
Homozygous lethality occurs at different stages of development
Litter size reductions following heterozygous intercrosses have already been reported for MommeD1-D6 at weaning [10], but the timing of the losses has only been reported for MommeD1, D2 and D4. MommeD1 is homozygous lethal in females only, with death occurring around mid-gestation [10,11]. MommeD2 and MommeD4 are homozygous lethal at 8.5 days post-coitus (dpc) and 17.5 dpc, respectively [10,12]. Here we describe the timing of the losses for MommeD3, D5, D6, D7, D8, D9 and D10.
Following intercrosses between MommeD3 -/+ mice (genotypes determined by GFP fluorescence and progeny testing), dissections at 14.5 dpc suggested that death of homozygotes had already occurred (Table 3). This was confirmed following an FVB/C57 F1 MommeD3 -/+ intercross, where embryos could be genotyped using microsatellite markers across the linked interval (Table 3). These data suggest MommeD3 -/- mice were dying at or before 14.5 dpc. Similar results were obtained for MommeD5 at 14.5 dpc (Table 3). Once the MommeD5 point mutation had been found (see below), these crosses were repeated and dissections were performed at 10.5 dpc. Again, a significantly higher than expected proportion of developmentally delayed embryos was detected (Table 3). These embryos were genotyped and found to be MommeD5 -/- in all cases, indicating developmental arrest at around 8-9 dpc.

Figure 1 GFP expression profiles in MommeD7-D10. Erythrocytes from 3-week-old mice were analyzed by flow cytometry. In each case, the expression profiles from one litter of a heterozygous intercross are displayed. The phenotypically wild-type mice are shown in black, heterozygotes in dark grey, and homozygotes (MommeD8 and D10 only) in light grey. The x-axis represents the erythrocyte fluorescence on a logarithmic scale, and the y-axis is the number of cells detected at each fluorescence level. For quantitative and statistical significance, see Table 1. The percentage of expressing cells was determined by using a GFP+ gate, which was set to exclude 99.9% of wild-type (WT) cells. Data were collected from at least four litters in each case. The data for the MommeD9 colony were collected using a different laser. Each mutant line has a significantly different expression profile to that of wild-type littermates, reproducible over many generations. *p < 0.0001.
Following MommeD7 -/+ intercrosses (genotypes determined by GFP fluorescence and progeny testing), a small but significant increase in abnormal embryos was detected at 14.5 dpc (Table 3). This increase is not enough to account for all expected MommeD7 -/- mice. At 17.5 dpc, approximately one-quarter of the embryos were pale (Table 3, Figure 3a), suggesting a red cell defect in the homozygotes. Homozygous MommeD7 mutants were never seen at weaning (Table 2), and preliminary observations suggest that they die in the first few days after birth. Further analysis of adult heterozygous individuals revealed severe splenomegaly (Figure 3b) and a marked increase in reticulocytes in peripheral blood (Figure 3c).
We hypothesized that this increase in reticulocytes was responsible for the larger than expected increase in the average fluorescence level of the GFP transgene in expressing cells observed in this line (Table 1). We performed FACS analysis on whole blood after staining for reticulocytes with propidium iodide. As expected, MommeD7 -/+ mice had a threefold increase in the percentage of reticulocytes compared to MommeD7 +/+ mice (Figure 3d), and the percentage of GFP fluorescence in both MommeD7 -/+ and MommeD7 +/+ mice was higher in reticulocytes than in mature red cells (Figure 3e). Although this is only significant for MommeD7 -/+ (p < 0.005), the trend is there for MommeD7 +/+ mice (p = 0.07). This is consistent with the idea that a change in the erythroid cell populations contributes to the dramatic increase in the average fluorescence level of the GFP transgene in MommeD7 -/+ mice.

Figure 2 Litter size at weaning following heterozygous intercrosses. Litter size at weaning in all of MommeD7-D10 is significantly lower than that found in a wild-type (WT) cross. n represents the number of litters. *p < 0.05; **p < 0.001.
Some MommeD8 -/- mice (classified by their GFP expression profile and progeny testing) were viable at weaning but they were rare (Figure 1, Table 2). Following MommeD8 -/+ intercrosses, dissections at 14.5 dpc showed no increase in the number of abnormal or resorbed embryos (Table 3). Litter size at birth was not significantly different from that seen in wild-type litters (data not shown), suggesting that the death of most MommeD8 -/- individuals occurred after birth and before weaning. The only obvious phenotypic abnormality seen in MommeD8 -/- mice that survived to weaning was reduced size. MommeD8 homozygotes were significantly smaller (6.60 g ± 0.25 standard error of the mean (SEM)) than their wild-type (8.54 g ± 0.33 SEM, p < 0.001) and heterozygous (8.65 g ± 0.29 SEM, p < 0.0001) littermates.
Dissections following MommeD9 -/+ (determined by GFP fluorescence and progeny testing) intercrosses revealed a pattern similar to that seen for MommeD5 and MommeD6, suggesting MommeD9 -/- embryos arrest before 9.5 dpc (Table 3). In the case of MommeD10 the data suggest that death of homozygotes occurred after birth (Table 3), and preliminary data collected at P7 indicated death in the first week of life (data not shown). Some MommeD10 -/- individuals survived to weaning but they were extremely rare. This was confirmed by genotyping once the point mutation was identified.
So in all ten MommeDs produced so far, homozygosity for the mutation is associated with embryonic or perinatal lethality (Tables 3 and 4).
Abnormal phenotypes associated with heterozygosity for MommeD7-D10
Extensive phenotyping of the heterozygous MommeD mutant lines has not been carried out. However, in some cases heterozygous effects were obvious, for example, the haematopoietic defect in MommeD7 -/+ mice described above. We have also noticed some litter size reduction during the breeding of these strains. The data for the breeding of MommeD7, D8, D9 and D10 are shown in Figure 4. Following crosses between heterozygous males and wild-type females in the FVB strain, we found significant litter size reductions in the cases of MommeD9 and MommeD10, but not in the cases of MommeD7 and MommeD8. A breakdown of the offspring by sex and genotype revealed that for MommeD9 and MommeD10, the litter size reduction was associated with a deviation from Mendelian patterns of inheritance (p < 0.05 in both cases) and a reduction in the number of mutants ( Figure 4). These two cases of transmission ratio distortion have not been investigated further but they do suggest that heterozygosity for the MommeD mutations is associated with some level of disadvantage. There also appears to be a skewed sex ratio in the wild-type offspring of MommeD9 sires, suggesting the phenotype of the father can affect his wild-type offspring. While we have not characterized this in any more detail, the idea of a paternal effect is not new. We have previously published examples of paternal effects resulting from haploinsufficiency of modifiers of epigenetic gene silencing in the mouse [12].
Mapping
We have mapped the mutations in all ten cases to relatively small regions of the genome ( Table 4). The mapping of MommeD1-D6 has been documented [10]. Here we report the mapping of MommeD7-10. MommeD7 maps to a 0.25 Mb region on chromosome 7 between markers D7Mit220 and rs13479441 (using 134 phenotypically mutant and 135 phenotypically wild-type mice). This region contains 10 genes.
MommeD8 maps to a 4 Mb region on chromosome 4 between markers rs6337795 and D4Mit279 (using 234 phenotypically mutant and 177 phenotypically wild-type mice). This region contains 54 genes. MommeD9 maps to a 3 Mb region on chromosome 7 between markers rs31712695 and rs6193818 (using 103 phenotypically mutant and 127 phenotypically wild-type mice). This region contains 80 genes. MommeD10 maps to a 4 Mb region between markers D5Mit165 and rs13478547 on chromosome 5. Twenty-four phenotypically homozygous and 312 phenotypically non-homozygous (heterozygous and wild-type mice combined) were used (see Materials and methods). These data show that each of the ten MommeD mutations maps to a unique region of the genome.
MommeD5 has a mutation in Histone deacetylase 1
MommeD5 was localized to a 1.42 Mb region on chromosome 4 flanked by the markers rs27486641 and rs27541967 [10] (Table 4). This interval contains 46 genes and Hdac1 was chosen as the best candidate because of its known role in chromatin modification. Sequencing of all exons, including exon/intron boundaries, from two heterozygous and two wild-type individuals revealed a 7 bp deletion (GAAGCCA) in exon 13 in the mutants (Figure 5a). This mutation was subsequently verified in over 100 mice classified as mutants by GFP expression profiling. The chance of a second point mutation occurring in a functional region in a linked interval of this size is extremely small. Using the estimated mutation rate following ENU mutagenesis of 1 in 1.82 Mb [15,16], the probability of such an event can be calculated with an online tool [15,17], which takes into account the amount of coding sequence in a given linked interval. In this case, the probability of a second point mutation occurring in the coding region is p < 0.0005.
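As a rough sketch of how such a probability can be obtained: if additional ENU-induced hits are modeled as a Poisson process at the cited rate of roughly 1 per 1.82 Mb, the chance of at least one further coding hit depends only on the coding sequence content of the interval. The 50 kb coding-sequence figure below is hypothetical, and this simplified model will not reproduce the exact p values reported here:

```python
import math

def p_second_hit(coding_bp, rate_per_bp=1 / 1.82e6):
    """Probability of at least one additional ENU mutation landing in
    the coding sequence of a linked interval (simple Poisson model)."""
    lam = coding_bp * rate_per_bp   # expected number of coding hits
    return 1 - math.exp(-lam)

# e.g. ~50 kb of coding sequence in the interval (hypothetical figure):
print(f"{p_second_hit(50_000):.4f}")  # -> 0.0271
```

The key point is that the probability scales with coding base pairs, not interval length, which is why a gene-dense 1.42 Mb interval still yields a very small p.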
The mutation produces a frameshift, resulting in changes to the last 12 amino acids, and an additional 25 amino acids. Protein modeling predictions based on human HDAC8, for which the crystal structure has been solved [18,19], suggest that the carboxyl terminus of Hdac1 is relatively unstructured and so the mutation is unlikely to affect the stability of the protein (J Matthews, personal communication). An antibody that recognizes the carboxyl terminus of Hdac1 showed a 50% reduction of binding in 10.5 dpc MommeD5 -/+ embryos, and negligible binding in MommeD5 -/- embryos (Figure 5b), confirming that this region of the protein has been altered in the MommeD5 line. An antibody that recognizes the amino terminus of Hdac1 showed that the levels of the protein are not altered between mutant and wild-type mice (Figure 5b). Lysine 476 at the carboxyl terminus has been shown to be sumoylated and important for enzymatic function of the wild-type protein [20], and the absence of this amino acid in the MommeD5 mutant protein is likely to impair function. A knockout of Hdac1 has been reported and the homozygous embryos died around 9.5 dpc [21], similar to the time of death observed in MommeD5 -/- embryos (Figure 5c). Together, these results argue that the mutation in Hdac1 is causative of the MommeD5 phenotype. Consistent with this, the level of Hdac2 increased in both MommeD5 -/+ and MommeD5 -/- embryos, as predicted from the reports of compensatory upregulation of Hdac2 in embryonic stem cells null for Hdac1 [21]. Indeed, this upregulation may explain why MommeD5 was identified as an enhancer, rather than a suppressor, of variegation. Loss of histone deacetylase function is generally associated with transcriptional activation, but exceptions to this have been reported and the upregulation of Hdac2 could explain these results [22].

Figure 3 Hematopoietic abnormalities in MommeD7 mice.

Table 4 All mapping data are current for Ensembl build 37.1, release 49. *Reported in [11]. †Reported in [12].

Figure 4 Genotypes and sex of offspring, and litter size following paternal transmission of MommeD7-D10.
MommeD10 has a mutation in Baz1b
MommeD10 was localized to a 4 Mb region on chromosome 5 flanked by the markers D5Mit165 and rs32250347 (Table 4). Interestingly, this interval encompasses the region syntenic with the Williams Beuren syndrome (WBS) critical region in humans. WBS, also known as Williams syndrome, is an autosomal dominant disorder affecting approximately 1 in 10,000 individuals. The classic presentation of WBS includes a characteristic craniofacial dysmorphology (elfin face), supravalvular aortic stenosis, multiple peripheral pulmonary arterial stenoses, statural deficiency, infantile hypocalcaemia and a distinct cognitive profile with mild mental retardation. The linked interval for MommeD10 contains 52 genes and Baz1b was chosen as the best candidate because it contains a bromodomain (a domain commonly associated with chromatin remodelers) and has recently been shown to form at least two chromatin remodeling complexes and associate with replication foci and promoters [23-25]. Sequencing of all exons, including exon/intron boundaries, from two homozygous, one heterozygous and one wild-type individual revealed one point mutation (T to G transversion) in exon 7 in the mutants (Figure 6a). This mutation was subsequently verified in over 100 mice classified as mutants by GFP expression profile. The mutation results in a non-conservative amino acid change, L733R, in a highly conserved region of the protein (Figure 6b). Western blot analysis showed reduced levels of Baz1b protein in both embryonic and adult MommeD10 -/- tissue, with MommeD10 -/+ tissue showing intermediate levels (Figure 6c and data not shown), suggesting that the mutant protein is much less stable than its wild-type counterpart. Quantitative real-time PCR performed on cDNA from 14.5 dpc embryos showed all three genotypes have similar levels of mRNA (Figure 6d).
Effects of MommeD5 and MommeD10 on DNA methylation at the transgene locus
Transgene silencing can be associated with changes in both DNA methylation [26,27] and chromatin accessibility [28]. This particular transgene promoter consists of a GC-rich segment of the human alpha-globin promoter, which we were unable to analyze by bisulfite sequencing because the cloned bisulfite converted fragment was refractory to growth in bacteria. The transgene also contains the HS-40 enhancer, which is known to regulate the locus in humans [29]. We analyzed the methylation state at this region by bisulfite sequencing. As predicted from the variegated nature of the transgene expression, the methylation pattern differed from clone to clone in all cases (data not shown). The percentage of methylated CpGs in the HS-40 element was approximately 55% (averaged across all clones) in spleen from 4-week-old wild-type FVB/NJ mice (Figure 7a). Samples from MommeD5 -/+, MommeD10 -/+, and MommeD10 -/- mice showed similar levels of CpG methylation (52%, 47%, 59% respectively; Figure 7a). Mice heterozygous for a null allele of Dnmt3b, which showed an increase in expression of the GFP transgene from 37 ± 3% in the wild-type mice to 55.5 ± 2.5% in the Dnmt3b +/- mice (in both cases mice were hemizygous for the transgene; see Materials and methods), showed a decrease in CpG methylation at the HS-40 element (31%; Figure 7b) compared to that seen in the wild-type C57BL/6J mouse strain (60%; Figure 7b). These data suggest that MommeD5 and MommeD10 mutants alter the expression of the transgene by changing the chromatin state rather than by altering DNA methylation levels. This is consistent with the fact that both genes encode proteins involved in histone modification and chromatin remodeling [21,23-25,30-33].
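The clone-averaged methylation percentages reported above reduce to a pooled count of methylated CpG calls across bisulfite clones; a minimal sketch with invented clone data:

```python
def percent_methylated(clones):
    """Pooled CpG methylation across bisulfite-sequenced clones; each
    clone is a string of '1' (methylated) / '0' (unmethylated) calls
    at the CpG positions of the amplicon (e.g. the HS-40 element)."""
    calls = "".join(clones)
    return 100 * calls.count("1") / len(calls)

# Invented clone calls illustrating clone-to-clone variegation:
clones = ["110100", "011010", "101011", "010101"]
print(round(percent_methylated(clones)))  # -> 54
```

Because the pattern varies clone to clone under variegation, the informative statistic is this pooled average rather than the pattern of any single clone.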
Modifiers identified in this screen include DNA methyltransferases, chromatin remodelers and genes involved in histone modification, all of which have wide-ranging effects in the genome, making it difficult to unravel direct and indirect effects at any particular locus.
Craniofacial analysis of MommeD10 mice
Surviving MommeD10 homozygotes were significantly smaller than littermates at weaning (Student's t-test, p < 0.0001; Figure 8a). A similar size differential was evident in utero at 18.5 dpc (Student's t-test, p < 0.01), indicating that this is not simply due to poor post-natal feeding (Figure 8a). MommeD10 homozygotes also appeared to have widened, bulbous foreheads and shortened snouts (Figure 8b). To examine the craniofacial phenotype more accurately, three heads from 4-week-old male mice of each genotype (MommeD10 -/-, MommeD10 -/+ and MommeD10 +/+ ) were subjected to micro-computed tomography. Heads from one 4-week-old female of each genotype were also examined at this level. They followed the same trend as males. Visual inspection of the three-dimensional reconstructions confirmed the original observation that homozygotes' skulls were more bulbous and showed a flattening of the nasal bone and upward curvature of the nasal tip (Figure 8c).
Twenty cranial landmarks and nine mandibular landmarks were located on each skull using approximately 70 micron resolution datasets and inter-landmark measurements were compared (Figure 8d and Additional data file 1). Statistical analyses were carried out using the data collected from males only. Homozygote skulls were significantly different to wild type (Student's t-test, length:height ratio, p < 0.01; width:height ratio, p < 0.01; length:width ratio, p < 0.05), confirming the bulbous appearance of the skulls on the reconstructed images. Much of this difference could be attributed to reduction of the parietal and nasal bones (both > 12.5% shorter in homozygotes compared to an overall mean length and width reduction of approximately 9%). The reduced parietal bone length and the reduction and upward angulation of the nasal bones in these mice (Figure 8c, d) are reminiscent of the decrease in the volume of the parieto-occipital region and the appearance of the nose in WBS patients [34,35]. Heterozygotes also had a decreased cranium width:height ratio (Student's t-test, p < 0.05) and decreased length:height ratio (Student's t-test, p < 0.05) compared to wild-type skulls. Of note, heterozygotes showed a reduction in palatine bone width of similar magnitude to that seen in homozygotes, suggesting a greater sensitivity of some parts of the skull to decreased Baz1b protein levels. Measurements of the lower jaw revealed relative mandibular hypoplasia in homozygotes that was most pronounced in the posterior region (approximately 20% reduction), encompassing the condyle, angle and coronoid processes (Figure 8d and Additional data file 1). The posterior aspects of the mandibles of heterozygotes were also reduced in size when compared to wild-type mandibles, albeit to a lesser degree than in the homozygotes.
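The shape comparisons above amount to computing per-skull ratios of inter-landmark distances and comparing group means; a sketch with invented measurements (all values hypothetical, not taken from the micro-CT data):

```python
from statistics import mean

def ratio_summary(skulls):
    """Mean cranium shape ratios across a group of skulls; each skull
    is a dict of inter-landmark distances (units arbitrary)."""
    return {
        "length:height": mean(s["length"] / s["height"] for s in skulls),
        "length:width": mean(s["length"] / s["width"] for s in skulls),
    }

# Invented measurements: homozygote skulls ~20% shorter, similar height
wt = [{"length": 22.0, "width": 10.0, "height": 7.8},
      {"length": 21.8, "width": 10.2, "height": 7.9},
      {"length": 22.2, "width": 9.9, "height": 7.7}]
hom = [{"length": 17.5, "width": 9.2, "height": 7.6},
       {"length": 17.8, "width": 9.0, "height": 7.7},
       {"length": 17.2, "width": 9.3, "height": 7.5}]

print(ratio_summary(wt))
print(ratio_summary(hom))  # lower ratios -> more bulbous appearance
```

Using ratios rather than raw distances separates shape change (bulbousness) from the overall size reduction also present in the homozygotes.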
Expression of Baz1b during mouse embryogenesis
It has previously been shown that Baz1b is expressed in the mouse embryo from around 9.5 dpc and whole mount in situ at 11.5 dpc showed expression in limb buds, tail and brain [24]. We have gone on to characterize the expression of Baz1b in more detail, and show that at 8.25 dpc Baz1b is expressed in the headfolds, the caudal tail bud region and the presumptive hindbrain in a pattern reminiscent of rhombomere staining (Figure 9a, f). From 9.5 dpc expression is evident in the somites and in the forelimb bud as it emerges from the lateral plate mesoderm (Figure 9b). Diffuse mesenchymal expression in both the forelimb and hindlimb continues until 12.5 dpc when it is restricted to the interdigital mesenchyme (data not shown).
In the facial primordia, Baz1b is expressed in branchial arch 1 as it first emerges (Figure 9g). Baz1b is also expressed in the frontonasal process (Figure 9b) and later in the mesenchyme of both the medial and lateral nasal prominences as they elevate to surround the nasal pits (Figure 9c, d, h, i). Baz1b is expressed strongly in all the major facial primordia from early in embryogenesis.

Figure 8 (c) Left side: lateral views show the overall size and shape of heterozygous skulls is similar to that of wild-type skulls, whereas skulls of homozygotes were around 20% shorter. Homozygous skulls showed variable anomalies, but consistently had a bulbous appearance, and a short, flattened, or upwardly angulated nasal bone (yellow arrowhead). Slight angulation of the nasal bones was also noted in one heterozygote. Right side: dorsal view of the homozygote skull shown on the left side showing the abnormal shape and more rostral connection of the zygomatic process with the squamosal bone (yellow arrow), skewing of the midline frontal bone suture (black arrow) and minor bilateral anomalies of the frontal:parietal suture (black arrowheads). (d) Twenty cranial landmarks and nine mandibular landmarks (based on those of Richtsmeier [49]) were located on each of nine skulls and inter-landmark measurements recorded. The mean value of each measurement, including analysis of cranium height:width and cranium length:height ratios, was compared between homozygous, heterozygous and wild-type animals.
A possible role for Baz1b in Williams syndrome
Overall, the skull shape in mutant animals is reminiscent of the head shape seen in WBS, including a small upturned nose with flat nasal bridge, micrognathia (or mandibular hypoplasia), malocclusion, bi-temporal narrowing and prominent forehead [34]. WBS is known to be associated with a hemizygous deletion of approximately 28 genes in humans, but which of these genes are responsible for the craniofacial phenotype remains controversial. People with atypical deletions, and varying degrees of craniofacial abnormalities, implicate both proximal and distal ends of the deletion, suggesting that more than one gene is responsible [36-41]. Tassabehji and colleagues [42] reported craniofacial defects in a transgenic (c-myc; Tgf-) mouse line that harbored a chromosomal translocation fortuitously disrupting the Gtf2ird1 gene, the human orthologue of which resides in the WBS critical interval [43]. Mice homozygous for this transgene produced little Gtf2ird1 mRNA and exhibited craniofacial dysmorphism, suggesting a role for Gtf2ird1 in the craniofacial phenotype. Mice hemizygous for the transgene were indistinguishable from wild-type animals. Disruption of Gtf2ird1 in this mouse was associated with a 40 kb deletion at the site of integration, the addition of 5-10 tandem copies of a c-myc transgene, and translocation of the entire segment to chromosome 6 [43], providing opportunity for disruption to the expression of other genes, such as Baz1b, in the region. A targeted knockout of Gtf2ird1, produced by others, failed to find craniofacial dysmorphology or dental abnormalities [44]. We checked the sequence and expression of Gtf2ird1 in MommeD10 mutants and found no abnormalities (data not shown). The chance of a second point mutation occurring in a coding region in this linked interval is extremely small (p = 0.0008, based on a mutation rate of 1 in 1.82 Mb) [15-17].
Our studies show that the chromatin remodeler Baz1b is expressed strongly in cranial neural crest-derived mesenchyme that drives facial morphogenesis and that homozygosity for a missense mutation in Baz1b produces an array of craniofacial features that are similar to those that characterize the typical WBS face. Significantly, some craniofacial features are also apparent in heterozygotes. These results suggest that reduction in Baz1b protein levels contributes to the elfin facies characteristic of WBS and that WBS is, at least in part, a chromatin-remodeling factor disease.
Conclusion
Extension of the screen has produced four new MommeDs, MommeD7-D10, all of which behave in a semidominant fashion, as do the six previously reported [10]. We have now identified the underlying genes in five of the ten cases, two of which, Hdac1 and Baz1b, we report here. Both are already known to be involved in epigenetic processes, further validating the screen design. In the case of Baz1b this is the first report of a mouse carrying a disrupted allele at this locus and we have shown a role for the protein in craniofacial development.
The screen, designed primarily to identify genes involved in the epigenetic gene silencing of foreign DNA, such as transgenes, has revealed the critical role that such genes play in embryonic and fetal development. It is interesting that at least half of the MommeDs are important during gastrulation. Furthermore, the identification of heterozygous effects suggests that a reduction in the amount of protein product of less than 50% has effects on the health of the mouse. One of the hallmarks of epigenetic gene silencing across all multicellular organisms from plants to humans is the stochastic nature by which they operate [45] and these studies re-emphasize the importance of probabilistic events during embryogenesis. We believe that this screen will provide new tools with which to study these processes.

Figure 9 Whole mount in situ hybridization analysis in mouse embryos shows expression in the developing facial prominences (panels a-i; stages from 8.25 to 11.5 dpc).
Materials and methods

Mouse strains
Wild-type inbred C57BL/6J and FVB/NJ mice were purchased from ARC Perth. Procedures were approved by the Animal Ethics Committee of the University of Sydney and the Animal Ethics Committee of the Queensland Institute of Medical Research. The ENU screen was carried out in the FVB/NJ inbred transgenic line as described previously [10]. All MommeD lines were maintained in this strain unless stated otherwise. Most crosses between MommeD individuals were performed on individuals three generations or more removed from the MommeD progenitor. A total of 1,000 G1 offspring were screened at 3 weeks of age, from which ten heritable dominant mutations were identified.
The Dnmt3b null mice were a kind gift from En Li. They have been subsequently backcrossed for more than ten generations to the C57BL/6J strain. GFP fluorescence in Dnmt3b +/- mice was analyzed in the F1 offspring of crosses between Dnmt3b +/- mice and the FVB/NJ transgenic line, and as such each mouse was hemizygous for the transgene.
Flow cytometry
GFP fluorescence in erythrocytes was analyzed by flow cytometry at weaning. A drop of blood was collected in Osmosol buffer (Lab Aids Pty Ltd, Narrabeen, NSW, Australia) and analyzed on a FACScan (Becton Dickinson, Franklin Lakes, NJ, USA) with excitation at 488 and 550 nm. The 488 nm channel predominantly measures GFP fluorescence and the 550 nm channel measures autofluorescence. The data were analyzed using CELL QUEST software with a GFP-positive gate set to exclude 99.9% of wild-type erythrocytes. Mean fluorescence was calculated using cells within the positive gate. Histograms depict only the GFP fluorescence channel.
Linkage analysis
For MommeD7, D8 and D9, FVB/C57 F1 heterozygous individuals were backcrossed to C57, and linkage analysis performed on their offspring. Phenotype was assigned based on GFP fluorescence profile. A panel of microsatellite markers that differ in size between FVB and C57 were used to localize the mutation to a small area of the genome. Mice wild type for the mutation should only have C57 chromosomes at the linked interval, while mice heterozygous for the mutation should carry both FVB and C57 chromosomes.
Linkage analysis in MommeD10 was carried out using an FVB/C57 F1 MommeD10 -/+ intercross to produce 337 F2 individuals. MommeD10 -/-mice were distinguished from their littermates by their dramatically reduced size at weaning and their reduced GFP expression profile. Recombination events allowed the linked region to be localized to a small genomic interval. MommeD10 -/-mice should only carry FVB chromosomes at the MommeD10 linked region, while the remaining mice should be either FVB/C57 or C57/C57.
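The backcross mapping logic described above, retaining only markers with no recombinants separating them from the mutation, can be sketched as follows. The five mice and all genotype calls are invented; rs13479440 is a hypothetical marker name, while D7Mit220 and rs13479441 are the flanking markers named in the Mapping section:

```python
def concordant_markers(markers, genotypes, phenotypes):
    """Keep markers where every phenotypic mutant is FVB/C57 ('FC') and
    every phenotypic wild type is C57/C57 ('CC') -- i.e. markers with
    no recombinants separating them from the mutation in the backcross."""
    return [
        m for m in markers
        if all((genotypes[m][i] == "FC") == (ph == "mut")
               for i, ph in enumerate(phenotypes))
    ]

phenotypes = ["mut", "mut", "wt", "wt", "mut"]     # five invented mice
genotypes = {
    "D7Mit220":   ["FC", "CC", "CC", "CC", "FC"],  # one recombinant
    "rs13479440": ["FC", "FC", "CC", "CC", "FC"],  # fully concordant
    "rs13479441": ["CC", "FC", "CC", "FC", "FC"],  # several recombinants
}
print(concordant_markers(genotypes.keys(), genotypes, phenotypes))
# -> ['rs13479440']
```

Recombinants at the flanking markers, with none in between, are exactly what defines a linked interval: the mutation must lie inside the fully concordant region.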
Reticulocyte counts
Blood smears were made from blood taken from the tail tip of MommeD7 -/+ and MommeD7 +/+ mice, and stained with New methylene blue. Full blood analyses were done on the automated haematology analyzer Sysmex Xe-2100 (Sysmex Corporation, Woodlands Spectrum, Singapore).
Reticulocyte analysis by FACS
Nucleated cells and reticulocytes were separated from mature erythrocytes based on propidium iodide fluorescence levels. An RNase control was performed and the presumptive reticulocyte cell population could no longer be detected. Mean GFP fluorescence was determined for reticulocyte and mature erythrocyte cell populations. This was done essentially as described in [46]. Three adult MommeD7 -/+ and three wild-type littermate controls were used. Approximately 25 μl of whole blood was collected from the tail in cold Osmosol buffer (Lab Aids Pty Ltd). Cells were fixed for 1 h at 4°C in 1% paraformaldehyde in Osmosol and then washed once in cold Osmosol. Cells were then permeabilized by adding -20°C ethanol to a cell pellet whilst vortexing, and incubated with rotation for 2.5 h at 4°C. A 40 μg/ml solution of propidium iodide was added to pelleted cells and the cells were incubated at 37°C for 30 minutes. Analysis was performed on a FACScan (Becton Dickinson). The data were analyzed using CELL QUEST software.
Genotyping assay
Following identification of the MommeD10 point mutation, genotyping was carried out by AciI digestion of a PCR product of exon 7 of Baz1b (primers available on request). The AciI site is created by the MommeD10 point mutation. Following identification of the MommeD5 mutation, genotyping was carried out by PCR amplification across the deleted region, and gel electrophoresis using Ultra low-range agarose (Bio-Rad, Hercules, CA, USA).
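The PCR-RFLP logic behind the AciI-digest genotyping can be sketched in a few lines. This is a minimal illustration with hypothetical amplicon sequences (not the real Baz1b exon 7 sequence); AciI recognizes the sequence CCGC (GCGG on the opposite strand), so the mutant product is cut while the wild-type product is not:

```python
# Minimal PCR-RFLP genotyping sketch (hypothetical sequences): the point
# mutation creates an AciI recognition site, so a digest distinguishes
# mutant (cut) from wild-type (uncut) PCR products.
ACII_SITES = ("CCGC", "GCGG")  # AciI site on either strand

def has_acii_site(seq):
    seq = seq.upper()
    return any(site in seq for site in ACII_SITES)

def genotype_from_digest(seq):
    return "mutant allele (cut)" if has_acii_site(seq) else "wild-type allele (uncut)"

wild_type = "ATGCTACGTCAGGTTA"  # hypothetical amplicon, no AciI site
mutant    = "ATGCTACCGCAGGTTA"  # hypothetical G->C change creating CCGC
print(genotype_from_digest(wild_type))  # wild-type allele (uncut)
print(genotype_from_digest(mutant))     # mutant allele (cut)
```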
RNA preparation and quantitative RT-PCR
Total RNA was extracted from 14.5 dpc embryo heads using TRI reagent (Sigma-Aldrich). cDNA was synthesized from total RNA using SuperScriptII reverse transcriptase (Invitrogen) and random hexamer primers. Quantitative RT-PCR was performed with the Platinum SYBR Green qPCR Super-Mix-UDG with primers designed to span exon/intron boundaries (available on request). All reactions were performed in triplicate and normalized to both GAPDH and ribosomal highly basic 23-kDa protein (Rpl13A) [47]. The reaction was run on a Corbett Research Rotor-Gene (Qiagen, Doncaster, Vic, Australia).
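The normalization to two reference genes can be illustrated with a standard ΔCt calculation. This Python sketch assumes ~100% amplification efficiency (expression scaling as 2^-Ct) and combines the references by averaging their Ct values, which corresponds to the geometric mean of their linear expression; the exact combination used by the authors is not stated, and the Ct values below are hypothetical:

```python
import math

# Illustrative relative-expression calculation: normalize a target gene's Ct
# to the mean Ct of two reference genes (e.g. GAPDH and Rpl13A). Averaging
# Ct values equals taking the geometric mean of the linear reference levels.
def relative_expression(ct_target, ct_refs):
    ref_ct = sum(ct_refs) / len(ct_refs)
    delta_ct = ct_target - ref_ct
    return 2.0 ** (-delta_ct)  # relative to the reference geometric mean

# Hypothetical triplicate-averaged Ct values:
expr = relative_expression(ct_target=25.0, ct_refs=[20.0, 21.0])
print(round(expr, 4))  # 2^-(25 - 20.5) = 2^-4.5
```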
Micro-CT analysis
Three heads of 4-week-old male mice of each genotype, and one head of female mice of each genotype (MommeD10 -/-, MommeD10 -/+ and MommeD10 +/+ ) were scanned at 8.7 micron resolution using a SkyScan 1076 micro-computed tomography unit and the data set reduced to approximately 17 microns for three-dimensional reconstruction of the serial slices (CT Analyser software V.1.6.1.1; SkyScan, Kontich, Belgium). Twenty cranial landmarks and nine mandibular landmarks (based on those of Richtsmeier [49]) were located on each of nine skulls and inter-landmark measurements recorded using DataViewer software (V.1.3.0.0; SkyScan). To verify accuracy of the measurements, any landmarks showing marked differences between genotype groups were re-located on a separate day and the measurement repeated. The mean value of each measurement was compared between homozygotes, heterozygotes and wild-type animals.
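An inter-landmark measurement of the kind described above is simply the Euclidean distance between two 3-D landmark coordinates from the reconstruction. A minimal sketch with hypothetical coordinates:

```python
import math

# Sketch of an inter-landmark measurement: each craniofacial landmark is a
# 3-D point from the micro-CT reconstruction, and a measurement is the
# Euclidean distance between a pair of landmarks. Coordinates are hypothetical.
def landmark_distance(p, q):
    return math.dist(p, q)

a = (0.0, 0.0, 0.0)    # hypothetical landmark 1
b = (3.0, 4.0, 12.0)   # hypothetical landmark 2
print(landmark_distance(a, b))  # 13.0
```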
RNA probes and whole-mount in situ hybridization
DIG-labeled RNA probes were transcribed from linearized DNA templates and used in whole mount in situ hybridization analysis of wild-type FVB/NJ mouse embryos at a range of gestational stages. The probe was directed to 1.1 kb of the 3' untranslated region of Baz1b. A sense probe was used in earlier experiments to confirm specificity of the antisense probe. Whole mount in situ hybridization was performed as previously described [50]. Embryo images were captured digitally using an Olympus SZX12 microscope with DP Controller software (Olympus Corporation).
"year": 2008,
"sha1": "e7df27dcd4b473b88794a52cb40fdeaf71076b7d",
"oa_license": "CCBY",
"oa_url": "https://genomebiology.biomedcentral.com/track/pdf/10.1186/gb-2008-9-12-r182",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "6d40df9ca182ce1e5458b65cc7b9d966a51ca4fb",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Insight of a lipid metabolism prognostic model to identify immune landscape and potential target for retroperitoneal liposarcoma
Introduction The exploration of lipid metabolism dysregulation may provide novel perspectives for retroperitoneal liposarcoma (RPLS). In our study, we aimed to investigate potential targets and facilitate further understanding of the immune landscape in RPLS through a prognostic model based on lipid metabolism-associated genes (LMAGs). Methods Gene expression profiles and corresponding clinical information of 234 cases were enrolled from two public databases and the largest retroperitoneal tumor research center of East China, including cohort-TCGA (n=58), cohort-GSE30929 (n=92), cohort-FD (n=50), cohort-scRNA-seq (n=4) and cohort-validation (n=30). Consensus clustering analysis was performed to identify lipid metabolism-associated molecular subtypes (LMSs). A prognostic risk model containing 13 LMAGs was established using the LASSO algorithm and multivariate Cox analysis in cohort-TCGA. ESTIMATE, CIBERSORT, XCELL and MCP analyses were performed to visualize the immune landscape. WGCNA was used to identify three hub genes among the 13 model LMAGs, which were preliminarily validated in both cohort-GSE30929 and cohort-FD. Moreover, TIMER was used to visualize the correlation between antigen-presenting cells and potential targets. Finally, single-cell RNA-sequencing (scRNA-seq) analysis of four RPLS samples and multiplexed immunohistochemistry (mIHC) were performed in cohort-validation to validate the discoveries of the bioinformatics analysis. Results LMS1 and LMS2 were characterized as immune-infiltrated and immune-excluded tumors, with significant differences in molecular features and clinical prognosis, respectively. Elongation of very long chain fatty acids protein 2 (ELOVL2), the enzyme that catalyzes the elongation of long-chain fatty acids and is involved in the maintenance of lipid metabolism and cellular homeostasis in normal cells, was negatively correlated with antigen-presenting cells and identified as a potential target in RPLS.
Furthermore, ELOVL2 was enriched in LMS2, with a significantly lower immunoscore and unfavorable prognosis. Finally, a high-resolution dissection through scRNA-seq was performed in four RPLS samples, revealing the entire tumor ecosystem and validating the previous findings. Discussion The LMS subgroups and the risk model based on LMAGs proposed in our study are both promising prognostic classifications for RPLS. ELOVL2 is a potential target linking lipid metabolism to immune regulation against RPLS, specifically for patients with LMS2 tumors.
Introduction
Retroperitoneal liposarcoma (RPLS) is a rare type of mesenchymal tumor, but the most common subtype of retroperitoneal sarcoma (1). It is characterized by accumulation of intracellular lipid, induction of adipocyte-specific genes (2), a dismal immunotherapy response and poor prognosis (3), owing to the significant challenges of large tumor size and adjacent organ involvement (4). In addition, other therapeutic strategies, such as combination chemotherapy and molecular targeted drugs, also have limited efficacy due to intrinsic chemo-resistance, even with histology-tailored neoadjuvant chemotherapy (5). Therefore, novel strategies are needed to improve the therapeutic condition of RPLS.
Aberrant lipid metabolism and lipid metabolism reprogramming are critically involved in drug resistance (6), the adaptation of the immune microenvironment (7), energy supply and cell signaling (8), and are regarded as a new hallmark of the tumor ecosystem (9). Emerging evidence indicates that targeting the lipid metabolism pathway is an attractive tumor treatment strategy (10). Previous studies have indicated that lipid metabolism-associated genes (LMAGs) exhibit potent capability in predicting the prognosis of various tumors (11)(12)(13)(14). However, lipid metabolism dysregulation in patients with RPLS remains unknown.
Immunotherapy has been extensively studied as a promising treatment, but has had limited therapeutic benefit in RPLS, which is considered a "non-immunogenic" and highly variable tumor (15). Increasing evidence suggests that alterations in lipid metabolism, including metabolite abundance and accumulation of lipid biomolecules, lead to local immunosuppression in the tumor microenvironment (16). However, the association between lipid metabolism abnormality and the immune microenvironment remains obscure in RPLS.
Elongation of very long chain fatty acids protein 2 (ELOVL2) is an enzyme that catalyzes the elongation of fatty acids with chain lengths greater than 18 carbons. Research has shown that ELOVL2 is involved in the maintenance of cellular homeostasis in normal cells (17). Specifically, ELOVL2 is implicated in the regulation of autophagy (18) and the activity of the mTOR signaling pathway (19), which plays a key role in the regulation of cell growth and proliferation. It has been suggested that the decline in ELOVL2 expression with age may contribute to the aging process and age-related diseases. Furthermore, mutations in the ELOVL2 gene have been associated with intellectual disability and developmental delay (20). However, further research is still needed to fully elucidate the functions of ELOVL2 in normal cells and its potential implications for cancer treatment.
Therefore, in this study, we explored the role of lipid metabolism dysregulation in RPLS through the LMAG-related immune landscape, using multiple bioinformatics methods. A novel LMAG-based prognostic risk model was established and validated in independent cohorts. To the best of our knowledge, this is the first study to advance the understanding and clinical application of lipid metabolism dysregulation in RPLS, and it may serve as a reliable reference for further target development.
Patient and clinical specimens
The cohort-FD consists of 50 RPLS patients (34% female and 66% male) with a mean age of 55 years. In total, 50 tumor samples were surgically resected and collected in Zhongshan Hospital, Fudan University between 2018 and 2020. For scRNA-seq, four fresh surgical specimens (four primary tumors and matched PBMC) were sequenced and incorporated in further analyses. All samples were confirmed by pathologists through both cytological detections during the surgery and the paraffin section after surgery. Clinical information, including demographics and tumor clinicopathologic characteristics of all cohorts were summarized in Supplementary Tables 1 and 2.
Database search
RNA transcriptome sequencing data of 150 RPLS patients and detailed characteristics were obtained from the TCGA (https://portal.gdc.cancer.gov/) and GEO (https://www.ncbi.nlm.nih.gov/geo/) databases. We excluded RPLS samples with no complete expression profile data and unknown overall survival (OS) or living status, and included clinical features such as gender, pathological grade, tumor stage and survival status in this study. Additionally, all the obtained data, TPM (transcripts per kilobase of exon model per million mapped reads) values, were normalized using the log2 (TPM + 1) transformation.
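The log2(TPM + 1) transformation described above can be shown in a few lines. A minimal Python sketch (the study's pipelines used R; values here are toy inputs):

```python
import math

# Minimal sketch of the normalization above: transform TPM values with
# log2(TPM + 1), so zero stays zero and large values are compressed.
def log2_tpm(tpm_values):
    return [math.log2(t + 1.0) for t in tpm_values]

print(log2_tpm([0.0, 1.0, 3.0, 1023.0]))  # [0.0, 1.0, 2.0, 10.0]
```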
Consensus clustering
Firstly, 135 genes were found to be associated with the prognosis of RPLS through univariate Cox regression analysis. Consensus clustering was performed according to the expression matrix of the 135 genes using the R package "ConsensusClusterPlus".
Construction, validation and evaluation of risk model based on LMAGs
Least absolute shrinkage and selection operator (LASSO) analysis was performed to downsize the previously filtered OS- and DFS-related genes, using the "glmnet" R package. The minimum lambda value was defined as the optimal value. The genes applied for the establishment of the risk model were selected by multivariate Cox regression analysis. The OS-related risk score of each patient in each cohort was calculated as: OS risk score = 1.866577428*ACOT7 - 0.040721477*ARSJ - 0.451127087*ARSK + 0.479584604*CPT1B - 0.000114123*CYP21A2 + 0.424535615*ELOVL2 - 1.414705024*FDX2 - 0.32557149*GSTM4 + 0.692304756*HACD1 - 0.670006158*HSD17B14 - 0.763286635*MTMR8 - 0.734212004*ORMDL2 + 1.159963024*TNFRSF21. The DFS-related risk score of each patient in each cohort was calculated as: DFS risk score = 0.468299088*ACOT1 - 0.286593499*FABP6. Patients were divided into high- and low-risk groups according to the median value. ROC analysis and the Martingale residuals method were used to evaluate the predictive efficiency of the model.
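The OS risk-score formula above is a weighted sum of the 13 model genes' expression values, followed by a median split. A minimal Python sketch (the coefficients are those quoted in the text; `split_by_median` is an illustrative helper, and the uniform expression profile is a toy input):

```python
# Sketch of the OS risk-score calculation defined above, using the quoted
# coefficients, plus a median split into high- and low-risk groups.
OS_COEFFICIENTS = {
    "ACOT7": 1.866577428, "ARSJ": -0.040721477, "ARSK": -0.451127087,
    "CPT1B": 0.479584604, "CYP21A2": -0.000114123, "ELOVL2": 0.424535615,
    "FDX2": -1.414705024, "GSTM4": -0.32557149, "HACD1": 0.692304756,
    "HSD17B14": -0.670006158, "MTMR8": -0.763286635, "ORMDL2": -0.734212004,
    "TNFRSF21": 1.159963024,
}

def os_risk_score(expression):
    """expression: dict mapping gene symbol -> normalized expression value."""
    return sum(coef * expression[gene] for gene, coef in OS_COEFFICIENTS.items())

def split_by_median(scores):
    ordered = sorted(scores)
    n = len(ordered)
    median = (ordered[n // 2] + ordered[(n - 1) // 2]) / 2
    return ["high" if s > median else "low" for s in scores]

# With all genes at expression 1.0, the score is just the coefficient sum:
uniform = {gene: 1.0 for gene in OS_COEFFICIENTS}
print(round(os_risk_score(uniform), 6))  # 0.223221
```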
Functional analysis
Differentially expressed genes (DEGs) between the two LMSs were visualized using the R package "Limma". Gene Ontology (GO) analysis and Kyoto Encyclopedia of Genes and Genomes (KEGG) analysis were performed through the "clusterProfiler" R package and visualized by Metascape (21). Based on the "GO biological process" gene dataset downloaded from the Molecular Signatures Database, Gene Set Enrichment Analysis (GSEA) was conducted to analyze the differences between subtypes. Compared with LMS2, the upregulated differential genes in LMS1 were visualized by a PPI network through the Search Tool for the Retrieval of Interacting Genes (STRING) online tool, and the minimum required interaction value was set at 0.7 (22).

cBioPortal analysis

cBioPortal for cancer genomics (23) (cBioPortal, http://www.cbioportal.org, version v3.2.11) is an open-access online tool integrating the raw data from large-scale genomic projects. In this study, cBioPortal was used to visualize the gene alterations of potential antigens against tumors in cohort-TCGA, including the correlation between ELOVL2 gene expression and DNA methylation.
TIMER analysis
Tumor Immune Estimation Resource (24) (TIMER, https://cistrome.shinyapps.io/timer/) is a comprehensive resource for the systematic analysis of immune infiltrates across diverse cancer types. In this study, TIMER was used to visualize the correlation between antigen-presenting cell (APC) infiltration and the expression of the identified potent antigens. Partial Spearman's correlation was used to perform purity adjustment. Spearman correlation analysis was used to analyze the correlation between the abundance of APCs and the expression of the selected antigens. Statistical significance was set at P < 0.05.
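The Spearman correlation underlying this analysis is a Pearson correlation computed on ranks. A minimal Python illustration (a stand-in for TIMER's built-in statistics, without tie handling and without the purity adjustment; inputs are toy values):

```python
# Illustrative Spearman rank correlation: rank both variables, then compute
# Pearson r on the ranks. No tie handling, for clarity.
def ranks(values):
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = float(rank)
    return r

def spearman(x, y):
    rx, ry = ranks(x), ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

# Any monotonically increasing relationship gives rho = 1:
print(round(spearman([1.0, 2.0, 3.0, 4.0], [10.0, 20.0, 35.0, 70.0]), 6))  # 1.0
```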
Construction of immune landscape
The immune score and stromal score of each sample in cohort-TCGA were calculated by the "estimate" package in R. The proportion of the 22 types of immune cells in the tumor immune microenvironment (TIME) of each sample was evaluated via the CIBERSORT algorithm in R software (25).
Intra-cohort immune classifications
Unsupervised clustering of samples in each cohort was performed based on the metagene Z-score for the included populations of MCP-counter using R software, with the Euclidian distance and Ward's linkage criterion, using the gplots package. Cohort-TCGA and Cohort-GSE30929 were further divided into 5 groups (SIC-A, B, C, D and E) (26).
Establishment of LMAGs based nomogram
Univariate Cox regression analysis was performed to evaluate the prognostic value of identified signatures and clinicopathological features. Multivariate Cox regression analysis was used to further determine the independent prognostic factors. Two nomograms were established by the "rms" package for predicting OS and DFS.
The C-index and calibration plot were constructed to estimate the accuracy and consistency of the prognostic models.
Gene signatures for the functional orientation
The gene signatures used to determine the functional orientation were reported as previously described. Each signature was summarized as the following: immunosuppression (CXCL12, TGFB1, TGFB3 and LGALS1), T cell activation (CXCL9, CXCL10, CXCL16, IFNG and IL15), T cell survival (CD70 and CD27), regulatory T cells (FOXP3 and TNFRS…).

Estimation of TLS and immune cell enrichment

A panel of 12 chemokines highly expressed by TLS, including CCL2, CCL3, CCL4, CCL5, CCL8, CCL18, CCL19, CCL21, CXCL9, CXCL10, CXCL11 and CXCL13, was applied as the gene signature of TLS. The enrichment score of TLS was calculated by the single-sample gene set enrichment analysis (ssGSEA) method as implemented by an R package (27).
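A per-sample signature score of the kind used for the TLS chemokine panel can be illustrated with a simplified mean z-score, which is NOT the rank-based ssGSEA algorithm the authors used but conveys the idea of summarizing a gene set per sample. All values below are toy inputs:

```python
import statistics

# Simplified per-sample signature score (a mean z-score stand-in for ssGSEA):
# z-score each gene across samples, then average the z-scores of the
# signature genes within each sample.
TLS_SIGNATURE = ["CCL2", "CCL3", "CCL4", "CCL5", "CCL8", "CCL18",
                 "CCL19", "CCL21", "CXCL9", "CXCL10", "CXCL11", "CXCL13"]

def signature_scores(expr, genes):
    """expr: dict gene -> list of expression values (one per sample)."""
    n_samples = len(next(iter(expr.values())))
    zscores = {}
    for gene in genes:
        values = expr[gene]
        mu, sd = statistics.mean(values), statistics.pstdev(values)
        zscores[gene] = [(v - mu) / sd if sd else 0.0 for v in values]
    return [statistics.mean(zscores[g][i] for g in genes) for i in range(n_samples)]

# Toy example with two signature genes and two samples:
toy = {"CCL2": [1.0, 3.0], "CXCL13": [2.0, 6.0]}
print(signature_scores(toy, ["CCL2", "CXCL13"]))  # [-1.0, 1.0]
```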
Weighted gene co-expression network analysis

The WGCNA algorithm was used to identify the hub genes among model genes (28). Gene co-expression modules were identified after building a weighted gene co-expression network, and the association between the gene network and clinical phenotype was also explored. The WGCNA-R package was applied to establish the co-expression network of all genes in cohort-TCGA, and the genes with variance within the first 5000 were identified by the algorithm for subsequent analysis. The soft threshold β was determined by the function "sft$powerEstimate". The weighted adjacency matrix was transformed into a topological overlap matrix (TOM) to estimate the network connectivity, with hierarchical clustering being used to create the clustering tree structure of the TOM. Different branches of the clustering tree indicated different gene modules with different colors. Tens of thousands of genes were classified into modules based on having similar expression patterns (using their weighted correlation coefficients).
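The adjacency step at the heart of WGCNA can be sketched on a toy expression matrix. This Python illustration (a stand-in for the WGCNA-R package) uses the standard unsigned definition, adjacency = |Pearson r|^β, with the soft threshold β = 4 as set in the study; genes and values are hypothetical:

```python
import math

# Toy sketch of the WGCNA adjacency step: an unsigned weighted network
# defines adjacency as |Pearson r|^beta, where the soft threshold beta is
# chosen to approximate a scale-free topology.
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def adjacency(profiles, beta=4):
    genes = list(profiles)
    return {(g, h): abs(pearson(profiles[g], profiles[h])) ** beta
            for g in genes for h in genes if g != h}

toy = {"geneA": [1.0, 2.0, 3.0, 4.0],
       "geneB": [2.0, 4.0, 6.0, 8.0],   # perfectly correlated with geneA
       "geneC": [4.0, 3.0, 2.0, 1.0]}   # perfectly anticorrelated
adj = adjacency(toy, beta=4)
print(round(adj[("geneA", "geneB")], 6))  # 1.0
```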
Single-cell RNA sequencing
The single cell suspensions were converted to barcoded scRNA-seq libraries using the Chromium Single Cell 3′ Library, Gel Bead & Multiplex Kit, and Chip Kit (10x Genomics), aiming for 6,000 cells per library. Samples were processed using kits pertaining to the V2 barcoding chemistry of 10x Genomics. Single samples were always processed in a single well of a PCR plate, allowing all cells from a sample to be treated with the same master mix and in the same reaction vessel. For each experiment, all samples were processed in parallel in the same thermal cycler. Libraries were sequenced on an Illumina HiSeq4000, and mapped to the human genome (build GRCh38) or to the mouse genome (build mm10) using CellRanger software (10x Genomics, version 3.0.2).
Single-cell transcriptome data processing
The output cell-gene count matrix was processed with the Seurat (v 3.1.0) package of R software (version 3.6.1) for quality control and downstream analysis. Low-quality cells with < 200 genes or with > 40% mitochondrial genes were removed from the analysis. As the cells from tumor and adjacent normal tissues were loaded in batches for each patient, the data for each patient were treated as an individual Seurat object. The Seurat object for each patient was integrated with the Harmony algorithm (R package, Harmony, version 1.0). The top 50 principal components (PCs) were used for graph-based clustering to identify distinct groups of cells at the indicated resolution. In the subgroup analysis, significant PCs identified with the ElbowPlot() function were used for graph-based clustering of each cell cluster to identify subgroup cells based on t-SNE analysis (29). The cell types of the identified cell clusters were annotated based on the expression of known marker genes.
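The quality-control thresholds above (< 200 detected genes or > 40% mitochondrial counts) can be sketched as a simple per-cell filter. A pure-Python stand-in for the Seurat workflow, with hypothetical cell records:

```python
# Sketch of the QC filter described above: drop cells with fewer than 200
# detected genes or with more than 40% of counts from mitochondrial genes.
def passes_qc(n_genes, mito_fraction, min_genes=200, max_mito=0.40):
    return n_genes >= min_genes and mito_fraction <= max_mito

cells = [
    {"id": "cell1", "n_genes": 1500, "mito_fraction": 0.05},
    {"id": "cell2", "n_genes": 150,  "mito_fraction": 0.05},  # too few genes
    {"id": "cell3", "n_genes": 900,  "mito_fraction": 0.55},  # too mitochondrial
]
kept = [c["id"] for c in cells if passes_qc(c["n_genes"], c["mito_fraction"])]
print(kept)  # ['cell1']
```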
Multiplexed immunohistochemistry
mIHC was performed in four DDLPS cases of cohort-scRNA-seq. Briefly, fresh tumor tissues were fixed in 4% paraformaldehyde solution and embedded in paraffin. FFPE slides were made using 4 µm sections of the tumor samples. Deparaffinization and rehydration were performed with xylene and ethanol respectively, followed by microwave antigen retrieval using heated citric acid buffer (pH 6.0) for 10 minutes and endogenous peroxidase blocking in 3% H2O2 for 20 minutes. Blocker/Diluent was used to block nonspecific binding sites. Afterward, relevant primary antibodies were incubated for 1 hour at room temperature, followed by the corresponding secondary antibodies for 30 minutes. Slides were then incubated with fluorescein TSA Plus for 10 minutes (Akoya Biosciences, NEL861001KT), after which microwave antigen retrieval was repeated with the above steps until the last antibody was added. After multiplexing, DAPI (Sigma, D9542) was used to stain the nuclei. Antibodies and fluorescent dyes used for multiplexing are listed in Supplementary Table 3. The slides were scanned by a Vectra 3 automated high-throughput multiplexed biomarker imaging system (Akoya PhenoImager HT). Immune cells were classified into the following types: T cells (CD3+), B cells (CD20+), DC cells (CD11b+), NK cells (CD57+), macrophages (CD68+) and ELOVL2+ cells.
Statistical analysis
SPSS 22.0 (SPSS Inc., Chicago, IL, USA) and R 4.0.4 (R Foundation for Statistical Computing, Vienna, Austria; http://www.r-project.org/) were used for all statistical analyses. Univariate and multivariate Cox regression analyses, ROC curve analysis and Kaplan-Meier survival analysis were performed using R software and the corresponding R packages. Continuous data are expressed as the mean ± standard deviation (SD). The Wilcoxon test was used for comparisons between two groups, and the Kruskal-Wallis test was used for comparisons of prognosis among groups. Unless otherwise specified, all statistical tests were two-tailed, and P < 0.05 was considered statistically significant.
Identification of multi-omics landscape and prognostic LMAGs in RPLS
The whole study process, including consensus clustering, immune landscape visualization, DEG analysis, nomogram construction and WGCNA analysis, was systematically depicted in the workflow chart, as shown in Supplementary Table 4.
To systematically appraise lipid metabolism dysregulation in RPLS, 741 LMAGs were obtained from the MSigDB database.
Initially, we summarized the incidence of copy number variations (CNVs) and somatic mutations in cohort-TCGA, where 14862 amplified genes were screened to identify potent antigens ( Figure 1A), while missense mutation was the most common variant classification ( Figure 1B). Additionally, we investigated the incidence of CNV among these LMAGs. By evaluating the frequency of CNV status, existing CNV alterations of LMAGs were clarified, and the top 35 genes with amplified or deleted CNV status were summarized (Supplementary Figure 1A).
Subsequently, according to the transcriptomic data, consensus clustering was performed to cluster RPLS patients into two lipid metabolism subgroups (LMSs) (K = 2) ( Figure 1C) based on 135 overall survival (OS) related genes identified by univariable Cox analysis (Supplementary Figure 1B). 31 and 27 patients were clustered into LMS1 and LMS2, respectively (Supplementary Figure 1C). Heatmap visualization also indicated that prognostic LMAG profiles differed significantly between LMS1 and LMS2; LMS2 was enriched with LMAGs ( Figure 1D) but predicted poor prognosis ( Figure 1E). Interestingly, the two subtypes harbored heterogeneous somatic mutation profiles, with ATRX and B4GALNT1 as the most frequently mutated genes in LMS1, but MUC16 in LMS2 (Supplementary Figures 1D, E). However, no significant difference was found in genome altered fraction or mutation counts between LMS1 and LMS2 (Supplementary Figures 1F, G).
Moreover, we also investigated 26 disease free survival (DFS) related LMAGs through univariate Cox analysis (Supplementary Figure 1H). Intersecting the OS-related and DFS-related genes, 8 overlapping LMAGs (GSTM4, GRHL1, PI4K2B, GK3P, ARSK, ELOVL2, NR1H4, and ABHD4) were identified and deemed eligible for further screening of prognosis-relevant antigens (Supplementary Figure 1I). Their locations on chromosomes and expression levels are visualized in Figure 1F. The regulatory network described the comprehensive landscape of the 8 LMAGs, indicating their interactions, correlation features ( Figure 1G) and prognostic values ( Figure 1H). These findings indicated that the LMAGs classified RPLS patients into two subtypes with different molecular features and prognosis.
Heterogeneous functional enrichment and immune landscape in LMAGs subtypes
To better understand the innate difference in survival and the underlying signaling mechanisms between LMS1 and LMS2, DEG and functional enrichment analyses were performed. A total of 4144 DEGs were identified, of which 3586 genes were downregulated and 558 genes were upregulated in LMS2 compared with LMS1 ( Figure 2A). GO enrichment analysis indicated that these upregulated DEGs were involved in positive regulation of immune system process, immune response and regulation of immune system process ( Figure 2B). Similarly, KEGG enrichment analysis also validated pathways associated with cytokine-cytokine receptor interaction, the chemokine signaling pathway and the B cell receptor signaling pathway, all of which are part of the immune response ( Figure 2C). Meanwhile, PPI analysis further confirmed 15 submodules, all of which were closely associated with immunity and metabolism (Supplementary Figure 2A). These results indicate that the expression of LMAGs is closely related to immune-related biological processes, confirming that lipid metabolic reprogramming is significantly associated with the tumor immune microenvironment (TIME) in RPLS.
To further evaluate the dysregulation of immune and metabolic remodeling involved in RPLS, a series of TIME profiling analyses was conducted. Firstly, we illustrated the distribution of the six previously reported pan-cancer immune subtypes (C1-C6) (30): LMS1 and LMS2 were mostly clustered into C1 (Wound Healing), C2 (IFN-γ Dominant), C3 (Inflammatory), C4 (Lymphocyte Depleted) and C6 (TGF-β Dominant) (Supplementary Figure 2B). Intriguingly, C3 and C6, which were associated with better outcomes, presented a higher proportion in LMS1. Accordingly, the immune score, stromal score and ESTIMATE score were also calculated using the ESTIMATE algorithm ( Figure 2D). Remarkably, LMS1 demonstrated significantly higher TIME scores and better prognosis than LMS2, implying that favorable immune components and immune-related molecules were abundant in LMS1.
Next, the relationship between LMSs and 34 infiltrated immune cell types was further explored. In concordance with the TIME scores, LMS1 harbored more infiltrated DCs, B cells, CD4+ Tem, macrophages, monocytes and NKT cells than LMS2 ( Figure 2E), indicating the significant impact of these immune cells on the progression of RPLS. However, the complete immune response involves a close combination of multiple events, not only the infiltrated immune cells (4). Thereafter, we further calculated and compared the immune activity score of each step through TIP analysis. Similarly, the abundance of anti-tumor immune cells was significantly higher in LMS1 than in LMS2, as was the activity in Step 1, Step 3 and Step 5 (Supplementary Figure 2C). In addition, we explored substantial differences in the TLS-associated 12-chemokine signature between LMS1 and LMS2 ( Figure 2F), and the expression of the lymphoid-structure-associated B-cell-specific chemokine CXCL13 was notably higher in LMS1 ( Figure 2G). Accumulating evidence indicates that tumors with high TMB levels predict better efficacy of immunotherapy (31). We then estimated TMB in both LMS1 and LMS2, but no significant difference was found (Supplementary Figure 2D). Intriguingly, patients in LMS1 with low TMB demonstrated a satisfactory survival benefit. Considering the importance of immune checkpoint inhibitors in the treatment of solid tumors (3), we further examined the differences in immune checkpoint profiles and found notably substantial differences in CD28, CD40, CD86, HAVCR2 and PD-1 between the two subtypes (Supplementary Figure 2E). Immunogenic cell death (ICD) has been classified as a form of regulated cell death (RCD) that is sufficient to activate an adaptive immune response (32). We next identified ICD-related genes and analyzed their expression patterns. Importantly, we discovered that significantly higher expression of FPR1, TLR4 and CXCL10 was enriched in LMS1 (Supplementary Figure 2F).
Taken together, these findings demonstrated the unique characteristics of TIME within the two LMSs, offering a conducive complement to previous studies.
Identification of immune gene co-expression modules and immune hub genes of RPLS
The immune gene co-expression module was designed and applied to classify immune-related genes, whose expression may significantly influence the effectiveness of potential targets (33). Therefore, we re-analyzed and enrolled immune-related genes to establish gene modules through WGCNA ( Figure 3A). The soft threshold was set at four in the scale-free network (Supplementary Figure 3A). The expression matrix was converted to an adjacency matrix and then to a topological matrix. The average-linkage hierarchical clustering approach was applied with a minimum of 30 genes for each network according to the standard of a hybrid dynamic shear tree. Eigengenes of each module were calculated, and close modules were merged (height = 0.25, deep split = 3 and min module size = 30). Notably, six gene modules were identified ( Figures 3B, C), and their correlation features were also visualized (Supplementary Figure 3B). In addition, the module eigengenes in LMS1 were significantly higher in the yellow and blue modules ( Figure 3D). Moreover, prognostic analysis indicated that the eigengene of the brown module was significantly associated with OS in RPLS ( Figure 3E).
Further functional enrichment analysis illustrated that genes involved in cytokine-cytokine receptor interaction and T cell receptor signaling were enriched in the brown module ( Figure 3F). The brown module (361 immune-related genes) was further selected, and three of its genes (PLCG1, ZC3HAV1L and NFAT5) were filtered to build the risk score through the LASSO algorithm (Supplementary Figures 3C, D). Patients were classified into high-risk and low-risk groups, and the high-risk group predicted unfavorable OS (P = 0.02) ( Figure 3G). The area under the receiver operating characteristic curve (AUC) was 0.75, indicating good accuracy of the model (Supplementary Figure 3E). Taken together, this risk model may serve as a novel tool for prognosis prediction in RPLS, based on the immunogenic gene co-expression network.
Development of survival and relapse risk models and nomograms based on LMAGs
Given the significant biological roles of LMAGs in lipid metabolic reprogramming, the association between the LMAG-related risk score and prognosis needed thorough study. Thus, two prognostic models were constructed, for OS and DFS, respectively.
24 LMAGs were found to be considerably linked to the OS of patients through LASSO regression analysis ( Figures 4A, B), 13 of which were tested and selected for the risk score model by multivariate Cox analysis (Supplementary Figure 4A). We also investigated the relationship between risk score and survival status: the low-risk subgroup harbored significantly more surviving patients ( Figure 4C) and better OS than the high-risk subgroup ( Figure 4D). Specifically, this OS-related model indicated great accuracy, with AUC values of 0.94 at 1 year, 0.97 at 3 years and 0.97 at 5 years ( Figure 4E).
To evaluate and validate the universality of this LMAG-related prognostic model from cohort-TCGA, an independent dataset (cohort-FD) was used as a validation cohort. The risk score of each case in cohort-FD was calculated using the same formula as that for cohort-TCGA. Similarly, patients in the high-risk group suffered worse OS than those in the low-risk group (Supplementary Figure 4B). In addition, the AUC values of this model according to ROC analysis were 0.7 at 1 year, 0.8 at 3 years, and 0.85 at 5 years (Supplementary Figure 4C).
Since this model showed great potency for clinical prognosis based on LMAGs, we further investigated the correlation between the risk score and TIME in cohort-TCGA. In concordance with previous findings, the high-risk subgroup was characterized by a significantly lower immune score ( Figure 4F) and infiltration of fewer CD8+ T cells and plasma cells, but more resting memory CD4+ T cells and Tregs, than the low-risk subgroup (Supplementary Figure 4D), implying that the high-risk subgroup had an immunosuppressive status.
Meanwhile, we also explored the value of the model between the two subgroups stratified by different clinical features. Univariate Cox analysis indicated that patients with dismal OS were characterized by larger tumor size, poor chemotherapeutic efficacy and high risk score (Table 1). Multivariate Cox analysis further confirmed that all of them were independent risk factors (Table 1). Subsequently, we developed a nomogram for OS prediction using these two clinical parameters and the LMAG-based risk scores. A calibration plot for internal validation of the nomogram presented excellent consistency between the nomogram-predicted probability and actual observations of the 1-, 3-, and 5-year OS ( Figure 4G).
Given the significantly high local recurrence rate in the clinical treatment of RPLS (34), 21 LMAGs were also found to be considerably linked to the DFS of patients through LASSO regression analysis (Supplementary Figure 4E), 2 of which were tested and selected for the prediction model by multivariate Cox analysis (Supplementary Figure 4F). The association between risk score and recurrence status was next evaluated: the low-risk subgroup harbored significantly more recurrence-free cases (Supplementary Figure 4G) and better DFS than the high-risk subgroup (Supplementary Figure 4H). Similarly, this DFS-related model also demonstrated good predictive accuracy (Supplementary Figure 4I). In addition, univariate Cox analysis indicated that patients with worse DFS were characterized by tumor residue, poor chemotherapeutic efficacy and high risk score (Table 2). Multivariate Cox analysis further confirmed that dismal chemotherapeutic efficacy and high risk score were independent risk factors (Table 2). Furthermore, a nomogram for DFS prediction was constructed with excellent consistency (Supplementary Figure 4J).
Identification of lipid metabolism-associated targets
To explore key genes that could function as potential candidates for RPLS, we further systematically screened the 135 LMAGs and identified two candidates (NR1H4 and ELOVL2) with both gene amplification and mutation that were also associated with both OS and DFS (Figure 5A). Given the essential role of antigen-presenting cells (APCs) in immunological reactions, we also evaluated the relationship of these two genes with APCs using TIMER analysis (35). Intriguingly, ELOVL2, but not NR1H4, was closely related to APCs (Pearson correlation coefficient > 0.3; Figure 5B; Supplementary Figure 5A), and could therefore serve as a potential target that triggers a strong immune response. In addition, similar results were found in TCGA-SARC (Supplementary Figure 5B). Notably, survival analysis demonstrated that high mRNA expression of ELOVL2 was associated with unfavorable OS and DFS (Figures 5C, D), suggesting that ELOVL2 is important in RPLS development and progression. In concordance with cohort-TCGA, the mRNA expression of ELOVL2 showed similar prognostic efficiency in both cohort-FD and cohort-GSE30929 (Supplementary Figures 5C, D). Dedifferentiated liposarcoma (DDLPS) often progresses from primary or recurrent well-differentiated liposarcoma (WDLPS), which constitutes the most common pathological type of RPLS (36). Thus, we also investigated the heterogeneous expression of ELOVL2 in WDLPS and DDLPS. Interestingly, the expression of ELOVL2 was significantly higher in LMS2 than in LMS1 (Figure 5E). Specifically, ELOVL2 exhibited a potential enrichment in DDLPS compared with WDLPS in both cohort-FD and cohort-GSE30929 (Supplementary Figures 5E, F), indicating a significant impact of ELOVL2 on the dedifferentiation evolution of RPLS.
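The TIMER-style screen above keeps genes whose expression correlates with APC infiltration at a Pearson correlation coefficient above 0.3. A minimal pure-Python sketch of that filter, on toy vectors rather than the TCGA data:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def apc_related(gene_expr, apc_infiltration, threshold=0.3):
    """Keep genes whose correlation with APC infiltration exceeds the cutoff."""
    return [g for g, v in gene_expr.items()
            if pearson(v, apc_infiltration) > threshold]

# Hypothetical APC infiltration scores and expression across five samples
apc = [0.1, 0.3, 0.2, 0.6, 0.8]
expr = {
    "GENE_UP":   [1.0, 1.4, 1.1, 2.0, 2.5],   # tracks APC infiltration
    "GENE_FLAT": [1.0, 0.9, 1.1, 1.0, 0.9],   # roughly uncorrelated
}
hits = apc_related(expr, apc)
```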
To further identify the association between immune status and ELOVL2 expression, a series of TIME profiles was also analyzed. Accordingly, immune score, stromal score, ESTIMATE score (Figure 5F), infiltrating immune cells and the TLS signature (Supplementary Figures 6A, B) were all calculated. As expected, we discovered a significant difference in immune status between the ELOVL2-high and ELOVL2-low subgroups, suggesting that RPLS patients with high expression of ELOVL2 might tend toward an immune-desert status. In addition, GO enrichment analysis indicated that high expression of ELOVL2 was involved in immune system processes and cell surface receptor signaling. Similarly, KEGG enrichment analysis highlighted cytokine-cytokine receptor interaction and the cytokine signaling pathway (Figures 5G, H).
Accumulating evidence indicates that increased DNA methylation of ELOVL2 leads to decreased protein expression and polyunsaturated fatty acid synthesis, but to an accumulation of short-chain fatty acids, which is closely related to aging (37). Thus, we investigated the genomic and epigenomic landscape of ELOVL2 in cohort-TCGA (Supplementary Figure 6C), and 13 ELOVL2-related CpG sites were identified. However, only cg20462795 exhibited significant survival patterns (Supplementary Figure 6D), suggesting that an ELOVL2-associated epigenetic-metabolic axis could be a novel therapeutic target in RPLS.
Transcription factors (TFs) are among the most common regulators of gene expression (38). We therefore systematically screened and identified three ELOVL2-related TFs (TFDP1, TP73 and MYBL2) in the TCGA-SARC cohort (Supplementary Figure 6E). Interestingly, high expression of each was significantly associated with decreased OS (Supplementary Figure 6F). Consistent with the survival predictions in our cohorts, ELOVL2 played an essential role in tumor progression and even dedifferentiation, and served as the only lipid metabolism-related target biomarker. Moreover, PLCG1, ZC3HAV1L and NFAT5 were selected as immune hub genes through immune gene co-expression module analysis and predicted great prognostic efficacy. Therefore, we re-analyzed the association between ELOVL2 and the three hub genes. First, we validated these hub genes in cohort-FD and cohort-GSE30929 through survival analysis. Interestingly, the expression of PLCG1 was positively associated with ELOVL2 in all cohorts (Figure 6A). In addition, PLCG1 was significantly higher in LMS2 than in LMS1, and exhibited similar survival patterns in all cohorts (Figure 6B). Next, a combined prognostic analysis of ELOVL2 and PLCG1 was performed. Patient stratification into three groups showed that Group I was associated with favorable prognosis, Group III with dismal prognosis, and Group II with intermediate prognosis (Supplementary Figure 7A). Subsequently, we developed another prognostic model for OS prediction based on ELOVL2 and PLCG1, which showed great prognostic efficacy (Figure 6C).
To systematically comprehend the extent of the relationship between ELOVL2 and PLCG1 within the TIME, we re-analyzed the expression profiles in cohort-TCGA. Based on comparative analysis, we observed a remarkable positive correlation between ELOVL2 and PLCG1. In addition, the infiltration of T cells, monocytes, dendritic cells, MDSCs and M2-TAMs was significantly lower with high expression of ELOVL2 and PLCG1. Moreover, the chemokine enrichment of CCL3, CCL4, CCL5 and CCL18, as well as PDCD1LG2 and HAVCR2, was significantly negatively associated with high expression of ELOVL2 and PLCG1 (Figure 6D). These results demonstrate that high expression of ELOVL2 and PLCG1 in the TIME of RPLS is associated with an immune-excluded phenotype.
To fully characterize the specific cellular localization of ELOVL2 and PLCG1 in RPLS, we first evaluated them in the Human Protein Atlas and further validated them by in situ single-cell spatial phenotype analysis through single-cell RNA sequencing and immunohistochemistry (IHC) in cohort-FD. Interestingly, ELOVL2 appeared to be enriched in fibroblasts (no images in database), while PLCG1 appeared in T cells (Supplementary Figure 7B). At the single-cell level, ELOVL2 was expressed in tumor cells, cancer-associated fibroblasts (CAFs) and smooth muscle cells (SMCs), while PLCG1 was present in CD4+ and CD8+ T cells, as expected (Figure 6E; Supplementary Figure 7C). According to the analysis of bulk RNA-seq data, cases 1 and 2 were defined as LMS2 and cases 3 and 4 as LMS1, respectively. Next, mIHC was performed to validate the previous findings in these four DDLPS samples. Consistent with the RNA-seq data, CD3+ T cells, CD20+ B cells, CD11b+ DCs and CD68+ macrophages presented higher infiltration in cases 3 and 4, but fewer ELOVL2+ cells. On the contrary, fewer CD3+ T cells, CD20+ B cells and CD68+ macrophages, but more ELOVL2+ cells, were found in cases 1 and 2 (Figure 6F; Supplementary Figure 7E). Moreover, the densities of CD3+ and cytotoxic CD8+ T cells in the tumor area and invasive margin were quantified by IHC. We observed that the density of CD3+ and cytotoxic CD8+ T cells was significantly and negatively associated with ELOVL2+ cells (Supplementary Figure 7D). In concordance with previous findings, the mRNA expression levels of CD3 and CD8 were negatively associated with ELOVL2 (Supplementary Figure 7F). Taken together, the dysregulation of lipid metabolism might remodel the TIME into an immune desert, further supporting its crucial role in the evolution of RPLS.
Discussion
RPLS is one of the most aggressive malignancies, with a heterogeneous molecular profile, lipid metabolism dysregulation, limited medical efficacy and a high local recurrence rate (39). The combination of doxorubicin and ifosfamide is the first-line option for treating advanced RPLS, with limited clinical benefits and a median OS of just 8 to 14 months (40). Although immunotherapy has revolutionized oncology from the therapeutic point of view, its effectiveness in RPLS remains limited (41).
To the best of our knowledge, this is the first study aiming to systematically screen lipid metabolism-related targets for RPLS. We profiled somatic mutations and amplifications in LMAGs, revealing potential targets that might be considered in RPLS. Considering that targets selected from the gene alteration profile alone might not be functionally significant, prognostic efficiency for OS and DFS and immune correlations were further evaluated to confirm the clinical relevance of the targets. ELOVL2, which correlated significantly with prognosis and APC infiltration, was thereby identified. Notably, ELOVL2 has been identified as a unique tissue-independent age-associated DNA methylation marker (42) and as a specific superenhancer-associated gene implicating the LC-PUFA synthesis network as a critical metabolic dependency (43), and it has been associated with worsened patient survival in glioblastoma (44). In addition, ELOVL2 deficiency was linked to increased infiltration of TH1/TH17 cells and a decrease of Tregs (45). Thus, although further basic and clinical investigation is required, the potential of ELOVL2 as a successful target in RPLS is consolidated.
Lipid metabolism and synthesis depend on the normal function of the endoplasmic reticulum (ER). Previous results indicated that the accumulation of free fatty acids in the ER eventually leads to abnormal protein overloading and chronic ER stress (46). Meanwhile, the reduction of PUFA synthesis upon ELOVL2 decrease can affect cellular metabolic homeostasis and mitochondrial energy metabolism (47). Unexpectedly, we found significant activation of glucose metabolism pathways with ELOVL2 deficiency, such as gluconeogenesis, the pentose phosphate pathway, pentose and glucuronate interconversions, and fructose and mannose metabolism. This finding indicates that ELOVL2 deficiency may induce a metabolic switch from the tricarboxylic acid cycle to glycolysis, which eventually produces more reactive oxygen species (ROS) and causes oxidative stress. In addition, more mannose-type O-glycan biosynthesis and lysine degradation were found upon ELOVL2 overexpression. Consistent with its role as the key regulator of PUFA synthesis, more fatty acid elongation but less arachidonic acid metabolism was found in the ELOVL2-high group, in line with recently published data (48). However, the relationship between the abnormal expression phenotype of ELOVL2 and ER stress, mitochondrial dysfunction and cellular senescence still needs further study.
Lipid metabolism reprogramming is a critical marker of tumorigenesis and development. A previous study indicated that a large number of lipid-related genes undergo copy number amplification during the malignant development of RPLS (49). Therefore, it could be inferred that the amplification and high expression of the lipid-related gene ELOVL2 is associated with malignant progression of RPLS and dismal prognosis. However, ELOVL2 displays a heterogeneous prognostic role in breast cancer (BC) (50) and glioma (44) (Supplementary Figure 8). In BC, ELOVL2 was hypermethylated and downregulated in samples from tamoxifen-resistant BC patients compared with those from tamoxifen-sensitive patients. Strikingly, in addition to having tumor suppressor activity, ELOVL2 was shown to recover tamoxifen sensitivity up to 70% in MCF-7/tamoxifen-resistant cells and in a xenograft mouse model (50). Furthermore, the depletion of ELOVL2 induced metastatic characteristics in BC cells via the SREBP axis (51). In contrast, ELOVL2 depletion altered the phospholipid composition of the cell membrane by controlling fatty acid elongation, disrupting the structural characteristics of the cell membrane and reducing EGFR signaling in glioblastoma cells (44). Taken together, we believe that abnormal lipid metabolism, whether abnormal activation or inhibition, can interfere with lipid metabolism reprogramming through different signaling pathways in different types of cancer.
Since tumor immune status is another determinant of cancer treatment efficacy, we further characterized the immune landscape of the different ELOVL2 subtypes. Interestingly, the ELOVL2-high group is defined by an immunological "cold" phenotype. In addition, the molecular signatures of this subtype are consistent with its immune status, indicating that patients with different immune signatures may respond distinctly to immune therapy. To circumvent the poor immunogenicity of this subtype, targeting immune-associated biomarkers that reinvigorate the immune system by facilitating immune cell infiltration may be a suitable option.
We demonstrated that RPLS patients can be stratified into two LMSs with significant differences in molecular features and clinical prognosis. Patients with LMS1 tumors had an immune "hot" phenotype, whereas those with LMS2 tumors had an immune "cold" phenotype. Therefore, the lipid metabolism-associated molecular subtypes and the risk model based on LMAGs are promising and complement the previous classification of RPLS.
With the worldwide prevalence of COVID-19, the mRNA vaccine has highlighted its important strategic position, which has greatly accelerated the development of mRNA vaccines, including mRNA cancer vaccines (52). mRNA cancer vaccines represent a promising novel method to treat malignancies as monotherapy or combination therapy (53). ELOVL2 is a potential target linking lipid metabolism to immune regulation in RPLS, specifically for patients with LMS2 tumors. However, given the function of ELOVL2 in lipid and cellular homeostasis in normal cells, there is likely profound central tolerance against ELOVL2. Instead, ELOVL2 may be a target for small-molecule inhibition in RPLS.
In summary, our study identified ELOVL2 as a potential effective target for RPLS, and patients with LMS2 tumors, which display an immune-excluded phenotype, might benefit more from small-molecule inhibitors targeting ELOVL2.
Data availability statement
The transcriptome data presented in the study are deposited in the Sequence Read Archive (SRA) repository, accession number PRJNA987378.
Ethics statement
This study was reviewed and approved by the Institutional Review Board of Zhongshan Hospital (ID: B2022-586R), Shanghai, China. Informed consent was obtained from all study participants. All studies were performed in accordance with the Declaration of Helsinki.
Short-Term Exposure of Multipotent Stromal Cells to Low Oxygen Increases Their Expression of CX3CR1 and CXCR4 and Their Engraftment In Vivo
The ability of stem/progenitor cells to migrate and engraft into host tissues is key to their potential use in gene and cell therapy. Among the cells of interest are the adherent cells from bone marrow, referred to as mesenchymal stem cells or multipotent stromal cells (MSCs). Since the bone marrow environment is hypoxic, with oxygen tensions ranging from 1% to 7%, we decided to test whether hypoxia can upregulate chemokine receptors and enhance the ability of human MSCs to engraft in vivo. Short-term exposure of MSCs to 1% oxygen increased expression of the chemokine receptors CX3CR1 and CXCR4, both as mRNA and as protein. After a 1-day exposure to low oxygen, MSCs showed increased in vitro migration in response to fractalkine and SDF-1α in a dose-dependent manner. Blocking antibodies to the chemokine receptors significantly decreased the migration. Xenotypic grafting into early chick embryos demonstrated that cells from hypoxic cultures engrafted more efficiently than cells from normoxic cultures and generated a variety of cell types in host tissues. The results suggest that short-term culture of MSCs under hypoxic conditions may provide a general method of enhancing their engraftment in vivo into a variety of tissues.
INTRODUCTION
Bone marrow contains several subpopulations of stem/progenitor cells that are capable of differentiating into various nonhematopoietic cells [1-4]. Among the best studied subpopulations are the cells that are isolated by their adherence to tissue culture surfaces and are referred to as mesenchymal stem cells or multipotent stromal cells (MSCs) [1,2,4]. MSCs have emerged as a promising tool for clinical applications such as tissue engineering and cell-based therapy, because they are readily isolated from a patient, can be expanded in culture, and have a limited tendency to form tumors. In addition, the cells tend to home to sites of tissue growth and repair and to enhance tissue regeneration. Homing and engraftment of the cells is readily detected in rapidly growing embryos, including mouse [5], chick [6] and sheep [7], and following tissue injury, such as ischemic damage to heart [8,9] and brain [10]. However, various studies have shown that the degree of engraftment of MSCs in naive adult animals is very low [11].
Several attempts are currently being made to enhance the engraftment of stem/progenitor cells in vivo. Exogenously delivered or endogenously produced stromal cell-derived factor-1α (SDF-1α) plays a crucial role in the recruitment of endothelial progenitor cells, bone marrow-derived stem cells, or embryonic stem cells to ischemic tissues such as heart and brain [8,12-14]. Engraftment of hematopoietic stem cells (HSCs) was also recently improved by either overexpression of the chemokine receptor CXCR4 or by an inhibitor of CD26, a protease that cleaves the NH2-terminus of CXCL12 (SDF-1α), the ligand for CXCR4 [15,16]. Since bone marrow is hypoxic, we tested the possibility that short-term exposure of human MSCs to hypoxic conditions may increase their engraftment in vivo.
Effects of hypoxia on apoptosis and subsequent expansion of MSCs
We first determined whether exposure of MSCs to hypoxia increased apoptosis or limited their proliferative capacity under normoxic conditions. Assay of cultures with a dye that detects membrane alterations (phosphatidylserine flip) [17] did not reveal an increase in apoptosis after exposure of MSCs in CCM to 1% oxygen for 2 days (Figure 1A). In contrast, apoptosis was readily detected in control cultures that were incubated in serum-free medium for 2 days. With cells plated at 50 cells/cm2, MSCs grown under hypoxic conditions expanded 148-fold in 10 days, whereas control cells grown under normoxic conditions expanded 535-fold (Figure 1B, C). With cells plated at 1,000 cells/cm2, hypoxic MSCs expanded 29-fold in 10 days, whereas control cells expanded 35-fold (p<0.01; n = 3). In addition, after plating at 1.5 cells/cm2 to assay colony forming units, the hypoxic MSCs formed fewer single-cell derived colonies (p<0.01; n = 3) than controls (Figure 1D). Therefore, exposure to hypoxia decreased both the rate of proliferation and the colony forming capacity of the MSCs.
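Fold expansions like these translate directly into population doublings (doublings = log2 of the fold expansion) and average doubling times. A quick check using the 10-day, 50 cells/cm2 numbers from this experiment:

```python
import math

def doublings(fold_expansion):
    """Number of population doublings implied by a fold expansion."""
    return math.log2(fold_expansion)

def doubling_time_h(fold_expansion, days):
    """Average population doubling time in hours."""
    return days * 24 / doublings(fold_expansion)

# 10-day expansions at 50 cells/cm2 reported in the text
hypoxic = doublings(148)               # ~7.2 doublings
normoxic = doublings(535)              # ~9.1 doublings
t_hypoxic = doubling_time_h(148, 10)   # ~33 h per doubling
t_normoxic = doubling_time_h(535, 10)  # ~27 h per doubling
```

So the hypoxic cultures lag by roughly two population doublings over the 10-day period.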
Exposure to hypoxia decreases the capacity of MSCs to differentiate
To examine the effects of hypoxia on differentiation, MSCs were plated at 1.5 cells/cm2 in a 60 cm2 dish and cultured in CCM under normoxic or hypoxic conditions for 7 days so that they generated small single-cell derived colonies. The cells were then continuously cultured under normoxic or hypoxic conditions and induced in osteogenic or adipogenic media for an additional 7 to 21 days. Hypoxia decreased both osteogenic and adipogenic differentiation, as shown by the decrease in the numbers of Alizarin Red-S and Oil Red-O positive colonies (Figure 2). The inhibition of osteogenic and adipogenic differentiation by hypoxia was also observed when MSCs were plated at a high density of 10,000 cells/cm2 and induced to undergo differentiation.
Reoxygenation reverses the effects of hypoxia on proliferation and differentiation
To examine the effects of reoxygenation, cells were first plated at 50 cells/cm2 and incubated under either normoxic or hypoxic conditions for 8 days. The cells were then replated at varying densities and assayed for rates of proliferation and differentiation under normoxic conditions. Over a 3 to 10 day period, the hypoxia-exposed cells proliferated at the same rate as controls (Figure S1A-C). Cells transferred to osteogenic medium after return to normoxic conditions differentiated into osteoblasts at about the same rate as controls, both in high density and low density cultures. Likewise, cells transferred to adipogenic medium after return to normoxic conditions differentiated into adipocytes at about the same rate as controls (Supplementary Figure 1D).
Short-term exposure of MSCs to hypoxia increases expression of chemokine receptors
A recent report demonstrated that chemokine receptors expressed on MSCs mediate their migration to tissues [18]. Therefore, we harvested passage 2 or 3 MSCs directly from frozen vials, plated them at 1,000 cells/cm2, incubated them under hypoxic or normoxic conditions for 4 to 20 h, and assayed extracted mRNAs by RT-PCR for CX3CR1 and CXCR4, the receptors for fractalkine and SDF-1α. Under normoxic conditions, the cells continued to express CX3CR1 but the levels of CXCR4 decreased by 20 h (Figure 3A). Under hypoxic conditions, the expression of CX3CR1 increased with time, whereas expression of CXCR4 was unchanged (Figure 3A). However, the levels of both CX3CR1 and CXCR4 increased in a dose-dependent manner in the presence of the iron chelator desferrioxamine (DFX), which directly inhibits prolyl hydroxylases with concomitant HIF-1α stabilization (Figure 3B). These data were reproduced by quantitative RT-PCR (Figure 3C), and further RT-PCR assays indicated that both normoxic and hypoxic cells expressed about the same levels of other CC, CXC, CX3C, and XC chemokine receptors (not shown).
The increased expression of CX3CR1 and CXCR4 by hypoxic MSCs was also confirmed at the protein level by flow cytometry (Figure 3D) and western blots (Figure 3E). As shown in Figures 3C and 3D, after culture in hypoxic conditions for 20-24 h, MSCs contained higher levels of CXCR4 and CX3CR1 than control cells, and DFX treatment, either in hypoxic or normoxic conditions, achieved much higher levels. For reasons that were not apparent, DFX had a slightly greater effect on cells incubated under normoxic than hypoxic conditions (Figures 3D and 3E). We also detected an increase in the protein level of hypoxia-inducible factor-1α (HIF-1α) in hypoxic or DFX-treated MSCs (Figure 3E); however, no difference could be detected in mRNA levels (not shown). These data support previous reports that HIF-1α is regulated at the protein level, not at the mRNA level [19].
Since induction of CXCR4 by hypoxia has previously been shown to be driven by HIF-1α [19,20], we explored whether HIF-1α also drives expression of CX3CR1. To obtain direct evidence for an interaction between HIF-1α and the CX3CR1 promoter, we used the ChIP assay to measure HIF-1α recruitment to the CX3CR1 promoter. Although no interaction between HIF-1α and the CX3CR1 promoter was observed under normoxic conditions, recruitment of HIF-1α to the CX3CR1 promoter was clearly detected after 4 h of culture under hypoxic conditions (Figure 3F). This result is consistent with the induction of CX3CR1 expression under hypoxic conditions and in DFX-treated normoxic conditions (Figure 3A-D). Overall, these data demonstrate the involvement of HIF-1α in the induction of the CX3CR1 promoter.
Increased migration of MSCs in response to chemokines after short-term exposure to hypoxia
To investigate the effect of hypoxia on cell migration, we used a modified Boyden chamber method to examine the migration of MSCs over a 14 h period after the cells were cultured under hypoxic conditions (Figure 4A). Exposure of MSCs to hypoxic conditions for 1 day significantly increased their migration in the absence of chemokines (Figure 4B). The hypoxic cells also migrated more rapidly in response to the chemokines SDF-1α (CXCL12, the ligand of CXCR4) and fractalkine (CX3CL1, the ligand of CX3CR1) in a dose-dependent manner. The blocking antibodies anti-CXCR4 and anti-CX3CL1 significantly decreased the chemotactic effects of the chemokines (Figure 4B).
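Transwell results like these are commonly summarized as a chemotactic index: migrated cells in each chemokine condition divided by migration toward medium alone. A minimal sketch of that normalization, with invented cell counts rather than the data in Figure 4B:

```python
def chemotactic_index(counts, baseline_key="none"):
    """Migration in each condition normalized to the no-chemokine control."""
    base = counts[baseline_key]
    return {k: v / base for k, v in counts.items() if k != baseline_key}

# Hypothetical migrated-cell counts per field for hypoxia-exposed MSCs;
# the antibody condition illustrates receptor blocking, not measured values
migrated = {
    "none": 20,
    "SDF-1a 10ng/ml": 45,
    "SDF-1a 100ng/ml": 80,
    "SDF-1a 100ng/ml + anti-CXCR4": 28,
}
ci = chemotactic_index(migrated)
```

An index above 1 indicates net chemoattraction; a blocking antibody should pull the index back toward 1.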
Increased engraftment of MSCs into chick embryo after short-term exposure to hypoxia
To test the ability of MSCs to engraft in vivo, MSCs or PBS were infused into day-2 chick embryos. Evaluation of engraftment was first carried out by real-time PCR assays detecting a human-specific fragment of the α-satellite DNA on human chromosome 17 in chick embryos [21]. Seventeen of 28 embryos tested were positive for human chromosome 17. The degree of chimerism with hypoxia-exposed MSCs was slightly higher (mean of 89/10,000 cells with a range of 1.5 to 600) than with normoxic MSCs (mean of 30/10,000 cells with a range of 0.9 to 170), but the results were not statistically significant because of the highly variable success rate of the microsurgery in the embryos. To overcome the variability, we used a competitive engraftment assay in which hypoxic MSCs and normoxic MSCs were labeled with CMFDA and CMTMR, respectively. To test the suitability of this assay, a 1:1 mixture of CMFDA-labeled and CMTMR-labeled MSCs was incubated under normal expansion conditions for 3 days. Labeling the cells with both vital dyes did not block cell proliferation, and the dyes remained brightly visible after the fixation procedures (Figure S2). After mixtures of equal numbers of both cells were infused into the embryos, greater numbers of the CMFDA-labeled hypoxic cells were found in the tissues of the day-5 chick embryos, including heart, brain, liver and spinal cord (Figure 5A, B).
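The two readouts above reduce to simple ratios: chimerism from the chromosome 17 qPCR assay is human cells per 10,000 total cells, and the competitive assay is the per-tissue ratio of CMFDA-labeled (hypoxic) to CMTMR-labeled (normoxic) cells. A sketch of both calculations with illustrative counts only:

```python
def chimerism_per_10k(human_cells, total_cells):
    """Human cells per 10,000 total cells, as in the chr17 qPCR assay."""
    return 10_000 * human_cells / total_cells

def competitive_ratio(green_counts, orange_counts):
    """Per-tissue ratio of CMFDA (hypoxic) to CMTMR (normoxic) cells."""
    return {t: green_counts[t] / orange_counts[t] for t in green_counts}

# Hypothetical per-tissue counts from sections of one embryo
green = {"heart": 60, "brain": 24, "liver": 15}
orange = {"heart": 20, "brain": 12, "liver": 5}
ratios = competitive_ratio(green, orange)
```

A ratio above 1 in every tissue, as reported here, indicates an engraftment advantage for the hypoxia-exposed cells that is independent of embryo-to-embryo variation in injection success.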
Differentiation of MSCs into heart and brain tissues of chick embryos without cell fusion
To examine the differentiation potential of hypoxia-exposed human MSCs, GFP+ MSCs transduced with a lentiviral vector expressing GFP [22] and enriched by FACS (more than 95% positive for GFP) were infused into day-2 chick embryos. In histological sections of the embryos harvested 3 days later, GFP+ cells were detected in multiple developing organs, including heart, liver, brain, and spinal cord. The most common sites of appearance of the GFP+ cells were the heart and spinal cord, apparently because the cells were infused into somites that overlie the dorsal aorta. Immunostaining of sections of heart demonstrated that some of the GFP+ cells expressed cardiotin, a protein found in the longitudinal sarcoplasmic reticulum of mature cardiomyocytes (Figure 6A). In addition, some stained for the cardiac-specific protein α-myosin heavy chain (Figure 6A). Some GFP+ cells in the subventricular zone of the brain expressed the neuronal marker neurofilament H (NF-H) (Figure 6A). Apparently because of the early stage of development, no astrocyte-lineage differentiation was found by staining with GFAP. To assess whether cell fusion could contribute to the lineage differentiation of GFP+ MSCs, we compared the expression of GFP with that of the chicken-specific marker 8F3 using immunofluorescence (Figure 6B). We observed a complete segregation of human and chicken cells at 3 days after implantation; no cell expressed both markers. We also searched for the presence of double nuclei using fluorescent nuclear stains under the microscope. Of 3,120 human cells in 12 embryos, none contained more than one nucleus. In most of the human cells, the human nucleus was readily identified morphologically because of its larger size (Figure 6B). These data were consistent with our previous observations that adult stem cells from rat marrow engrafted and partially differentiated into heart and brain tissues without evidence of cell fusion [6].
DISCUSSION
In this report, we demonstrated that MSCs from human bone marrow can be expanded in vitro under hypoxic conditions. We used different plating densities and assays on single-cell derived colonies to evaluate the effects of hypoxia on the proliferation and differentiation capacity of MSCs. We found that MSCs incubated under hypoxia had decreased rates of proliferation and decreased capacities for both osteogenic and adipogenic differentiation. However, there was no increase in apoptosis under the conditions employed. The results were consistent with some previous reports [23], but not with others [24,25]. The discrepancies may in part be explained by variation in the oxygen tension and the duration of hypoxic culture used in each study, and in part by variation in the systems used to control the oxygen level. To minimize fluctuations in oxygen levels during long-term incubation, we used an incubator equipped with two gas sensors, one for CO2 and the other for O2, and the O2 concentration was controlled by delivery of N2 generated from a liquid N2 tank.
Recent reports established that the chemokine receptors CXCR4 and CX3CR1 are important for mediating specific migration of MSCs to bone marrow or damaged tissues [26,27]. Also, previous reports established that CXCR4 is up-regulated by hypoxia in monocytes, endothelial cells, and cancer cells [20] through the stabilization and activation of the HIF-1α protein [19]. We found that both CXCR4 and CX3CR1 are upregulated by exposure of MSCs to hypoxia or to a reagent that mimics the response to hypoxia. We also demonstrated, for the first time, that the upregulation of CX3CR1 is dependent on HIF-1α. The upregulation of CXCR4 and CX3CR1 probably explains the enhanced migration of hypoxia-exposed MSCs in a modified Boyden chamber in response to SDF-1α and fractalkine, and their enhanced in vivo engraftment.
Xenotypic grafting into the chick embryo has been a powerful tool in studies of the engraftment and in vivo differentiation potential of progenitor or stem cells. Recent reports have shown that human embryonic stem cells, rat mesenchymal stem cells, mouse neural stem cells, and human haematopoietic stem cells can integrate into the chick embryo and differentiate into various cell types with no apparent fusion to the host chicken cells [6,28-30]. The data presented here and previously [6] demonstrate that after MSCs were injected into early stage chick embryos, about two-thirds of the surviving embryos harvested 3 days later contained human cells in multiple tissues. The average percentage of chimerism and the number of dye-labeled human cells were about three-fold greater with hypoxia-exposed than with normoxic MSCs. Therefore, short-term exposure of MSCs to the hypoxic conditions normally found in marrow may prove a simple means of enhancing engraftment of the cells in vivo.
Culture of Human MSCs
Frozen vials of passage 2 to 4 of three extensively characterized human MSC preparations [31] were obtained from the Tulane Center for Preparation and Distribution of Adult Stem Cells (http://www.som.tulane.edu/gene_therapy/distribute.shtml).
The cells (about 1 million) were plated on a 15-cm diameter plate in complete culture medium (CCM) that consisted of alpha minimal essential medium (α-MEM; GIBCO/BRL; Carlsbad, CA; http://www.invitrogen.com), 17% fetal bovine serum (FBS) lot-selected for rapid growth of MSCs (Atlanta Biologicals, Inc.; Norcross, GA; http://atlantabio.com/default.htm), 100 units/ml penicillin, 100 μg/ml streptomycin, and 2 mM L-glutamine (GIBCO/BRL). After incubation for 24 h, the viable adherent cells were recovered with 0.25% trypsin and 1 mM EDTA at 37°C for about 5 min. The cells were replated at 100 cells/cm², incubated in CCM with a change of medium every 2 to 3 days, and recovered as passage 2 cells after they reached about 80% confluency in 8 to 9 days. For hypoxic conditions, cells were cultured in a gas mixture composed of 94% N₂, 5% CO₂, and 1% O₂, or treated with medium containing 60–600 μM DFX, which mimics hypoxia by inhibiting the hydroxylation of a prolyl residue that is essential for the ubiquitination of HIF-1α [20]. To maintain the hypoxic gas mixture, we used an incubator with two air sensors, one for CO₂ and the other for O₂; the O₂ concentration was achieved and maintained by delivery of nitrogen gas (N₂) generated from a liquid nitrogen tank. If the O₂ percentage rose above the desired level, N₂ gas was automatically injected into the system to displace the excess O₂. For reoxygenation experiments, cells were first cultured at 50 cells/cm² under normoxic or hypoxic conditions, recovered at day 8, reseeded at 50 or 1,000 cells/cm², and cultured under normoxic conditions. The growth curves and differentiation potentials of cells from normal and hypoxic culture conditions were compared. The APOPercentage Apoptosis Assay (US vendor: Accurate Chemical & Scientific Corporation, Westbury, NY; www.biocolor.co.uk), which detects membrane alterations (the phosphatidylserine flip), was used to quantify apoptosis according to the manufacturer's instructions.
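The incubator's O₂ set-point control described above is a simple on/off feedback loop: N₂ is injected whenever the O₂ sensor reads above the desired level. A minimal sketch of that decision rule, with the tolerance band and all readings chosen purely for illustration (the instrument's actual parameters are not given in the text):

```python
def n2_injection_needed(o2_percent, target=1.0, tolerance=0.1):
    """Return True when measured O2 exceeds the set-point band.

    Mimics the incubator logic described in the text: N2 is injected
    to displace excess O2 whenever the sensor reads above the desired
    level (1% O2 for hypoxia). The tolerance value is hypothetical.
    """
    return o2_percent > target + tolerance

# Illustrative sensor readings: only the out-of-band one triggers injection.
readings = [0.95, 1.02, 1.35]
flags = [n2_injection_needed(r) for r in readings]
```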
MSC Differentiation
For low-density differentiation of colonies, MSCs (passage 3) were plated at 1.5 cells per cm² in a 60-cm² dish and cultured in CCM for 7 days. The medium was then replaced with osteogenic or adipogenic differentiation medium, and the cells were cultured for an additional 21 days. The osteogenic medium consisted of CCM supplemented with 10⁻⁸ M dexamethasone (Sigma), 50 μg/ml ascorbate-2-phosphate (Sigma), and 10 mM β-glycerophosphate (Sigma). The adipogenic medium consisted of CCM supplemented with 0.5 μM dexamethasone (Sigma), 0.5 μM isobutylmethylxanthine (Sigma), and 50 μM indomethacin (Sigma). The osteogenic cultures were fixed in 10% formalin for 15 min and stained with 2% Alizarin Red-S for 30 min. Plates were washed 4 times with PBS and dried, and the numbers of Alizarin Red-S-positive colonies were counted. The adipogenic cultures were fixed in 10% formalin for over 1 h and stained with fresh Oil Red-O solution for 2 h. Plates were washed three times with PBS and dried, and the numbers of Oil Red-O-positive colonies were counted. Separate osteogenic and adipogenic cultures were also stained with crystal violet, and the total numbers of cell colonies were counted. For high-density differentiation, human MSCs (passage 3) were seeded at 10,000/cm², and differentiation was induced the next day by replacing CCM with osteogenic or adipogenic medium. The cultures were washed with PBS, and the differentiation induction medium was changed every three days. Staining was done as described for the low-density cultures.
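Counting stain-positive colonies against the crystal violet total from a parallel plate yields a per-colony differentiation frequency. A small sketch of that calculation; the colony counts below are hypothetical, not data from the study:

```python
def differentiation_frequency(positive_colonies, total_colonies):
    """Fraction of clonogenic colonies that differentiated.

    positive_colonies: Alizarin Red-S (osteogenic) or Oil Red-O
    (adipogenic) positive colony count; total_colonies: crystal
    violet-stained colony count from a parallel plate.
    """
    if total_colonies == 0:
        raise ValueError("no colonies counted")
    return positive_colonies / total_colonies

# Hypothetical counts for one osteogenic plate pair:
osteo = differentiation_frequency(34, 85)  # fraction of colonies mineralized
```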
RT-PCR
Total cellular RNA was extracted (RNAqueous Total RNA Isolation Kit; Ambion; Austin, TX; http://www.ambion.com), and cDNAs were generated by reverse transcription of 1–2 μg of cellular RNA (M-MLV Reverse Transcriptase; Invitrogen; Madison, WI; http://www.invitrogen.com) according to the manufacturer's instructions. Briefly, 1–2 μg of total RNA was reverse transcribed in a 20-μl reaction mixture. The same amount of template was subjected to 25 cycles of PCR amplification using β-actin primers as an internal standard so that mRNA levels could be normalized among the samples. PCR of CX3CR1 and CXCR4 was performed for 30 cycles. Each cycle consisted of denaturing at 94°C for 30 seconds, annealing at 60–62°C for 30 seconds, and elongating at 72°C for 30 seconds, with an additional 7-minute incubation at 72°C after completion of the last cycle. PCR primers were designed from the published sequence of each cDNA as follows [26]: CX3CR1 (491 bp), sense: 5′-tccttctggtggtcatcg-3′, antisense: 5′-tgtgcattgggtccatca-3′; CXCR4 (260 bp), sense: 5′-agctgttggctgaaaaggtggtctatg-3′, antisense: 5′-gcgcttctggtggcccttggagtgtg-3′; β-actin (283 bp), sense: 5′-tcatgaagtgtgacgttgacatccgt-3′, antisense: 5′-cttagaagcatttgcggtgcacgatg-3′. The products were then separated on a 1% agarose gel in the presence of ethidium bromide, and PCR products were detected and documented using a GelDoc 1000 apparatus (Bio-Rad Laboratories, Hercules, CA). Quantitative RT-PCR was performed on an ABI 7700 real-time PCR machine using commercial primer/probe sets (CX3CR1: Hs00365842_m1; CXCR4: Hs00607978_s1) and TaqMan Universal PCR Master Mix (Applied Biosystems, Foster City, CA) according to the manufacturer's instructions.
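The text does not state how the quantitative RT-PCR data were analyzed; a standard approach for TaqMan assays with an internal control is the comparative-Ct (2^−ΔΔCt) method, sketched below with entirely hypothetical Ct values:

```python
def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Comparative-Ct (2^-ddCt) relative quantification.

    ct_target/ct_ref: Ct values for the gene of interest (e.g. CX3CR1)
    and the internal control in the treated sample; *_cal: the same
    pair in the calibrator (e.g. normoxic) sample. This is the common
    analysis method, shown for illustration; the source does not
    specify its exact calculation.
    """
    d_ct = ct_target - ct_ref
    d_ct_cal = ct_target_cal - ct_ref_cal
    return 2.0 ** (-(d_ct - d_ct_cal))

# Hypothetical Cts: the target amplifies 2 cycles earlier under hypoxia
# relative to the calibrator, i.e. roughly 4-fold induction.
fold = relative_expression(24.0, 18.0, 26.0, 18.0)
```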
Flow Cytometry Analysis
Suspensions of MSCs lifted with EDTA alone were washed and incubated for 30 min at 4°C with monoclonal antibodies to human CXCR4 or human CX3CR1, or with control mouse IgG, followed by incubation for 30 min at 4°C with FITC-conjugated, isotype-matched, affinity-purified goat anti-mouse IgG. The samples were analyzed on a Cytomics FC 500 flow cytometer (Beckman Coulter; Miami, FL; http://www.beckman.com).
Western Blotting
Nuclear fractions were prepared for Western blotting of HIF-1α, and cytoplasmic fractions were prepared for Western blotting of CX3CR1 and CXCR4. Briefly, cells were lysed in buffer (M-PER for total cell lysate or NE-PER for the nuclear fraction; Pierce Biotechnology, Rockford, IL; http://www.piercenet.com) supplemented with a protease inhibitor cocktail (Halt; Pierce Biotechnology), and the protein concentration was determined (Micro BCA Kit; Pierce Biotechnology). The cell lysate (20 μg of protein) was fractionated by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (NuPAGE, 4–12% Bis-Tris gels; Invitrogen, Carlsbad, CA). The sample was transferred to a membrane (Immobilon-P; Millipore, Bedford, MA) by electro-blotting (XCell II Blot Module; Invitrogen). The membrane was blocked for 1 h with TBS containing 5% nonfat dry milk and 0.05% Tween 20 and then incubated overnight at 4°C with the primary antibody. The membrane was washed 3 times for 10 minutes each with TBS containing 0.1% Tween 20. Bound primary antibody was detected by incubating for 1 h with horseradish peroxidase-conjugated goat anti-mouse or anti-rabbit IgG (Pharmingen; San Diego, CA; http://www.pharmingen.com). The membrane was washed and developed using a chemiluminescence assay (West Femto Detection Kit; Pierce Biotechnology).
Chromatin Immunoprecipitation (ChIP) Assay
To demonstrate the binding of HIF-1α protein to the promoter of CX3CR1, the ChIP assay was performed with a commercial kit (Upstate Biotechnology; Lake Placid, NY; www.upstatebiotech.com) using the manufacturer's protocol with minor adjustments. The MSCs were grown to confluence and incubated in air or 1% O₂ for 4 h; formaldehyde was then added directly to the culture medium to a final concentration of 1%, followed by incubation for 20 min at 37°C. The cells were washed at 4°C in PBS and lysed on ice for 10 min in lysis buffer [10 mM Tris-HCl, pH 8.0, 1% SDS] containing phosphatase and protease inhibitors. The lysates were sonicated three times for 30 s (Branson Sonifier 450), and the debris was removed by centrifugation. Sonication was optimized to produce average DNA fragments of 1 kb. The supernatant was split into several aliquots. One aliquot of the soluble chromatin was saved at −20°C for preparation of input DNA, and the remainder was diluted 10 times in immunoprecipitation (IP) buffer [10 mM Tris-HCl, pH 8.0, 0.1% SDS, 1% Triton X-100, 1 mM EDTA, and 150 mM NaCl] containing phosphatase and protease inhibitors, then incubated overnight at 4°C with a polyclonal antibody to human HIF-1α (Novus Biologicals; Littleton, CO; www.novus-biologicals.com). DNA-protein complexes were isolated on salmon sperm DNA linked to protein A agarose beads and eluted with 1% SDS and 0.1 M NaHCO₃. Cross-linking was reversed by incubation at 65°C for 5 h. Proteins were removed with proteinase K, and the DNA was extracted with phenol/chloroform, redissolved, and PCR-amplified with CX3CR1 promoter primers (sense: 5′-attcagcagatatagggcag-3′; antisense: 5′-acagtcagctctcattaatg-3′), which give a product of 202 bp. The cycling parameters were 35 cycles, with each cycle consisting of denaturing at 94°C for 30 seconds, annealing at 60°C for 30 seconds, and elongating at 72°C for 30 seconds, with an additional 7-minute incubation at 72°C after completion of the last cycle.
Chemotaxis Assay
Cells were labeled with a vital dye, either CMFDA (green; 492 nm ex, 516 nm em; Molecular Probes) or CMTMR (red; 540 nm ex, 566 nm em; Molecular Probes; Eugene, OR; http://www.probes.com), at a final concentration of 1–10 μM in pre-warmed Hank's Balanced Salt Solution (HBSS) for 10 min at 37°C. The medium was replaced with pre-warmed CCM and incubated for 30 min at 37°C. Cells were then washed three times with PBS, dislodged with 0.5 mM EDTA, and dispersed into homogeneous single-cell suspensions in serum-free and phenol red-free α-MEM at a concentration of 10⁵ cells/300 μl. To assess chemotaxis, a modification of the Boyden chamber method was used. CMFDA-labeled cells (10⁵) were added to each insert of a 24-well plate (BD Falcon HTS FluoroBlok Insert System, 8-μm pore size), and 1 ml of phenol red-free α-MEM with increasing concentrations of SDF-1α (PeproTech) or fractalkine (PeproTech) was added to the lower compartments. To block the activity of SDF-1α, cells were incubated for 30 min on ice with an antibody against human CXCR4 (12G5, 10 μg/mL, R&D) before loading onto the insert. To block the activity of fractalkine, an antibody against CX3CL1 (81506, 10 μg/mL, R&D) was added to the medium of the lower compartments 30 min before loading. Migration was allowed to proceed for 14 to 20 h at 37°C in air. Cells that had migrated to the lower surface of the filter were counted under a fluorescence-equipped microscope at 100× magnification. The average number of migrating cells per field was assessed by counting at least four random fields per filter. Data points indicate the mean obtained from three separate chambers within one representative experiment.
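The migration readout above (at least four random fields per filter, averaged, with the mean taken over three chambers) can be sketched as a small calculation; all cell counts below are invented for illustration:

```python
def mean_migrated_per_field(field_counts, min_fields=4):
    """Average migrating cells per microscope field.

    field_counts: cells counted in each random field on the lower
    filter surface; the protocol specifies at least four fields per
    filter, enforced here as a sanity check.
    """
    if len(field_counts) < min_fields:
        raise ValueError("count at least %d fields per filter" % min_fields)
    return sum(field_counts) / len(field_counts)

# Hypothetical counts from three separate chambers (four fields each):
chambers = ([52, 48, 61, 39], [44, 57, 50, 49], [60, 41, 55, 48])
per_chamber = [mean_migrated_per_field(c) for c in chambers]
overall = sum(per_chamber) / len(per_chamber)  # one data point on the plot
```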
Infusion of MSCs in Chick Embryos
Chick embryos were removed from storage at 4°C and incubated at 38°C for 40–48 h until they developed to about Hamburger-Hamilton stages 12–14. To lower the embryo, 2 ml of albumin was removed from the tapered end of the egg using an 18-gauge needle and a syringe. An oval opening approximately 1 inch long was made in the shell on the broad side to view the embryo. Filtered 5% India Ink (Pelikan) in PBS was injected under the embryo to increase the contrast. The two or three most recently formed of the 12–20 somites present in the chick embryos were crushed or removed using a titanium needle (McCrone Microscopes, Westmount, IL). Before infusion into chick embryos, human MSCs were plated at 1,000 cells/cm² and incubated for 1 day under hypoxic or normoxic conditions. A 1:1 mixture of CMFDA-labeled (hypoxic) and CMTMR-labeled (normoxic) MSCs was suspended at 4,000 cells per μl in PBS at 4°C, and 10–15 μl of the cell suspension was injected into the space created by removing the somites. An equal volume of PBS was injected into control embryos. For gene marking to trace the fate of transplanted cells, we also transduced MSCs with lentiviral vectors carrying the pWPT-GFP construct [22] donated by the Trono laboratory (Swiss Institute of Technology Lausanne, Lausanne, Switzerland). The embryos were harvested for immunofluorescence or real-time PCR analysis 3 days after the infusion, at which time they had developed to stages 25–26 and had begun to develop organs such as the heart, brain, and liver.
Real-time PCR Analysis for the Human 17 Chromosome
Embryos were harvested, rinsed with PBS, fixed in 4% paraformaldehyde in PBS at 4°C overnight, transferred to 30% sucrose solution, flash frozen, and cut into 20-μm sections with a cryostat. Every 2nd and 3rd section was mounted on a slide for microscopy, and the remaining sections were collected into a tube for real-time PCR analysis. For PCR assays, sections were incubated with 1 ml of proteinase K solution (0.4 mg/mL proteinase K/10 mg/mL SDS in Tris buffer at pH 7.4; Sigma) at 55°C for 24 h. The DNA was extracted using phenol/chloroform (Sigma). The precipitated DNA was purified using the DNeasy Tissue Kit (Qiagen, Chatsworth, CA).
Approximately 300 ng of chicken embryo DNA was used to analyze the engraftment of human MSCs using a Universal PCR mix (Applied Biosystems, Foster City, CA) and primers and a probe for a human-specific 850-bp fragment of the alpha-satellite DNA on human chromosome 17. The primer and probe set used was sense: 5′-caagtcaagcgccccatgaa-3′, antisense: 5′-ttgagccaacttgtgcctctctc-3′, and probe: 5′-FAM-tgcatttatggtgtggtcccgcg-TAMRA-3′ [21]. The conditions were initial denaturation at 94°C for 10 min, then 40 cycles with denaturation at 94°C for 1 min and annealing at 60°C for 1 min. Standards were run simultaneously using pure uninjected chicken embryo DNA and purified DNA from MSCs.
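The standards run alongside the samples allow human DNA content, and hence engraftment, to be read off a Ct-versus-log(DNA) standard curve. The source does not describe its interpolation procedure; the sketch below uses the common log-linear fit, with idealized, invented Ct values (a perfect assay loses about 3.3 Ct per 10-fold dilution):

```python
def fit_standard_curve(log10_dna_ng, ct_values):
    """Least-squares line Ct = slope * log10(DNA) + intercept."""
    n = len(ct_values)
    mx = sum(log10_dna_ng) / n
    my = sum(ct_values) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(log10_dna_ng, ct_values))
    sxx = sum((x - mx) ** 2 for x in log10_dna_ng)
    slope = sxy / sxx
    return slope, my - slope * mx

def human_dna_ng(ct, slope, intercept):
    """Invert the standard curve to estimate human MSC DNA in a sample."""
    return 10.0 ** ((ct - intercept) / slope)

# Hypothetical standards: 10-fold dilutions of purified MSC DNA (1-1000 ng).
logs = [0.0, 1.0, 2.0, 3.0]
cts = [33.0, 29.7, 26.4, 23.1]  # idealized: -3.3 Ct per decade
slope, intercept = fit_standard_curve(logs, cts)
```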
Microscopy
Sections from the embryos were analyzed both by epifluorescence of GFP and by immunolabeling. Monoclonal antibody to cardiotin, α-heavy-chain myosin, NF-H, GFAP, or MSOP (Chemicon, Temecula, CA, and Abcam, San Antonio, TX) was incubated under conditions recommended by the supplier overnight at 4°C. To detect fusion with host cells, slides were also incubated with a mAb against a chicken-specific antigen (8F3, Developmental Studies Hybridoma Bank at the University of Iowa). The slides were washed three times for 5 min with PBS at room temperature. They were then incubated with an Alexa-594-tagged secondary anti-mouse antibody (Molecular Probes) at a 1:1,000 dilution for 1 h. Controls included omitting the primary antibody. Slides were evaluated using an epifluorescence microscope (Eclipse 800; Nikon).
"year": 2007,
"sha1": "a3a0b534e947b73273773b7ed905a177f7a20333",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1371/journal.pone.0000416",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a3a0b534e947b73273773b7ed905a177f7a20333",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.