Clinicopathological features and prognosis of small gastrointestinal stromal tumors outside the stomach

The aim of the present study was to assess the clinicopathological features and prognostic factors of primary small gastrointestinal stromal tumors (GISTs) outside the stomach. The clinical data, clinicopathological features and prognostic factors of 20 patients with a pathologically confirmed diagnosis of non-gastric GIST that were treated at Liaoning Cancer Hospital & Institute between July 2006 and December 2013 were retrospectively analyzed. In total, 15 patients were male and 5 were female, with a median age of 58 years (range, 44–82 years). A change in bowel habits was the presenting symptom in 6 out of 8 patients with rectal small GISTs, while patients with small GISTs in other locations demonstrated no overt symptoms and the lesions were detected by systematic examinations for other diseases or during abdominal surgical procedures performed on other organs. In total, 19 of the 20 patients underwent surgery, and 1 patient with a rectal GIST received continuous oral imatinib mesylate (400 mg once a day) instead of undergoing surgery. The mean diameter of the tumors was 1.55±0.54 cm (range, 0.3–2.0 cm) and the median was 1.70 cm. The lesions were mainly of spindle cell morphology, and immunohistochemistry revealed that the expression rates of cluster of differentiation (CD)117, CD34 and discovered on GIST-1 were 85, 80 and 70%, respectively. A high mitotic index was more frequent in small rectal GISTs than in small GISTs at other sites (P<0.05), while its frequency did not differ significantly between small GISTs >1 cm and ≤1 cm in size (P=0.995). All 20 patients were followed up, with a median follow-up duration of 49.5 months (range, 10.5–94.4 months). At the end of the follow-up period, tumor recurrence had occurred in 5 patients and 1 patient succumbed following progression. When analyzed by tumor site, the recurrence-free survival (RFS) time of patients with small rectal GISTs differed significantly from that of patients with small GISTs in other positions. The clinical symptoms of non-gastric small GISTs were not evident and the lesions were challenging to detect. Small GISTs, regardless of size, possessed malignant potential and, once detected, should be surgically resected. Lesions located in the rectum demonstrated an increased degree of malignancy and were more likely to recur. Tumor size and the Ki67 index could not be considered as prognostic factors of non-gastric small GISTs.

Introduction
Gastrointestinal stromal tumors (GISTs) are the most common mesenchymal tumors of the gastrointestinal tract, with an incidence of 1-2 cases per 100,000 individuals worldwide. The leading site of involvement is the stomach, which accounts for ~45.0% of all cases, followed by the small intestine, omentum, colorectum and esophagus (1)(2)(3). There is general agreement that the principal treatment strategy for primary GISTs measuring >2 cm should be surgery when curative resection is possible. A high mitotic count, non-gastric location, large size, rupture and insufficient adjuvant imatinib are considered to be factors independently associated with a poor prognosis. Certain patients with primary GIST are cured by surgery alone; however, administration of adjuvant imatinib mesylate for at least 3 years is now recommended when the risk of recurrence is considered to be significant (4)(5). 
Imatinib mesylate is the first choice for patients with recurrent, metastatic or unresectable GISTs. Patients receiving preoperative imatinib who exhibit a complete/partial response or stable disease based on the Choi criteria, may be candidates for surgery. Other patients without an indication of successful surgery should accept long-term imatinib treatment until progression, then change to second-line target agents or join clinical trials. Cytoreductive surgery only for recurrent, metastatic or unresectable GISTs is not recommended (6)(7)(8). In general, there are no specific symptoms for early-stage GISTs, which leads to late treatment (9). The clinical presentation of GISTs is highly variable, according to the tumor site and size. The most frequent symptoms are anemia, weight loss, gastrointestinal bleeding, abdominal pain and mass-associated symptoms (10). With the development of endoscopy, particularly the application of endoscopic ultrasonography (EUS), small GISTs in the stomach, duodenum and esophagus are easy to be detected with more associated studies (11), while small GISTs in other sites of the body are challenging to detect, with a smaller number of associated studies. However, controversy remains for the surgical indications and timing of surgery for the treatment of small GISTs with a diameter <2 cm (12). The present study retrospectively analyzed the clinical data of 20 patients with GISTs ≤2 cm in diameter that were located outside the stomach and diagnosed between July 2006 and December 2013, and discussed the clinicopathological features and prognostic factors. The study was approved by the Ethics Committee of Liaoning Cancer Hospital & Institute (Shenyang, China), and written informed consents were obtained from all patients. Materials and methods Patients. Between July 2006 and December 2013, 20 patients with non-gastric small GISTs were treated at the Liaoning Cancer Hospital & Institute (Shenyang, Liaoning, China). In total, 19 of these patients underwent surgery and the lesions were pathologically confirmed to be GISTs subsequent to surgery. The remaining 1 patient did not undergo surgery, but the diagnosis was pathologically confirmed by biopsy. Out of the 20 patients, 15 were male and 5 were female, with ages ranging between 44 and 82 years (median, 58 years). A change in bowel habits was the original symptom in 6 out of 8 patients diagnosed with rectal small GISTs, while small GISTs in other locations resulted in no overt symptoms and were detected during systematic examinations for other diseases or abdominal procedures performed on other organs. None of the patients possessed a history of familial GISTs. Treatment methods. In total, 19 patients underwent the R0 resection and no mortality or other serious complications occurred during the perioperative period. Out of these 19 patients, 7 patients possessed rectal GISTs, among which 3 lesions demonstrated transanal local excision, 2 lesions were excised using high anterior resection (HAR), 1 lesion was excised using Hartmann's procedure and 1 lesion was excised using Miles' procedure. In addition, 4 patients possessed small intestinal GISTs, which resulted in 3 patients undergoing bowel resection and 1 patient undergoing enucleation. Colon GISTs were identified in 4 patients, consisting of 3 lesions located in ascending colon, with 1 patient possessing a synchronous GIST of the descending colon, and 1 lesion located in the transverse colon. All these patients underwent radical colon resection. 
Peritoneal GISTs, which were located in the mesentery and omentum, were identified in 4 patients, all of whom underwent complete resection. All patients underwent R0 resection without receiving any targeted drugs or undergoing other treatments. The tumor of 1 patient diagnosed with rectal GIST was located in the Dentate line; therefore, the patient did not undergo Miles' procedure and was continuously orally administered with the targeted drug imatinib mesylate (400 mg once a day; Novartis, Basel, Switzerland). Pathology and immunohistochemistry. All tissue samples underwent pathological examination. Firstly, the shape of the tumor cells was assessed according to hematoxylin and eosin staining. The GISTs were revealed to mainly be formed by spindle cells, with few formed by epithelioid cells or mixed cells. Immunohistochemical staining was then performed subsequent to the cells being identified as similar to GIST cells in morphology. The main detection index consisted of the expression of discovered on GIST-1 (DOG-1), cluster of differentiation (CD)117, CD34, α-smooth muscle actin (α-SMA), desmin and S-100, as well as the mitosis count in 50 high power fields (HPFs). Subsequent to 2012, the Department of Pathology of the Liaoning Cancer Hospital & Institute added the detection of the Ki67 index following immunohistochemical staining to the assessment of GIST surgical specimens. As a result, the specimens obtained prior to 2012 lacked records of the Ki67 index, and immunohistochemical staining for Ki67 was therefore performed in the present study. The tissues were observed under the microscope with a x40 object lens (CH-BI45-T; Olympus, Tokyo, Japan), and scoring for the expression of Ki67 was performed by counting at least 500 tumor cells in 50 HPFs. All brown-stained nuclei, regardless of the staining intensity, were considered to be positive for Ki67 expression. However, there may be certain errors as the specimens had been stored for a long period of time. Patients in this group did not undergo gene detection, as all lesions were smaller, with a good prognosis. The majority of patients were not administered with targeted agents and there was a low desire for gene detection. National Institutes of Health (NIH) recurrence risk assessment. In accordance with the NIH risk stratification reported in the study by Joensuu (13) and the NCCN Task Force study (14), GISTs are divided into four recurrence risk stratifications, consisting of extremely low, low, moderate and high risk. The present GISTs were classified according to the NIH risk stratification. Follow-up. Outpatient review and telephone calls were used to perform the follow-up and the last follow-up was August 1, 2014. The recurrence-free survival (RFS) time was calculated from the date of surgery to the date of clear relapse, metastasis or the end of follow-up. Statistical analysis. IBM SPSS Statistics 19.0 software (IBM, Armonk, NY, USA) was used for the present statistical analyses. The data were expressed as the mean ± standard deviation. Categorical data were expressed as the rate or percentage and were analyzed using Fisher's exact test. The RFS time was calculated according to the Kaplan-Meier method. The log-rank test was used to compare the survival distributions. P<0.05 was considered to indicate a statistically significant difference. Results Clinicopathological features. The clinicopathological features of the 20 patients are reported in Table I. The mean tumor diameter was 1.55±0.54 cm (0.3-2.0 cm). 
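As a concrete illustration of the categorical comparison described in the Statistical analysis subsection above, the following minimal Python sketch applies Fisher's exact test to tumor site versus mitotic index, using the counts reported later in this study (5 of 8 rectal and 1 of 12 non-rectal lesions with >5 mitoses per 50 HPF); it is an illustrative reconstruction, not the authors' analysis code.

```python
# Illustrative sketch (not the authors' code): Fisher's exact test on
# tumor site versus mitotic index, with the 2x2 counts reported in this study.
from scipy.stats import fisher_exact

#            >5 mitoses/50 HPF   <=5 mitoses/50 HPF
table = [[5, 3],    # rectal small GISTs (n=8)
         [1, 11]]   # non-rectal small GISTs (n=12)

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"OR = {odds_ratio:.1f}, two-sided P = {p_value:.3f}")  # P < 0.05, in line with the reported difference
```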
In total, 6 GISTs were combined with other digestive system tumors, consisting of 4 GISTs in the small intestine and 2 GISTs in the enterocoelia, and were found during pre-operative examination or incidentally during surgery (Figs. 1 and 2). According to the analysis of cell morphology, 19 tumors consisted of spindle cells (95%), 1 tumor consisted of epithelioid cells (5%) and mixed cell morphology was not observed. The results of immunohistochemical analysis revealed that the rates of CD117, CD34, DOG-1, S-100, α-SMA and desmin expression were 17/20 (85%), 16/20 (80%), 14/20 (70%), 6/20 (30%), 4/20 (20%) and 1/20 (5%), respectively. The rate of combined CD117, CD34 and DOG-1 expression was 40%. The mean Ki67 index subsequent to immunohistochemical staining was determined to be 4.65±2.23% (range, 1-10%), and 5% was considered to be the cutoff in the stratified statistics. In total, 13 tissues demonstrated a Ki67 index ≤5% and 7 tissues demonstrated an index >5%. The number of mitoses was observed in 50 HPFs; 14 tissues were determined to possess a mitotic index of ≤5 mitoses per 50 HPFs and 6 were determined to possess a mitotic index of 6-10 mitoses per 50 HPFs. As all cases were non-gastric with a diameter ≤2 cm, according to the NIH risk stratification, regardless of tumor site and size, 14 tumors were classified as extremely low risk and 6 tumors were classified as moderate risk, which was the same result as that of the mitosis-based risk classification. Clinicopathological associations. The present patients consisted of 20 patients with non-gastric small GISTs, with 8 lesions of the rectum and 12 lesions of the non-rectum, consisting of 4 each in the colon, small intestine and enterocoelia. The GISTs were divided into rectal and non-rectal tumors, according to the site, and statistical analysis was performed. No significant differences were identified between the two groups in terms of patient age or other demographic characteristics.
(Figure legend: The titanium clips marked the rectal tumor (blue arrow), which was diagnosed as a highly-differentiated adenocarcinoma, with a diameter of ~0.7 cm, that had infiltrated the submucosa but had not metastasized to the lymph nodes. The lesion in the lower-right of the image was diagnosed as a GIST (red arrow), and spindle cells were observed following hematoxylin and eosin staining, with 2 mitoses per 50 high power fields. The findings of immunohistochemical analysis were CD117(+), discovered on GIST-1(+) and CD34(+), with a Ki67 index of 10%, α-SMA(-), S-100(-) and desmin(+). GIST, gastrointestinal stromal tumor; CD, cluster of differentiation.)
Notably, in 7 tumors with a diameter ≤1 cm, 2 tumors demonstrated a mitotic index of >5 mitoses per 50 HPF (Table III). Survival time and the association with clinicopathological factors. In total, 20 patients were followed up and the median follow-up time was 49.5 months (range, 10.5-94.4 months). At the end of follow-up, 5 patients had experienced tumor recurrence, 4 of whom possessed rectal tumors and 1 possessed an enterocoelial tumor. Among these patients, 1 patient succumbed following progression, 1 patient succumbed to accidental death and 1 patient succumbed to heart disease. In addition, 1 patient with rectal GIST received imatinib mesylate continuously and is currently in a stable condition. Due to the low incidence of recurrence in the present study, the median RFS time could not be calculated. As there were few samples and cases of tumor progression and mortality, univariate and multivariate analyses could not be performed. 
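The RFS comparisons that follow (Kaplan-Meier estimation with log-rank tests, as described in the Statistical analysis subsection) can be reproduced with standard survival-analysis tooling. The short Python sketch below uses the lifelines package on a small hypothetical table; the column names and values are illustrative assumptions, not the study data.

```python
# Hedged sketch: Kaplan-Meier RFS curve and a log-rank comparison of rectal
# versus non-rectal small GISTs, on hypothetical per-patient data.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.DataFrame({
    "rfs_months": [12.0, 30.5, 49.5, 60.0, 24.0, 80.0, 55.0, 70.0],  # hypothetical follow-up
    "recurred":   [1, 1, 0, 0, 1, 0, 0, 0],                          # 1 = recurrence observed
    "site":       ["rectum", "rectum", "rectum", "non-rectum",
                   "non-rectum", "non-rectum", "non-rectum", "rectum"],
})

rectal = df[df["site"] == "rectum"]
other = df[df["site"] == "non-rectum"]

km = KaplanMeierFitter()
km.fit(rectal["rfs_months"], event_observed=rectal["recurred"], label="rectal")
km.plot_survival_function()  # RFS curve for the rectal group

result = logrank_test(rectal["rfs_months"], other["rfs_months"],
                      event_observed_A=rectal["recurred"],
                      event_observed_B=other["recurred"])
print(f"log-rank P = {result.p_value:.3f}")
```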
Analyzing the RFS time according to the Ki67 index revealed no difference between Ki67 indices of ≤5% and >5% (P=0.354). The RFS time was then analyzed between the rectum and non-rectum groups, which revealed a significant difference in RFS time between the two groups (P=0.049). The RFS time was also analyzed according to tumor size, and no difference in RFS time was found between tumors ≤1 cm in diameter and tumors >1.0 and ≤2.0 cm in diameter (Fig. 3).

Discussion
Small GISTs have been the focus of previous studies, but the majority of small GISTs exhibit no symptoms, with the exception that small GISTs near the rectum may result in a change of bowel habits (9,11,15), which occurred in 6 of the 8 patients with rectal GISTs in the present study. Other enterocoelial small GISTs exerted no compression on other tissues and organs, as the lesions were small in size, with infrequent symptoms of bleeding, necrosis and perforation, which made them challenging to identify. The majority of GISTs reported in the literature are located in the stomach, and one of the most important reasons for this is the development of endoscopy and EUS (11,16). This technology increased the sensitivity of stomach and duodenum examinations to small lesions. Several GISTs combined with other malignant tumors, mostly gastric cancer, have been incidentally detected during surgery or in the resected gastric specimen (11,(15)(16)(17). Few individual cases of small GISTs in the intraperitoneal, colorectal and small intestine regions that were detected incidentally during surgery for another disease or during physical examination, which was similar to the identification of lesions in the present study, have been reported in the literature (18,19). Analysis of the histopathological type of the lesions revealed that spindle cells accounted for 95% of the present cases, and the rates of CD117, CD34 and DOG-1 expression were approximately equal to those in overt GISTs, which was consistent with pathological descriptions of small GISTs in the majority of the literature (20,21). Regarding the features and treatment of small GISTs, controversy remains as to whether these small lesions represent the early stages of malignant GISTs or are hyperplastic proliferations of an entirely benign nature, which in certain cases may not even represent clonal neoplastic proliferation (12,22). The National Comprehensive Cancer Network (NCCN) recommends surgical resection for tumors >2 cm in diameter due to the malignant potential of these lesions, while tumors <2 cm in diameter may be conservatively followed up (14). Certain studies have identified poor prognostic features in small GISTs, since suspected small GISTs >1.4 cm in diameter with irregular margins, identified using EUS, were associated with significant progression; it has been suggested that this subgroup be monitored by more intensive follow-up (16). Due to the limited number of reported cases in the literature, it is challenging to draw conclusions regarding the prognosis of small GISTs from large samples. However, it is well known that tumor size and the mitotic index are the best prognostic indicators for determining the malignant potential of GISTs (23). In the present study, although the small GISTs were ≤2 cm in size, the mitotic index of 6 small non-gastric GISTs was >5 mitoses per 50 HPF and the mitotic index of 2 out of 7 small GISTs ≤1 cm in diameter was >5 per 50 HPF, which indicated the malignant potential and implied the necessity of surgical resection of small GISTs.
(Table II. Association between the tumor site and characteristics of 20 patients diagnosed with GISTs, determined by the OR and corresponding 95% CI. Columns: characteristic; rectum, n; non-rectum, n; OR (95% CI); P-value.)
No significant difference was identified between the mitotic index in the ≤1 cm diameter and 1-2 cm diameter groups, indicating that mitosis occurs in the early stage of disease, which is consistent with the findings of numerous studies in the literature (24,25). Gene detection was not performed in the present study, but a previous study by Corless et al (26) performed c-kit gene mutation testing on 13 small GISTs that were identified during autopsy or found incidentally, among which mutations of the c-kit gene, mostly in exon 11 (84.7%), were identified. An associated study (20) revealed that even the smallest GIST (diameter, 0.2 cm) harbored mutations of the c-kit gene. These studies indicated that mutation of c-kit or platelet-derived growth factor receptor (PDEGFR) was a critical event at an early stage of GIST development. From the aforementioned analysis, the present study hypothesized that all small non-gastric GISTs ≤2 cm in diameter, which demonstrate malignant potential and may eventually develop into overt GISTs, should be resected once diagnosed or incidentally identified. It is unnecessary to set a cutoff, such as 1 or 1.4 cm, within small GISTs in order to predict the group that may possess an increased chance of recurrence, as certain previous studies have reported (24,27). Though rectal GISTs are less common than non-rectal GISTs, accounting for ~5% of total GISTs (28), they remain the focus of studies, as the symptoms of rectal GISTs appear earlier compared with those of GISTs located in the enterocoelia. Rectal GISTs may also be detected by digital rectal examination, fiber colonoscopy and ultrasonic endoscopy (29). The principle of surgery for rectal GISTs is different from that for rectal cancer, as no lymph node dissection or TME resection is required, but a tumor-free resection margin and complete resection are necessary (30). Liu et al (31) reported the results of the surgical treatment of 21 patients with rectal GISTs and considered that mitosis, a positive resection margin and open surgery may be poor prognostic factors, with the DFS of the group that received open surgery being decreased compared with that of the group that received local excision. It was suggested that for rectal GISTs located <5 cm from the anus, transanal resection should be performed. For larger lesions, initial adjuvant therapy with imatinib mesylate followed by surgical treatment subsequent to a reduction in lesion size has been recommended. The findings of the study by Wang et al (32) demonstrated that this novel adjuvant therapy for rectal GISTs is safe and effective, with a clear benefit for local excision in terms of feasibility, function preservation and safety. Of the 20 patients in the present study, 8 possessed rectal GISTs. In total, 1 patient was continuously administered imatinib mesylate instead of undergoing surgical resection, and 7 patients underwent surgical treatment without receiving adjuvant drugs. Recurrence occurred in 4 out of these 7 patients, including 2 patients that underwent transanal resections, 1 patient that underwent rectal anterior resection and 1 patient that underwent Hartmann's procedure. 
Due to the limited number of patients, the prognostic effect of various surgical methods was unable to be compared, but the malignant potential of rectal GISTs was determined to be increased compared with GISTs in other sites. Rectal GISTs were also easy to assess for local recurrence. The mitotic index was compared between the rectum group, in which 5 lesions demonstrated >5 mitoses per 50 HPF, and the non-rectum group, in which 1 lesion demonstrated >5 mitoses per 50 HPF. This difference was statistically significant. This finding was consistent with certain findings in the literature (33,34), as rectal GISTs have been reported to possess comparatively higher mitotic activity, confirming a distinctively aggressive biology. Investigation of the rectum group revealed that the RFS time of this group was significantly decreased compared with the RFS time of the non-rectum group. The NCCN Task Force (14) previously reported 111 rectal GISTs with a mitotic index of >5 mitoses per 50 HPF and ≤2 cm in size that demonstrated a recurrence rate of 54%. In the present study, there were 5 patients with a mitotic index of >5 mitoses per 50 HPFs in the rectal GISTs. Out of these patients, 3 developed recurrence (Table II), resulting in a 60% recurrence rate. However, due to the smaller sample size, there was little difference between the results of the current study and the NCCN guidelines, which indicated that the metastatic potential of rectal small GISTs was increased compared with the lesions at other sites. Surgical treatment was required once rectal GISTs were detected, and those patients with a mitotic index >5 per 50 HPFs should receive surgery combined with the administration of imatinib mesylate. As one of the most important immunocytochemical markers of proliferation in tumors, the Ki67 index is already accepted as a clinical predictor of the prognosis of breast cancer or neuroendocrine tumors (35,36), but the criteria of the Ki67 index in GISTs is not well-defined yet. Zhao et al (37) detected the Ki67 index in 370 patients and hypothesized that the Ki67 index was an independent prognostic factor for the RFS time of patients with GISTs subsequent to analysis. Wang et al (38) reported the association between Ki67 and clinicopathological factors and hypothesized that Ki67 was associated with NIH risk stratification. At present, the function of the Ki67 index is valued in the clinic, so Ki67 detection is performed on GIST surgical specimens. In the present study, Ki67 detection was conducted for those specimens that had not undergone Ki67 detection previously. However, as the specimens had been stored for a long time, this detection may have resulted in errors. Also, due to the limited numbers of specimens, associations between Ki67 expression and factors including tumor size, site and mitosis were not identified. The RFS time of the patients with a Ki67 index ≤5% and those with an index >5% was not significantly different, which requires additional analysis by increasing the number of specimens assessed. There are a few limitations of the present study. Firstly, no examination of the expression of the c-kit and PDEGFR genes was performed in the present patients. Secondly, additional studies should be performed with increased numbers of patients to investigate the association between clinicopathological features, including the Ki67 index, mitotic index and risk grade, and the prognosis. 
Thirdly, multicenter randomized controlled trials should be performed to compare biological behaviors, clinicopathological features and prognostic differences between small gastric and non-gastric GISTs and the significance of surgery in the treatment of small GISTs. It is challenging to detect non-gastric small GISTs as the clinical symptoms are not evident, and the majority of these lesions are detected in procedures performed on other organs. Non-gastric GISTs, regardless of the size, may demonstrate mitotic change and recurrence to indicate malignant potential. Once this is detected, surgical resection is required. Rectal small GISTs with increased malignant potential and recurrence rates require more attention. At present, it is challenging to utilize the Ki67 index as a prognostic factor for the assessment of non-gastric small GISTs, and this requires additional investigation by increasing the number of specimens studied.
Identifying Risk Factors for Self-reported Mental Health Problems in Psychiatry Trainees and Psychiatrists in Mexico Objective The objective was to determine and compare demographic features, professional activities and adversities, physical health conditions, and self-care behaviors related to the most frequently self-reported mental health problems among psychiatrists and psychiatry trainees. Methods A cross-sectional, retrospective, comparative study was conducted on a total of 330 (48.2%) psychiatry trainees and 355 (51.8%) psychiatrists from Mexico through an online survey. Demographic features, professional activities and adversities, physical and mental health problems, self-care behaviors, and social support were examined. Comparative analyses and multiple logistic regression models were performed. Results Major depression, anxiety, and burnout were the most common mental health problems reported with a higher frequency of anxiety disorders in psychiatry trainees. Being a woman, having a physical health problem, and lack of restful sleep were the main risk factors in both groups. Consultation in the government sector and having patients with severe suicidal ideation affected more psychiatry trainees. Perceived discrimination and inadequate eating schedules were risk factors for mental health problems for psychiatrists. Conclusion Psychiatry trainees constitute a vulnerable group for anxiety disorders. Particular attention should be paid to how students cope with the training experience to determine whether additional support is required. These professionals face major stressors leading to a high prevalence of depression, burnout, and anxiety. Encouraging psychiatrists to have better health habits is a step in the right direction, which must be accompanied by tangible organizational avenues to do so and creating a culture that truly promotes self-care. there is little coordination among these health care systems [6] that may particularly affect psychiatry trainees as in the government sector most of the clinical activities are largely staffed by trainees in addition to their role as students. On the other hand, psychiatrists working in the private sector may neglect their personal care by saturating their time with multiple consultations. Psychiatrists report a higher impact on their level of distress caused by the doctor-patient relationship than other physicians [7,8]. The nature of the psychiatrist-patient relationship is unique among medical specialties, as psychiatrists themselves become "tools" in their profession, which likely allows emotions to intensify or to be affected in the context of their clinical work [9]. Psychiatrists' ability to identify and handle emotions provides greater awareness of and sensitivity to human suffering, while dealing with troubled people continuously over extended periods of time makes them more vulnerable to vicarious trauma and compassion fatigue [10]. Moreover, most psychiatrists experience stressful adversities related to their clinical practice such as attacks by violent patients or hostile relatives of patients, and as a result of managing patients with severe suicidal ideation [9,11] or experiencing patient suicide [12]. They also may face additional strains that increase their risk of mental health problems such as excessive work hours, litigations, a generally solitary professional practice coupled with perceived stigma regarding their profession from the general population and even from their own colleagues [13]. 
In addition, physicians are known to take poor care of themselves. In an extensive review of physician wellness, Wallace et al. in 2009 described how they tend to work even when sick, expect colleagues to do the same, are unlikely to have a family doctor, often rely on avoidance and denial as coping strategies, self-diagnose and self-prescribe treatments, and in general tend to neglect their own health [1]. Health care organizations are yet another factor leading to poor self-care, by failing to provide resources for basic needs such as rest, physical activity, and nutrition. As in psychiatrists, mental health problems have been reported as common experiences during residency training [14]. Being a psychiatry trainee confers additional risks that could affect their mental health: long working hours including night shifts, a sense of vulnerability to possible aggression by patients, the imbalance between their professional experience and the responsibility of treating patients, academic demands, and learning-curve errors perceived as failures [13,15,16], together with poor-quality relations within institutions: (1) abuse, harassment, or discrimination by supervisors, peers, and other health care providers, and (2) a collaborative climate and sense of belonging are not fostered [17]. Despite this adverse scenario, psychiatrists generally report being satisfied with their daily practice. In a study by Cordoba et al. in 2009, comprising 19 Latin American countries, 94% (n=994) of the Mexican psychiatrists surveyed reported being satisfied, reflecting a good level of commitment with the profession despite the adversities previously mentioned [18]. Nevertheless, the small number of psychiatrists in Mexico (with an estimated rate of 3.68 per 100,000 inhabitants) and the uneven distribution across the country (60% of psychiatrists are concentrated in three cities) [19] make them more exposed to factors that can affect their mental health and increase their risk of developing a psychiatric disorder. Indeed, factors such as the workload and long shifts (over 12 h), specific to the practice of medicine in Mexico, could further increase the difficulties faced by these specialists [20,21]. This study provides a retrospective assessment of the main demographic features, professional activities and adversities, physical health problems, and self-care behaviors related to self-reported mental health problems in Mexican psychiatrists and psychiatry trainees. We hypothesize that (1) depression, burnout, and anxiety disorders will be the most prevalent selfreported mental health problems in both psychiatrists and psychiatry trainees; (2) being a woman, having a physical health problem, not having restful sleep, and greater distress will be the most important risk factors for presenting mental health problems in both psychiatry trainees and psychiatrists; and (3) psychiatry trainees will be more affected by their working activities than psychiatrists. Methods This cross-sectional, comparative, retrospective study of Mexican psychiatrists and psychiatry trainees was designed to be used in an online survey. The authors assert that all procedures contributing to this work comply with the ethical standards of the relevant national and institutional committees on human experimentation and with the Helsinki Declaration of 1975, as revised in 2008. 
The contents of the survey and all study procedures were approved by the Ethics and Research Committees of a psychiatric clinical facility at Mexico City dedicated to research and clinical attention (No. 09-CEI-010-20170316). Participants Recruitment was performed using a convenience sample approach with psychiatry trainees and psychiatrists in the country who were willing and able to participate. Psychiatry trainees in Mexico are medical doctors admitted for a 4-year residency program; residents from the first to the fourth year of residency were invited to participate. Psychiatrists are medical doctors who have graduated from the medical specialty in psychiatry currently practicing as specialists. A study invitation was circulated, by email and social media (Facebook and WhatsApp), with a link explaining the nature and procedures of the study to participants at the beginning of the online survey, first by stating the anonymity of the survey, then explaining that questions were related to physical and mental health, hobbies, and adversities related to the profession, that the study was approved by the Ethics and Research Committees, and that consent to participate could be withdrawn at any time by dropping out of the survey. A link to an electronic information sheet and consent form was included. Those who agreed to participate and provided electronic consent proceeded to complete the survey. Recruitment was carried out from January 2018 to December 2018. Assessment Procedure The survey was conducted in Spanish and took approximately 25-30 min to answer. It was divided into four sections: demographic features, professional activities and adversities, physical and mental health problems, and self-care behaviors and social support. The first section, "Demographic Features," included questions related to the subjects' age, sex, marital status, and whether they had children. The second section, "Professional Activities and Adversities," included current clinical consultations in both the government and the private sectors, the number of hours per week spent on these activities, and the current maximum working hours per day (excluding the continuous 36-h medical shift schedule). It asked about professional adversities since the beginning of the psychiatry residency and included receiving attacks (physical, verbal, or psychological), the identity of the assailant (patient, patient's relative, or colleague), lawsuits, being the attending physician of patients with suicidal ideation or who committed suicide, and perceived discrimination (discrimination dimension of the Spanish version of King's Internalized Stigma Scale) [22,23] as a psychiatry trainee or psychiatrist. The third section included self-report questions on "Physical and Mental Health Problems." Participants were asked about the presence of any physical health problem (respiratory, cardiovascular, endocrinological, musculoskeletal, or gastrointestinal diseases), mental health problems (such as major depression, anxiety disorders, burnout, and suicidal ideation) during their professional careers, and whether they had received specialized treatment for these problems. Perceived distress was also assessed on a 10-point visual analogue scale (0-no distress at all; 10-maximum perceived distress). 
The last section, "Self-care Behaviors," included questions related to eating patterns (number of meals a day, eating schedules, type of diet-hypercaloric, hypocaloric, balanced), exercise (type of exercise-aerobic, anaerobic; frequency; and duration), sleep (number of hours, restful/unrestful sleep), and engagement in social activities outside the workplace. Statistical Analyses Descriptive information was determined by frequencies and percentages for categorical variables and means and standard deviations (S.D.) for continuous variables. The results of the four sections in the survey answered by psychiatry trainees and psychiatrists were compared using chi-square tests (χ2) for categorical variables, and independent sample Student's ttests for continuous variables. To determine the effect sizes of the comparative analyses, Cramer's V for chi-square tests and Cohen's d for t-tests were carried out, with the results being interpreted as small (0.2-0.3), medium (0.4-0.7), and large (>0.8). Demographic features (sex, age, marital status, children), professional activities (clinical consultations in the government and the private sectors, hours spent on these activities, maximum number of working hours per day), and adversities (patients with suicidal ideation, patients who committed suicide, lawsuits, attacks, and perceived discrimination), presence of any physical health problem, perceived distress, and self-care behaviors (meals per day, caloric consumption, eating schedules, exercise, hours of sleep, restful sleep, social activities) were entered as possible predictors of each of the three most frequent self-reported mental health problems (major depression, anxiety disorders, and burnout) in multiple logistic regression analyses with the backward stepwise modeling approach. The Hosmer and Lemeshow test was used to determine goodness of fit. The most calibrated final models were reported, one for psychiatry trainees and another for psychiatrists, which included the variables that remained significant after the backward stepwise process. All analyses were performed using the SPSS version 21 for Windows PC and the alpha value for tests was set at p<0.05. Demographic Features A total of 330 (48.2%) psychiatry trainees and 355 (51.8%) psychiatrists from 29 of 32 states in Mexico completed the survey. Considering the last report [19] on the number of psychiatry trainees and psychiatrists in our country, 45.8% (n=720) of psychiatry trainees and only 7.1% (n=4999) of psychiatrists answered the survey. As expected, psychiatry trainees were younger (28.4 years old, S.D.=2.1) than psychiatrists (42.1 years old, S.D.=11.3; p<0.001). A higher percentage of psychiatry trainees were in the first year of the psychiatry residency (36.1%, n=119), with 28.8% (n=95) in the second, 29.4% (n=97) in the third, and the remaining 5.8% (n=19) in the fourth year of residency. A similar proportion of men and women was found between groups and most of the psychiatry trainees were single (93%, n=307) with no children (95.8%, n=316) unlike psychiatrists (see Table 1). Professional Activities and Adversities Specific variables related to professional activities between groups are shown in Table 1. Clinical consultations in the government sector were more frequent, with more hours being spent on consultation by psychiatry trainees than by psychiatrists (p<0.001), while the latter reported more private clinical consultation and more hours spent on this activity (p<0.001). 
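As a rough illustration of the modelling strategy described in the Statistical Analyses subsection above (a separate multiple logistic regression per outcome, pruned by backward stepwise elimination), the Python sketch below fits one such model with statsmodels on simulated, hypothetical predictors; the variable names, data and the simple p-value-based elimination loop are assumptions for demonstration, not the survey dataset or the exact SPSS procedure used.

```python
# Simplified, hypothetical sketch of one outcome model (e.g., self-reported
# major depression) with manual backward elimination; not the study's data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "female": rng.integers(0, 2, n),
    "physical_problem": rng.integers(0, 2, n),
    "restful_sleep": rng.integers(0, 2, n),
    "distress": rng.integers(0, 11, n),  # 0-10 visual analogue scale
})
# Simulate a binary outcome loosely shaped by the predictors
logit = -1.0 + 0.8*df["female"] + 1.0*df["physical_problem"] - 0.9*df["restful_sleep"] + 0.15*df["distress"]
df["depression"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

predictors = ["female", "physical_problem", "restful_sleep", "distress"]
while True:
    model = sm.Logit(df["depression"], sm.add_constant(df[predictors])).fit(disp=False)
    worst = model.pvalues.drop("const").idxmax()
    if model.pvalues[worst] <= 0.05 or len(predictors) == 1:
        break
    predictors.remove(worst)  # backward step: drop the least significant predictor

print(np.exp(model.params))  # odds ratios for the retained predictors
```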
The maximum number of hours worked per day was higher in psychiatrists (13 h per day) than in psychiatry trainees (12 h, p<0.001). Professional adversities from the start of the psychiatry residency were reported by most participants (90.2%, n=618): over 90% of psychiatrists and nearly 85% of psychiatry trainees (p<0.001). The most common adversity reported was having patients with severe suicidal ideation, attacks, being the attending physician of a patient who had committed suicide, and lawsuits. Most adversities were more frequently reported by psychiatrists. However, when assessing the type of attack and assailant, differences arose between groups: psychological and physical attacks were more common among psychiatry trainees as was being attacked by a patient, patient's relative, or professional colleague. Perceived discrimination was also higher among psychiatry trainees than among psychiatrists (Table 1). Physical and Mental Health Problems The presence of a physical health problem was more frequently reported in psychiatrists (32.4%, n=115) than in psychiatry trainees (13.9%, n=46: p<0.001). Endocrine diseases (such as diabetes, hypothyroidism, and obesity) were the leading physical problems in both groups. In the group of psychiatrists, this was followed by cardiovascular diseases (such as hypertension and dysautonomia), whereas respiratory (asthma), gastrointestinal (gastritis and colitis), and musculoskeletal (such as hernias and lower back pain) diseases ranked second among psychiatry trainees (Table 2). Over half the participants reported having a mental health problem (62.8% of all participants, n=430; 59.4% of psychiatry trainees, n=196 and 65.9% of psychiatrists, n=234) during their professional careers (see Table 2). Major depression, anxiety disorders, and burnout were the most common mental health problems reported in both groups, with a higher proportion of anxiety disorders being found in psychiatry trainees (76.0%, n=149 vs. 62.4%, n=146 of those who reported mental health problems; p=0.002). A similar percentage of participants received pharmacological and/or psychotherapeutic treatment for these problems. As shown in Table 2, pharmacological treatment was self-prescribed by over a third of the participants, with no differences being reported between psychiatry trainees and psychiatrists. Perceived distress was higher in psychiatry trainees. Self-care Behaviors Different patterns of self-care behaviors were found between psychiatry trainees and psychiatrists (see Table 3). The former reported more hypercaloric (47.3%, n=156 vs. 38.9%, n=138) or hypocaloric (6.1%, n=20 vs. 2.8%, n=10; p=0.003) meals and less established eating schedules (51.5% n=170 vs. 40.0%, n=142; p=0.002) while the number of meals per day, three on average, was similar in both groups. Over half the participants in both groups reported exercising regularly. More psychiatry trainees engaged in mixed (aerobic and anaerobic) exercise (41.3%, n=71 vs. 16.9, n=33) while psychiatrists reported more aerobic exercise (76.4%, n=149, vs. 51.7%, n=89; p<0.001). Both groups exercised an average of three days per week with a mean duration of an hour and a half (Table 3). Prediction of Self-reported Major Depression, Anxiety Disorders, and Burnout in Psychiatry Trainees and Psychiatrists Demographic features, professional activities and adversities, physical health problems, perceived distress, and self-care behaviors affected psychiatry trainees and psychiatrists differently (see Table 4). 
As can be seen, two of the main risk factors for major depression, anxiety disorders, and burnout in both groups were having any physical health problem and not having restful sleep (except for anxiety disorders in psychiatrists), which can triple the risk of their occurrence. Women were at a higher risk of major depression in both groups and at a higher risk of anxiety disorders among psychiatry trainees. Clinical consultations in the government sector and the number of hours spent on this activity affected the groups differently, with consultations in the government sector conferring a higher risk for the three assessed mental health problems in psychiatry trainees, while clinical consultations in the private sector increased the risk of anxiety disorders in psychiatrists and of burnout in psychiatry trainees. Moreover, a greater number of working hours increased the risk of major depression in both groups and of anxiety disorders in psychiatrists. Higher perceived distress mainly affected psychiatry trainees but was also a risk factor for burnout in psychiatrists. Psychiatrists with high perceived discrimination were also at a higher risk of major depression. Eating patterns, particularly the consumption of hypo- or hypercaloric meals, were also a risk factor for depression, anxiety disorders, and burnout in psychiatrists.

Discussion
Our results indicate that a significant number of psychiatrists and psychiatry trainees have been affected by mental health problems during their academic training and professional lifespan. Anxiety disorders were reported by more than three quarters of the psychiatry trainees who answered the survey, while major depression was reported by over 65% of the sample and burnout by nearly 50%. These results are consistent with previous studies [24][25][26]. Even though a high proportion of psychiatrists and psychiatry trainees had received treatment (pharmacological and/or psychotherapeutic) for the mental health problem they experienced, approximately 40% of those under pharmacological treatment had self-prescribed this treatment. The rate of self-prescription among physicians in other studies varies dramatically, from 7.6% in first-year specialists in training in one study [27] to over 60% in non-consultant hospital doctors [28]. As these authors note, many factors are likely to influence self-prescription, which they show to be closely related to the perceived need to work while sick, a phenomenon that is closely linked to working conditions. The authors of [27] point out that the reduction observed in their study in comparison with previous national studies in the USA may be influenced by the change in the number of working hours. Moreover, in the specific area of mental disease, stigma is another key factor in self-prescription [28]. Self-stigma and fear of being stigmatized by colleagues could explain this phenomenon. Physicians prefer to self-prescribe rather than be exposed to discrimination, isolation, lack of compassion and understanding, or being deemed incompetent, and are concerned about jeopardizing their status or having their privacy and autonomy violated [29]. Despite ethical and practical concerns, self-prescription is particularly worrisome in our sample of psychiatry trainees since most of them (65%) were only just beginning their specialization, meaning that their diagnosis and treatment choices are questionable, with a higher probability of adverse events related to treatment and a higher probability of treatment inefficacy. 
Psychiatrists' treatment selection may be based on the profile of the side effects of the medication of choice and be inconsistent with symptom severity. Furthermore, our study did not include the issue of which medications were used to treat this condition, leaving open the possibility of treatment based solely on symptomatic relief (i.e., with benzodiazepines or stimulants rather than SSRIs). In addition, medical knowledge makes doctors prone to oscillating between panic and denial when experiencing symptoms [30], leading to a clear necessity of encouraging treatment and discouraging self-prescription, perhaps through the implementation of mandatory routine medical consultations alongside restrictions for self-prescription of sedatives and stimulants. In keeping with gender-related vulnerability to mental illness [31], women are at a higher risk for major depression and anxiety disorders [13,32], particularly psychiatry trainees, as age appears to be a protective factor for anxiety disorders in psychiatrists. Professional activities and adversities impact differently on psychiatry trainees and psychiatrists. Having clinical consultations in the government sector was an important risk factor for major depression and burnout in psychiatry trainees. Training is stressful and the experience of high levels of responsibility combined with the lack of professional experience (especially since most trainees were in their first and second years of training) [33] may increase distress and lead to increased risks of mental health problems, as borne out by our results showing the impact of perceived distress on this group. A certain amount of discomfort is to be expected as part of all learning experiences. However, the increased risk of mental health problems may indicate that particular attention should be paid to how students are coping with the training experience as a psychiatric specialist and determining whether additional support is needed. Having patients with severe suicidal ideation was the most common professional adversity reported by psychiatrists and psychiatry trainees and conferred a higher risk for having an anxiety disorder in the latter. Dealing with patients who express suicidal intention seems to be the greatest difficulty psychiatrists can face in their profession [34,35]. It is a particularly difficult stressor to cope with [36], with reports of severe stress levels (comparable to the loss of a parent) in treating therapists, a substantial proportion of whom develop posttraumatic symptoms when a patient under their care commits suicide [37]. A sense of competence and confirmation of professional skills (when faced with this adversity) can only come from experience and training [35]. Accordingly, close supervision by a more experienced or attending psychiatrist working with established institutional protocols for providing additional resources to trainees to manage suicidal patients and reduce anxiety is required, together with help for them to deal with guilt and genuinely consider whether mistakes were made [35]. The high discrimination reported by trainees (mean score=35.6 vs. 21.3 of psychiatrists) and attacks by colleagues (51.6%) evidence the need for assessing how this supervision takes place, as it does not appear to be providing the constructive feedback one would expect in teaching facilities. Psychiatrists show better measures of self-care than psychiatrists in training. 
Doctors often suggest a balanced diet, moderate exercise, and 8 h of sleep a night to their patients yet neglect to follow their own advice. Physicians also sleep less than they might need to on a regular basis [38]. Contrary to other studies, in our group of psychiatry trainees, sleeping less than 5 h was not a risk factor for anxiety and burnout symptoms. This phenomenon is probably the result of medical culture, where the dominant idea in the medical profession is that physicians are never sick and, if they are, they must work silently through their illness and put patient care above everything else. There is a definite need to continue to promote self-care in the medical profession and to debunk the myth of the infallible physician. The limitations of our study come from three main sources: self-reporting rather than active evaluation and detection of these mental health problems, lack of randomization of the sample, and its retrospective nature. Self-reporting of these problems should be taken with caution as it is likely to include other problems that may not meet clearly defined diagnostic criteria. For example, self-reported major depression may include depression or dysthymia [39], while burnout may be indistinguishable from depressive or anxious symptoms [40]. Coupled with the retrospective design of the study, this makes it impossible to determine how accurate these diagnoses are or whether they are currently present. Moreover, in the self-diagnosis process, qualitative studies have shown that physicians tend to under-or over-react to their symptoms, switching from diagnosing diseases with the worst prognosis to ruling out a disorder altogether [30]. Medical culture may encourage both under-reporting and underrecognition of illness and burnout. Although the anonymous nature of the survey may encourage more accurate selfreporting, the extent of under-recognition is impossible to determine given the design of this study. Nevertheless, since the study subjects are experts in the field of mental health, these self-reports give cause for concern and should be carefully considered. Lack of randomization of the sample could lead to a bias in the interpretation of our results. Even though most of the psychiatrists and psychiatry trainees come from the three main cities where these professionals are concentrated in the country (Mexico City 66.6%; Jalisco 10.5%; and Nuevo León 2.8%) [19], we cannot rule out the possibility that those who answered the survey were those with clearly identified mental health problems. Also, we had no participants from three states in this survey (Hidalgo, Guerrero, and Tlaxcala). Compared to the remaining states, these have very few psychiatrists [19] and rather than cultural or economic reasons, we think possible participants did not receive the link for the survey or did not want to participate. Accordingly, our data should be taken with caution and cannot be generalized to the universe of psychiatrists and psychiatry trainees in our country. The retrospective nature of the study limits the study of certain variables as risk factors, such as lack of restful sleep, which could be both a risk factor or a current manifestation of depression, anxiety, or burnout. Future studies should evaluate current symptoms to address this concern. 
The medical world must evolve towards the recognition that physicians are only human to cultivate integrity, self-reflection, and the ability to admit weaknesses and mistakes while striving for continuous improvement and learning. In addition, physicians with healthy habits are more likely to provide appropriate preventive care and counseling to their patients. Encouraging psychiatrists to take better care of themselves is just one piece of the puzzle, which must be accompanied by tangible organizational avenues to do so and the creation of a culture that truly promotes self-care. We must continue to engage in honest dialogue to raise awareness, while enacting policies that regulate working hours, workload, and work environment, and promote proper work-life balance (for example, daycare support), and ongoing screening of personnel in training followed by a flowchart for both prevention and care. In turn, we need to take time to restructure the hierarchical medical culture permeated by the normalization of violent behaviors disguised as necessary strategies for teachinglearning medicine. Redefining the idea of sacrifice and the overevaluation of academic and professional achievements over self-care is imperative; in other words, shifting the paradigm of forming the "medical character" with an irreversible loss of physical and mental health is mandatory. We believe these goals might be attained through medical wellness programs, started from medical school but maintained in clinical settings, focusing on de-stigmatizing physical and mental illness in doctors, promoting help-seeking behaviors, and addressing the risks of self-diagnosis and self-medication. The steps we take today will shape the future of medicine and psychiatry for years to come.
Within- and between-Session Reliability of the Spider Drill Test to Assess Change of Direction Speed in Youth Tennis Athletes

Agility or Change of Direction Speed (CODS) is a critical physical attribute in a sport such as tennis, which is characterised by frequent and multiple changes of direction. Recently, a CODS test called the 'spider drill' has been used to assess tennis athletes' ability to change direction. To the authors' knowledge, no study has solely assessed its reliability and compared this with other commonly-used CODS tests; thus, this was the aim of the study. Ten nationally ranked youth tennis athletes (age: 15.1 ± 2.6 years; mass: 66.4 ± 17.2 kg; height: 163.0 ± 16.2 cm) completed three trials of the spider drill, modified t-test and pro-agility test on two separate testing occasions. All CODS tests had low typical percentage error, both within sessions (CV = 1.8-4.1%) and between sessions (CV = 1.2-3.7%). The SEM was also consistent within tests, both within and between testing sessions. Within-session test-retest consistency illustrates strong reliability for the spider drill (ICC = 0.93, 0.95) and the modified t-test (ICC = 0.79, 0.83); however, for the pro-agility test, session 2 fell outside of the accepted threshold (ICC = 0.88, 0.69). These trends were similar when assessing between-session consistency, with both the spider drill and modified t-test providing high levels of reliability (ICC = 0.95 and 0.97, respectively). However, the pro-agility test fell outside of the accepted threshold (ICC = 0.66), with wide-ranging 95% confidence intervals (95% CI: 0.11-0.9). These results suggest that the spider drill and modified t-test are both reliable tests when measuring CODS within youth tennis athletes. Strength and conditioning practitioners could consider changes in excess of ± 1.1% as meaningful (based on the SDD) when assessing CODS through the spider drill or modified t-test within youth tennis athletes.

Introduction
Agility has been identified as key to athletic success within numerous intermittent sports [1][2][3][4][5], with tennis being a prime example [6][7][8]. Within typical tennis match play, the literature reports the average point length to be less than 10 seconds, with an average of 4 changes of direction per point [9,10]. Proficient movement in multiple directions, especially in the lateral plane of motion, is required [7,11], and this is supported by time motion analysis data which has shown that elite players average four changes in direction per point and approximately 1000 changes in direction per match [7]. Additionally, Roetert et al. [8] concluded that agility was the best physical indicator of selection ranking amongst elite players. Considering agility is an integral component of athletic success in tennis, Strength and Conditioning (S&C) coaches need to be able to successfully assess a player's performance for this physical attribute. The importance of a field-based assessment throughout the year has proven beneficial to an athlete's development, as coaches can then accurately monitor an athlete's progress and highlight strengths and weaknesses in order to tailor training programmes accordingly [10][11][12]. 
Despite its importance to the tennis population, the definition of agility is often the subject of debate amongst the sport science community [13-16]. Within the literature, agility has been referred to by several definitions [13,14,16-20]. The most recent definition of agility describes elements of decision making
and perceptual motor skills and has therefore caused many to suggest that some field-based assessments used by S&C coaches are inappropriate due to their lack of response to a stimulus [13,14,21]. However, it should be noted that recreating the 'decision-making' aspect of agility may in fact be impractical. A review by Kovacs [7] highlights how every point in tennis is vastly different in nature, rendering the cognitive and perceptual demands inconsistent. As a result of this, Change of Direction Speed (CODS) tests are considered favourable [15]. Two commonly used CODS assessments are the modified t-test and pro-agility [11,22,23]. Both are argued to be suitable and practical methods for the assessment of CODS performance of tennis players due to their emphasis on both linear and lateral movement patterning, acceleration and subsequent deceleration, which is considered highly important for tennis performance [6,7,11]. Equally, both assessments cover a total distance of 20 m, which may be pertinent to tennis as distances covered tend not to exceed 12 m [23]. However, though these two CODS assessments are heavily documented within the literature to produce reliable and sufficient results [15,22,23], to the authors' knowledge, they have not been examined specifically amongst the tennis population. Additionally, a less well-known CODS assessment is gaining popularity within the United States Tennis Association (USTA): the spider drill [8,12,24,25]. Though Eriksson, et al. [12] acknowledge that the movement patterns observed within this drill are very much like the movement patterns seen within tennis match play, at present there appears to be limited recognition of the application of the spider drill within the literature, particularly with respect to its reliability within youth tennis athletes [8,24,25].

As CODS is considered key to athletic success among tennis players [6-8,11], it is in the interest of S&C coaches to select the most appropriate field-based assessment that can mimic the movement patterns of the sport and monitor the effects of training amongst athletes year-round [11,12,21]. Therefore, the primary aim of this study is to conduct a direct evaluation of all three previously mentioned CODS assessments (spider drill, modified t-test and pro-agility) with respect to their test-retest reliability within elite youth tennis athletes, and to identify the Smallest Detectable Difference (SDD) between testing sessions.

Approach to the problem

In order to fulfil the requirements of this study, nationally ranked tennis athletes were required to attend three separate testing sessions. Session 1 was a familiarisation session, whereby participants were subjected to the experimental conditions. This enabled all participants to practice each of the CODS assessments 5 times, with a full verbal explanation and visual demonstration provided. Sessions 2 and 3 were data collection sessions, whereby all subjects completed three trials for each of the CODS tests.
Subjects

Ten elite youth tennis athletes (age: 15.1 ± 2.6 years; mass: 66.4 ± 17.2 kg; height: 163.0 ± 16.2 cm) from a high-performance tennis academy volunteered to participate in this study. All participants were nationally ranked (within the top 700 players for their respective age group in the UK). Participants were excluded from the study if they did not have a national ranking and/or were suffering from an injury at the time of testing. Prior to any performance testing, medical screening in the form of a PAR-Q and consent form were completed. Where participants were under 18 years of age, a consent form and information sheet were provided to their parents/guardians for them to sign on behalf of the individual. Ethical approval was granted by the London Sport Institute, Middlesex University.

Procedures

All participants were required to complete a total of three trials on each of the testing days for all three CODS assessments, to enable the calculation of both within- and between-session reliability. All trials were completed on an indoor tennis court located at a high-performance tennis academy, with the timing of testing controlled for and thus conducted at the same time of day for each individual respectively. Total time was recorded using electronic timing gates (Brower Timing System, Salt Lake City, Utah, USA), and was recorded to the nearest hundredth of a second. In line with previous CODS research [15], athletes were given a minimum rest period of three minutes between each trial, and three minutes between each test. If/when athletes breached the methodological guidelines for each test (e.g. by failing to reach the line for a COD step), the trial was voided, and an additional trial was conducted following three minutes of rest. Athletes were provided with a standardised warm-up progressive in specificity to the demands of the CODS tests prior to each testing session, inclusive of warm-up trials for each of these tests respectively. This was both to minimise the risk of injury and to negate any potential warm-up effect/learning effect within trials throughout data collection. The warm-up procedure followed a similar protocol to the RAMP methodology (i.e. raise, activate, mobilise and potentiate), which has been suggested to be an effective protocol to follow prior to physical activity [26].

Spider drill: Considering the spider drill is less well known in the academic literature, a schematic has been provided outlining test protocols (Figure 1). Timing gates were set up at a height of 1 m for all participants, 3 m behind the baseline, so as to avoid any collisions when returning to the centre point after each sprint. On command, participants were instructed to break the beam of the timing gates, officially starting the assessment. Participants were required to complete all sprints as outlined in Figure 1, starting with the sprint to the right first (number 1) and then working in an anti-clockwise direction thereafter. Sprint numbers 1 and 5 represent a distance of 4.11 m, whilst numbers 2, 3 and 4 each measure 5.49 m. Each sprint required athletes to return to the centre point on the baseline before starting the next. Once the final sprint was completed (returning from 'sprint 5' as viewed in Figure 1), athletes were required to turn right 90° to complete the three metre sprint through the timing gates, completing the test.

Modified t-test: Test protocols were conducted in line with previously validated procedures [23], with timing gates set at a height of 1 m. Due to the distances typically covered by tennis players (somewhat dictated by court dimensions), the modified version of this test was utilised. Athletes were asked to cover a total distance of 20 m forming a "T" shape. A single cone was set up at 5 m from the timing gates, and then two more cones either side of the first cone at 2.5 m. On command, participants sprinted forward, through the timing gates, and touched the middle cone with their right hand. Participants then side shuffled 2.5 m to their left (touching the cone with their left hand) and then proceeded to side shuffle 5 m to the far right cone (this time touching down with their right hand). A side shuffle of 2.5 m left was then performed back to the centre cone (touching down with their left hand) before back-pedalling 5 m through the timing gates to complete the test.

Pro-agility test: Athletes started 0.3 m just to the left of the timing gates (set at a height of 1 m) so as not to risk 'breaking the beam' prior to the commencement of the test. Upon instruction, athletes turned right, cutting the beam and starting the test. This involved a 5-yard sprint to the right, whereby all athletes were instructed to touch the cone with their right hand. An immediate 180° turn was conducted and athletes were required to sprint 10 yards to the opposing cone, where they were required to touch down with their left hand. A second 180° turn was required before sprinting back a further 5 yards, finishing the test as they passed through the timing gates for the third and final time.
Statistical Analysis

All data analysis was completed using SPSS (V18.0; SPSS, Inc., Chicago, Illinois, USA) and Microsoft Excel™. Means and Standard Deviations (SD) were calculated from each individual's fastest sprint time for each of the CODS tests, for each of the testing days. Normality of the data was established through a Shapiro-Wilk test (due to < 50 participants), and both within- and between-session reliability were determined via three separate methods: a Coefficient of Variation (CV), a Standard Error of Measurement (SEM), and a two-way random Intraclass Correlation Coefficient (ICC) with absolute agreement. This modality of ICC was used due to its capacity to detect absolute agreement within both rank and score. To coincide with previous research, accepted CV values were set at < 10% [27], and ICC values reported above 0.75 were accepted as reliable [23,27,28]. All measures of within-session reliability were determined through the three trials for each of the CODS tests. All measures of between-session reliability were determined through use of the fastest trial from the pooled trials from each given testing day. The SDD was subsequently calculated from the SEM to detect random error scores between testing sessions, and thus to detect meaningful differences. Initially, the SEM was calculated from the SD of scores and the ICC (SEM = SD × √(1 − ICC)); subsequently, the SDD was calculated as SDD = 1.96 × √2 × SEM [29].
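To make the reliability workflow concrete, the following is a minimal sketch of how the CV, SEM and SDD described above could be computed; it is not the authors' actual analysis (which was run in SPSS and Excel), and the trial data, the pre-computed ICC passed in, and the function name are illustrative assumptions.

```python
import numpy as np

def within_session_reliability(trials, icc):
    """Illustrative reliability summary for one CODS test.

    trials : array of shape (n_athletes, n_trials) of sprint times in seconds
    icc    : two-way random, absolute-agreement ICC, assumed to be computed
             elsewhere (e.g. with pingouin.intraclass_corr)
    """
    trials = np.asarray(trials, dtype=float)
    # Typical percentage error: mean of each athlete's CV across their trials
    cv_pct = np.mean(np.std(trials, axis=1, ddof=1) / np.mean(trials, axis=1)) * 100
    # SEM from the between-athlete SD of fastest times and the ICC
    best = trials.min(axis=1)
    sem = np.std(best, ddof=1) * np.sqrt(1 - icc)
    # Smallest detectable difference at the 95% level
    sdd = 1.96 * np.sqrt(2) * sem
    return {"CV%": cv_pct, "SEM_s": sem, "SDD_s": sdd,
            "SDD%": sdd / best.mean() * 100}

# Example with made-up spider drill times (seconds) for three athletes
times = [[15.21, 15.10, 15.35],
         [16.02, 15.88, 15.95],
         [14.77, 14.90, 14.81]]
print(within_session_reliability(times, icc=0.93))
```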
Results

The Shapiro-Wilk test concluded all data to be normally distributed (p > 0.05). Mean, SD, CV, SEM, and ICC values reporting within-session reliability are displayed in Table 1, and between-session reliability is reported within Table 2. All CV values (both within- and between-testing sessions) are below the accepted threshold of 10% (1.21-4.1%), and SEM values are consistent within tests both within- and between-testing sessions. All values for the SDD are provided within Table 2, with data reported both in absolute terms (seconds) and as a percentage (%) of their best time.

Discussion

To the authors' knowledge, a small sample of literature to date has assessed the reliability and validity of both the modified t-test [23] and pro-agility [15,22] CODS tests, which are theorised to be applicable CODS assessments for youth tennis athletes [11]. However, this is the first study to evaluate the reliability (both within- and between-testing sessions) of the spider drill CODS assessment in comparison to other CODS assessments specifically within elite youth tennis athletes. Overall, our results indicate that the spider drill is a reliable measure of CODS within youth tennis athletes, and can be used with confidence to detect meaningful differences in CODS.

All CODS assessments achieved low typical percentage error, both within-session (CV = 1.8-4.1%) and between-session (CV = 1.2-3.7%). In addition, SEM values were recorded in order to identify which CODS test had the smallest margin of error, as this can assist in detecting true changes when these are greater than the error in the test [15]. Results identified these to be consistent within all tests both within- and between-testing sessions. However, when this information is coupled with test-retest consistency data through the ICC, the pro-agility appears to be less reliable than initially anticipated, with both within-session (e.g. testing session 2: ICC = 0.69, 95% CI: 0.36-0.90) and between-session (ICC = 0.66, 95% CI: 0.11-0.90) reliability falling outside of the proposed threshold. These findings are contradictory to those of Stewart, et al. [15], who found high intraday reliability (ICC = 0.9, 95% CI: 0.84-0.94) for the pro-agility within a sample of physical education students of similar ages to those of the present study (16.7 ± 0.6 years), and to those of Mayhew, et al. [22], who concluded the pro-agility test to hold strong reliability (ICC = 0.8).
While measures of consistency are important, Stewart, et al. [15] alluded to research which may suggest that within-subject error (i.e. CV and SEM) may be a better measure of reliability, given the importance of detecting error on an individual level to ensure accurate monitoring of performance. The greatest SEM values were, however, also identified for the pro-agility test both within- and between-sessions (SEM = 0.10-0.11 and 0.13, respectively), which is somewhat surprising given that the overall time to complete the test was the shortest of the three.

Previous studies examining the reliability and validity of CODS assessments have employed participants of similar age ranges. Within this study, though all ten participants held a national ranking, the variation in age range can theoretically suggest each participant has had a different level of exposure or experience in terms of playing career, therefore suggesting some may be more accustomed to the procedures of these assessments than others. Additionally, sample size may have been a limiting factor when comprehending the lower reliability scores within the pro-agility test relative to those previously suggested within the literature. Whilst this can only be deemed speculative in nature, it should be noted that all data were identified to be normally distributed, thus allowing for parametric data analysis. As such, the results of this study can be considered applicable to a range of ages in youth tennis athletes; however, future studies should look to utilise an athletic population from one age bracket in order to assume the likelihood of similar experience and playing exposure. This will eradicate any issues regarding inter-age group reliability of such tests and will allow for a more holistic understanding both within- and between-age groups, allowing practitioners to be more precise when monitoring changes in scores within these CODS tests.

In conclusion, the present study has highlighted that both the spider drill and modified t-test are reliable tests to assess youth tennis athletes' CODS ability. In order to be able to generalise these findings outside of youth tennis athletes, a wider population sample would be required. However, the aim of this investigation was solely to provide reliability statistics to support the use of the spider drill as a means of reliably assessing CODS within youth tennis athletes, and the findings support this notion. With this in mind, the use of the spider drill and modified t-test would be recommended based on their reliability both within- and between-testing sessions, with high test sensitivity to detect change (as illustrated through the SDD). In addition, results from the present study highlight that the pro-agility test may not be the most appropriate CODS test to use for elite youth tennis athletes. Furthermore, speculation by Roberts, et al. [11] suggests that more than one CODS test should be considered when monitoring changes in CODS, attributing this to the vast complexity of movement patterns completed throughout tennis match play. Given how varied the movement patterns within the spider drill are when compared to the modified t-test, but also the notable differences in total time taken to complete these tests, it may be suggested that if practitioners are to use more than one CODS test, test selection should be dictated by the demands of the test, and thus the movements that occur. With the spider drill demanding 180° turns, and the modified t-test incorporating lateral movements, it could be suggested that both of these tests hold strong ecological validity, thus supporting their use within youth tennis athletes. Future research, however, should look to explore this concept further.

Practical Application

The results from this study highlight that the spider drill and modified t-test are both reliable measures of CODS within youth tennis athletes, and can be used with confidence to detect meaningful differences in CODS. Strength and conditioning coaches and sport science practitioners could consider changes in excess of ±1.1% as meaningful when assessing CODS through the aforementioned CODS tests within youth tennis athletes, given that athletes are familiarised with the demands of such tests beforehand.
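As a usage sketch of the ±1.1% guideline above, a practitioner could flag whether a retest represents a real change as follows; the threshold default and the function name are illustrative and not part of the original study.

```python
def meaningful_change(baseline_s, retest_s, sdd_pct=1.1):
    """Flag whether a change in CODS time exceeds the smallest detectable difference.

    baseline_s, retest_s : fastest times (seconds) from two testing occasions
    sdd_pct              : SDD expressed as a percentage of the baseline time
                           (1.1% is the value suggested here for the spider drill
                           and modified t-test)
    """
    change_pct = (retest_s - baseline_s) / baseline_s * 100
    return change_pct, abs(change_pct) > sdd_pct

change, is_real = meaningful_change(15.21, 14.95)
print(f"change = {change:.2f}%, meaningful = {is_real}")
```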
Figure 1: Schematic of the Spider Drill.

Table 1: Within-session reliability for each of the three CODS assessments. Mean values are a representation of the group average from each individual's fastest sprint time. s = seconds; SD = Standard Deviation; CV = Coefficient of Variation; SEM = Standard Error of Measurement; ICC = Intraclass Correlation Coefficient; CI = Confidence Interval.

Table 2: Between-session reliability for each of the three CODS assessments. Mean values are a representation of the group average from each individual's fastest sprint time. s = seconds; SD = Standard Deviation; CV = Coefficient of Variation; SEM = Standard Error of Measurement; ICC = Intraclass Correlation Coefficient; CI = Confidence Interval; SDD = Smallest Detectable Difference.
Decoding Spontaneous Emotional States in the Human Brain Pattern classification of human brain activity provides unique insight into the neural underpinnings of diverse mental states. These multivariate tools have recently been used within the field of affective neuroscience to classify distributed patterns of brain activation evoked during emotion induction procedures. Here we assess whether neural models developed to discriminate among distinct emotion categories exhibit predictive validity in the absence of exteroceptive emotional stimulation. In two experiments, we show that spontaneous fluctuations in human resting-state brain activity can be decoded into categories of experience delineating unique emotional states that exhibit spatiotemporal coherence, covary with individual differences in mood and personality traits, and predict on-line, self-reported feelings. These findings validate objective, brain-based models of emotion and show how emotional states dynamically emerge from the activity of separable neural systems. Introduction Functional neuroimaging offers unique insight into how mental representations are encoded in brain activity [1,2]. Seminal cognitive neuroscience studies demonstrated that distributed patterns of cortical activity measured with functional magnetic resonance imaging (fMRI) contain information capable of differentiating among visual percepts, including object categories [3] and basic visual features [4]. Extending findings from these studies, subsequent work demonstrated that machine learning models trained on stimulus-evoked brain activity, termed "decoding" or "mind-reading" [5], can be used to predict the contents of working memory [6][7][8] and mental imagery [9,10], even during sleep [11]. Thus, pattern recognition approaches can identify defining features of mental processes, even when driven solely on the basis of endogenous brain activity. The approach was further shown to accurately discriminate among multiple cognitive processes (e.g., decision-making, working memory, response inhibition, among others) in independent subjects [12], establishing the efficacy of assessing diverse mental states with fMRI across individuals. Paralleling cognitive studies decoding task-evoked brain activity, multivariate decoding approaches have recently been used to map patterns of neural activity evoked by emotion elicitors onto discrete feeling states [13,14]. However, a key piece of missing evidence is whether categorically distinct emotional brain states occur intrinsically [15,16] in the absence of external eliciting stimuli. If so, then it should be possible to classify the emotional status of a human being based on analysis of spontaneous fluctuations of brain activity during rest. Successful classification would validate multivariate decoding of unconstrained brain activity and provides insight into the nature of emotional brain activity during the resting state. Adapting the logic of other cognitive imaging studies [16,17], we postulate that the presence of spontaneous emotional brain states should be detectable using multivariate models derived from prior investigations of emotion elicitation. We previously developed decoding algorithms to classify stimulus-evoked responses to emotionally evocative cinematic films and instrumental music [13]. 
These neural models (Fig 1) accurately classify patterns of neural activation associated with six different emotions (contentment, amusement, surprise, fear, anger, and sadness) and a neutral control state in independent subjects, generalizing across induction modality. Importantly, these neural biomarkers track the subjective experience of discrete emotions independent of differences in the more general dimensions of valence and arousal [18]. By indexing the extent to which a pattern of neural activation to extrinsic stimuli reflects a specific emotion, these models can be used to test whether intrinsic spatiotemporal patterns of brain activity correspond to stimulus-evoked emotional states. Here, we evaluate whether these neural models of discrete emotions generalize to spontaneous brain activation measured via fMRI in two experiments. The first experiment assesses if model predictions are convergent with individual differences in self-reported mood and emotional traits. Because individual differences are linked to mental health and subjective wellbeing [19-21], this evaluation provides insight into the potential clinical utility of quantifying spontaneous emotional states, as they may be associated with risk factors for mental illness. The second experiment employs an experience sampling procedure to evaluate whether model predictions based on brain activity during periods of rest are congruent with on-line measures of emotional experience. Together, these studies probe how brain-based models of specific emotion categories quantify changes in extemporaneous affect both between and within individuals.

Fig 1. Distributed patterns of brain activity predict the experience of discrete emotions. (A) Parametric maps indicate brain regions in which increased fMRI signal informs the classification of emotional states. See [13] for details of the development and validation of these neural decoding models. (B) Sensitivity of the seven models. Error bars depict 95% confidence intervals. The data underlying this figure can be found in S1 Data.

Classification of Resting-State Brain Activity

We applied the multivariate models of emotional experience to brain activation acquired from young adults during resting-state fMRI (n = 499; Fig 2A). Two consecutive runs of resting-state scans were acquired, spanning a total duration of 8.53 min. Following preprocessing of data, we computed the scalar product of the resting-state signal and emotion category-specific model weights at every time point of data acquisition. This procedure yielded scores that reflect the relative evidence for each of seven emotional states across the full scanning period. A confirmatory analysis revealed that voxels distributed across the whole brain informed this prediction, as opposed to activity in a small number of brain regions (S1 Fig). If emotional brain states occur spontaneously, the frequency of classifications from our decoding models should be more varied than the uniform distribution that would be expected by chance. To test this hypothesis, we sought to identify whether the total time (or absolute frequency) in each state differed across emotion categories. Such an analysis informs the degree to which discrete emotional brain states may spontaneously occur and, by extension, could contribute to the identification of individual differences that map onto the likelihood of experiencing specific spontaneous states.
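To illustrate the scoring and one-versus-all labelling steps described in this section, the following is a minimal sketch assuming preprocessed, standardized resting-state data and a matrix of emotion-model weights; the array names, shapes, and random placeholder values are illustrative assumptions rather than the authors' actual code.

```python
import numpy as np

# Illustrative shapes: T time points, V voxels, K = 7 emotion models
# data    : (T, V) standardized resting-state time series (z-scored per voxel)
# weights : (K, V) PLS regression coefficients, one map per emotion category
rng = np.random.default_rng(0)
data = rng.standard_normal((256, 5000))
weights = rng.standard_normal((7, 5000))
emotions = ["content", "amusement", "surprise", "fear", "anger", "sad", "neutral"]

# Scalar product of each volume with each model gives the relative evidence
scores = data @ weights.T                  # (T, K)

# One-versus-all classification: label each time point by the maximal score
labels = scores.argmax(axis=1)             # (T,)

# Absolute frequency (total number of time points) assigned to each state
counts = np.bincount(labels, minlength=len(emotions))
print(dict(zip(emotions, counts)))
```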
To perform this comparison, we identified the single model with the maximum score at each time point (one-versus-all classification) and summed the number of time points assigned to each category. The frequency of emotional states clearly differed across emotion categories (Fig 2B; see Table 1). Although patterns of neural activation were most often classified as neutral as a whole, it is possible that consistent fluctuations in the time course of emotional states occur against this background. Research on MRI scanner-related anxiety has shown that self-report [22,23] and peripheral physiological [24] measures of anxiety peak at the beginning of scanning, when subjects first enter the scanner bore. This literature predicts that brain states indicative of fear should be most prevalent at the beginning of resting-state runs, and that neutral states should emerge over time, given their overall high prevalence (Fig 2B). To assess gradual changes in the emotional states over time, we performed Friedman tests separately for each emotion category, all of which revealed significant effects of time (see S1 Table). Next, we quantified the direction of these effects using general linear models to predict classifier scores using scan time as an input. We found that the scores for fear decreased over time.

To determine whether emotional states exhibited consistent dynamics over the course of the scanning period, we fit smoothing spline models [25] for each subject and assessed the correlation between each subject and the average time course of other subjects in a cross-validation procedure. This analysis showed that there is substantial moment-to-moment variability in the time course of emotional states across subjects (which cannot simply be explained by scaling differences in the emotion models or resting-state data; see S3 Fig). Consistent with the linear models using time as a predictor, evidence for neutral brain states was most prevalent in the second scanning session, especially during a peak at the beginning of the run, whereas the time course for fear peaked at the beginning of the first run and decreased throughout the scanning session. The model for surprise exhibited a similar time course as neutral states but peaked at the end of the second run. Additionally, this analysis showed that evidence for sad classifications peaked in the middle of the first run and decreased over time. Overall, these time series revealed a gradual change in evidence from negative emotions (fear and sadness in run 1) to non-valenced or bi-valenced emotions (neutral and surprise in run 2).

Fig 2. Emotional states emerge spontaneously during resting-state scans. (A) Procedure for classification of resting-state data. Scores are computed by taking the scalar product of preprocessed data and regression weights from decoding models. (B) Frequency distributions for the classification of all seven emotional states (n = 499). The mean, 25th, and 75th percentiles are indicated by black lines. The solid gray line indicates the number of trials that would occur from random guessing. The data underlying this figure can be found in S1 Data. The raw fMRI resting state data can be obtained from https://www.haririlab.com/projects.

To ensure that our emotion-specific brain states are not proxies for more general resting-state networks thought to subserve other functions, we examined the spatial overlap between our models and those commonly derived by connectivity-based analysis of resting-state fMRI data [26].
On average, we observed little overlap (Jaccard index = 13.1 ± 1.97% [s.d.]; range 10.8%-16.7%) with the seven most prominent networks found in resting-state data, implicating a substantial degree of independence.

To further establish the construct validity of the spontaneous emotional brain states, we reasoned that their incidence should vary with individual differences in self-reported mood and personality traits associated with specific emotions. We assayed depressive mood with the Center for Epidemiologic Studies Depression Scale (CESD) [27] and state anxiety using the State-Trait Anxiety Inventory State Version (STAI-S) [28], instructing participants to indicate how they felt during the resting-state scan itself. Binomial regression models revealed that higher depression scores were associated with increases in the frequency of sadness. Viewing these beta estimates as odds ratios (computed as e^b) reveals how a one-unit increase in self-reported mood is associated with differences in the occurrence of spontaneous emotional states. Applying this approach to CESD scores reveals that individuals with a score of 16 (the cutoff for identifying individuals at risk for depression) have 5.92% increased odds of being in a sad state compared to those with a score of 0. In more practical terms, this corresponds to approximately seven extra minutes a day of exhibiting a brain state that would be classified as sadness. Drawing from the Revised NEO Personality Inventory (NEO-PI-R) [29], we focused personality trait assessment on the specific Neuroticism subfacets of Anxiety, Angry Hostility, and Depression, due to their discriminant validity [30], heritability [31], universality [32], and close theoretical ties to the experience of fear, anger, and sadness. We found that increasing Anxiety scores were associated with more frequent classification of fear (b = 0.003, t 497 = 1.978, P unc = 0.0479, Fig 4B) and fewer classifications of anger (b = −0.004, t 497 = −2.407, P unc = 0.0161). Angry Hostility scores were positively associated with the number of anger classifications (b = 0.0042, t 497 = 2.400, P unc = 0.0164). Depression scores were positively associated with the frequency of fear (b = 0.003, t 497 = 2.058, P unc = 0.0396) and sadness (b = 0.0037, t 497 = 2.546, P unc = 0.0109). These results provide converging evidence across both state and trait markers that individual differences uniquely and differentially bias the spontaneous occurrence of brain states indicative of fear, anger, and sadness.

Concordance with Subjective Experience

Finally, we examined whether the predictions of our decoding models were consistent with self-report of emotional experience during periods of unconstrained rest. We conducted a separate fMRI experiment in which an independent sample of young adult participants (n = 21) performed an experience sampling task in the absence of external stimulation (Fig 5A). Participants were instructed to rest and let their mind wander freely with their eyes open during scanning. Following intervals of rest of at least 30 s, a rating screen appeared during which participants moved a cursor to the location on the screen that best indicated how they currently felt. If spontaneous emotional states are accessible to conscious awareness, then scores should be greater for emotion models congruent with self-report relative to scores for models incongruent with self-report.
Contrasting emotion models in this manner is advantageous from a signal detection standpoint because it minimizes noise by averaging across emotions, as some were reported infrequently or not at all in some subjects (see [33] for an analogous approach to predict the contents of memory retrieval during similarly unconstrained free-recall). To test our hypothesis, we extracted resting-state fMRI data from the 10-s interval preceding each self-report query and applied multivariate models to determine the extent to which evidence for the emotional brain states in this window predicted the participants' conscious emotional experience. Consistent with our hypothesis, we found that scores for models congruent with self-report were positive (0.016 ± 0.0093 [s.e.m.], z = 2.068, P unc = 0.0386; Wilcoxon signed rank test), whereas scores for incongruent models were negative (-0.0048 ± 0.0017 [s.e.m.], z = -3.041, P unc = 0.0024). Classification of individual trials into the seven emotion categories exhibited an overall accuracy of 27.9 ± 2.1% (s.e.m.) of trials, where chance agreement is 21.47% (P unc = 0.001; binomial test). Not only do these results demonstrate that classification models are sensitive to changes in emotional state reported by participants, but also that there is selectivity in their predictions, as negative scores indicate evidence against emotion labels that are incongruent with self-report. Establishing both sensitivity and selectivity is important for the potential use of these brain-based models as diagnostic biomarkers of emotional states. As an additional validation of our decoding models, we examined the correspondence between the prevalence of individual emotional brain states as detected via pattern classification and participant self-report. Classifications based on self-report and multivariate decoding yielded similar frequency distributions (Fig 5C), in which neutral and amusement were the most frequent. We found a positive correlation between the frequency of classifications based on participant ratings and multivariate decoding (r = .3876 ± 0.102 [s.e.m.], t 20 = 2.537, P unc = .0196; one sample t test), further demonstrating a link between patterning of brain states and subjective ratings of emotional experience in the absence of external stimuli or contextual cues. Discussion Converging findings from our experiments provide evidence that brain states associated with distinct emotional experiences emerge during unconstrained rest. Whereas prior work has decoded stimulus-evoked responses to emotional events, our study demonstrates that spontaneous neural activity dynamically fluctuates among multiple emotional states in a reliable manner over time. Observing such coherent, emotion-specific patterns in spontaneous fMRI activation provides evidence to support theories that posit emotions are represented categorically in the coordinated activity of separable neural substrates [34,35]. Validating the neural biomarkers in the absence of external stimulation suggests that they track information of functional significance, and do not merely reflect properties of the stimuli used in their development. It is possible that these classifiers detect the endogenous activity of distributed neural circuits, consistent with recent views that emotions are not represented in modular functional units [36,37]. 
However, the extent to which such activity is the result of innate emotion-dedicated circuitry, a series of cognitive appraisals, or constructive processes shaped by social and environmental factors remains to be determined (for a review of these viewpoints, see [38]). Regardless of the relative influence of such factors, the present findings suggest that the emotion-specific biomarkers track the expression of functionally distinct brain systems, as opposed to idiosyncrasies of the particular machine-learning problem. Our findings complement recent studies demonstrating that a variety of emotion manipulations have lasting effects on resting brain activity [39][40][41]. For instance, one study revealed elevated striatal activity following gratifying outcomes in a decision-making task-an effect that was diminished in individuals with higher depressive tendencies [39]. Because these effects immediately followed emotional stimulation, they could plausibly reflect regulatory processes or lingering effects of mood. The present results, on the other hand, show that resting brain activity transiently fluctuates among multiple emotional states and that these fluctuations vary depending on the emotional status of an individual. Thus, emotional processes unfolding at both long and short time scales likely contribute to spontaneous brain activity. Findings from our resting state experiment stand in contrast to recent work investigating emotion-specific functional connectivity [42]. In this study, whole-brain resting-state functional connectivity was assessed using seeds identified from a meta-analytic summary of emotion research [43]. This latter approach failed to reveal unique patterns of resting-state connectivity for individual emotions but showed that seed regions were commonly correlated with domain-general resting-state networks, such as the salience network [44]. In light of the present results, it is important to consider methodological differences between studies. Seedbased correlation highlights connectivity between brain regions whose time course of activation is maximally similar to the activity of a small number of voxels (which are averaged together to create a single time series), whereas pattern classification identifies combinations of voxels that maximally discriminate among mental states. Because individual voxels sample diverse neural populations [45], it is plausible that seed-based correlation is biased towards identifying networks that have large amplitudes in seeded regions as opposed to exhibiting specificity (e.g., see [46]). Thus, our approach may have greater sensitivity to detect discriminable categorical patterns. Results of the experience sampling study provide external validation of our emotion-specific biomarkers [13]. Consistent with the resting-state study, the overall distribution of emotional states was clearly non-uniform, and classifications of neutral states occurred with high frequency. Beyond these commonalities, the inclusion of behavioral self-report led to differences in emotion-related brain activity. States of contentment and amusement were more frequently predicted during experience sampling compared to resting-state (46.31% versus 23.45%), a finding that was corroborated by higher ratings for these emotions in the self-report data. 
It is possible that this difference in the frequency of positive brain states is the result of a self-presentation bias [47], wherein participants may have employed emotion regulation in order to project a more positive image. Alternatively, it is possible that the self-reporting task requirement elicited more introspection between trials, which contributed to the pattern of altered emotional states [48]. Future work will be necessary to fully characterize how such cognitiveemotional interactions shape the landscape of emotional brain states [36,49]. We found that individual differences in mood states and personality traits are associated with the relative incidence of brain states associated with fear, anger, and sadness. These findings further establish the construct validity of our brain-based models of emotion and link subfacets of Neuroticism to the expression of emotion-specific brain systems. Given their sensitivity to individual differences linked to the symptomology of anxiety and depression, spontaneous emotional brain states may serve as a novel diagnostic tool to determine susceptibility to affective illness or as an outcome measure for clinical interventions aimed at reducing the spontaneous elicitation of specific emotions. This tool may be particularly useful to objectively assess the emotional status of individuals who do not have good insight into their emotions, as in alexithymia, or for those who cannot report on their own feelings, including patients in a vegetative or minimally conscious state. Ethics Statement All participants provided written informed consent in accordance with the National Institutes of Health guidelines as approved by the Duke University IRB. The resting state experiment was approved as part of the Duke Neurogenetics Study (Pro00019095) with an associated database (Pro00014717). The experience sampling project was approved separately (Pro00027404). Neural Biomarkers of Emotional States Classification of emotional states was performed using neural biomarkers that were developed based on blood oxygen level dependent (BOLD) responses to cinematic films and instrumental music [13]. This induction procedure was selected because it reliably elicits emotional responses over a 1 to 2 min period, as opposed to longer-lasting moods. These models were developed to identify neural patterning specific to states of contentment, amusement, surprise, fear, anger, and sadness (in addition to a neutral control state). These particular emotions were modeled to broadly sample both valence and arousal, as selecting common sets of basic emotions (e.g., fear, anger, sadness, disgust, and happiness) undersamples positive emotions. In selecting these particular emotions, we verified that the accuracy of these models tracked the experience of specific emotion categories (average R 2 across emotions = .57) independent of subjective valence and arousal. Thus, the models offer unique insight into the emotional state of individuals and characterize the likelihood they would endorse each of the seven emotion labels, independent of general factors such as valence or arousal. Resting-State Experiment A total of 499 subjects (age = 19.65 ± 1.22 years [mean ± s.d.], 274 women) were included as part of the Duke Neurogenetics Study (DNS), which assesses a wide range of behavioral and biological traits among healthy, young adult university students. For access to this data, see information provided in S1 Text. 
This sample was independent of that used to develop the classification models. This sample size is sufficient to reliably detect (β = .01) a moderate effect (r = .2) with a type-I error rate of .05, which is particularly important when studying individual differences in neural activity. All participants provided informed consent in accordance with Duke University guidelines and were in good general health. The participants were free of the following study exclusions: (1) medical diagnoses of cancer, stroke, head injury with loss of consciousness, untreated migraine headaches, diabetes requiring insulin treatment, chronic kidney or liver disease, or lifetime history of psychotic symptoms; (2) use of psychotropic, glucocorticoid, or hypolipidemic medication; and (3) conditions affecting cerebral blood flow and metabolism (e.g., hypertension). Diagnosis of any current DSM-IV Axis I disorder or select Axis II disorders (antisocial personality disorder and borderline personality disorder), assessed with the electronic Mini International Neuropsychiatric Interview [50] and Structured Clinical Interview for the DSM-IV subtests [51], were not an exclusion, as the DNS seeks to establish broad variability in multiple behavioral phenotypes related to psychopathology. No participants met criteria for a personality disorder, and 72 (14.4%) participants from our final sample met criteria for at least one Axis I disorder (10 Agoraphobia, 33 Alcohol Abuse, 3 Substance Abuse, 25 Past Major Depressive Episode, 5 Social Phobia). However, as noted above, none of the participants were using psychotropic medication during the course of the DNS. Participants were scanned on one of two identical 3 Tesla General Electric MR 750 system with 50-mT/m gradients and an eight channel head coil for parallel imaging (General Electric, Waukesha, Wisconsin, USA). High-resolution 3-dimensional structural images were acquired coplanar with the functional scans Preprocessing of all resting-state fMRI data was conducted using SPM8 (Wellcome Department of Imaging Neuroscience). Images for each subject were slice-time-corrected, realigned to the first volume in the time series to correct for head motion, spatially normalized into a standard stereotactic space (Montreal Neurological Institute template) using a 12-parameter affine model (final resolution of functional images = 2 mm isotropic voxels), and smoothed with a 6 mm FWHM Gaussian filter. Low-frequency noise was attenuated by high-pass filtering with a 0.0078 Hz cutoff. Experience Sampling Experiment A total of 22 subjects (age = 26.04 ± 5.16 years [mean ± s.d.], 11 women) provided informed consent and participated in the study. Data from one participant was excluded from analyses because of excessive head movement (in excess of 1 cm) during scanning. While no statistical test was performed to determine sample size a priori, this sample size is similar to those demonstrating a correspondence between self-report of affect and neural activity [13,52,53]. Participants engaged in an experience sampling task in which they rated their current feelings during unconstrained rest. Participants were instructed to keep their eyes open and let their mind wander freely and that a rating screen [54] would occasionally appear, which they should use to indicate the intensity of the emotion that best describes how they currently feel. This validated assay of emotional self-report consists of 16 emotion words organized radially about the center of the screen. 
Four circles emanate from the center of the screen to each word (similar to a spoke of a wheel), which were used to indicate the intensity of each emotion by moving the cursor about the screen. During four runs of scanning, participants completed 40 trials (10 per run) with an inter-stimulus interval (ISI) of 30 s plus pseudo-random jitter (Poisson distribution, λ = 4 s). Self-report data were transformed from two-dimensional cursor locations to categorical labels. Polygonal masks were created by hand corresponding to each emotion term on the response screen. A circular mask in the center of the screen was created for neutral responses. Because terms in the standard response screen did not perfectly match those in the neural models, the item "relief" was scored as "content," whereas "joy" and "satisfaction" were scored as "amusement." The items "surprise," "fear," "anger," "sadness," and "neutral" were scored as normal. Scanning was performed on a 3 Tesla General Electric MR 750 system with 50-mT/m gradients and an eight channel head coil for parallel imaging (General Electric, Waukesha, Wisconsin, USA). High-resolution images were acquired using a 3D fast SPGR BRAVO pulse sequence (TR = 7.58 ms; TE = 2.936 ms; image matrix = 256 2 ; α = 12°; voxel size = 1 × 1 × 1 mm; 206 contiguous slices) for coregistration with the functional data. These structural images were aligned in the near-axial plane defined by the anterior and posterior commissures. Whole-brain functional images were acquired using a spiral-in pulse sequence with sensitivity encoding along the axial plane (TR = 2000 ms; TE = 30 ms; image matrix = 64 × 64; α = 70°; voxel size = 3.8 × 3.8 × 3.8 mm; 34 contiguous slices). Four initial radiofrequency excitations were performed (and discarded) to achieve steady-state equilibrium. Processing of MR data was performed using SPM8 (Wellcome Department of Imaging Neuroscience). Functional images were slice-time-corrected, spatially realigned to correct for motion artifacts, coregistered to high resolution anatomical scans, and normalized to Montreal Neurologic Institute (MNI) space using high-dimensional warping implemented in the VBM8 toolbox (http://dbm.neuro.uni-jena.de/vbm.html). Low-frequency noise was attenuated by high-pass filtering with a 0.0078 Hz cutoff. Statistical Analysis To rescale data for classification, preprocessed time series were standardized by subtracting their mean and dividing by their standard deviation. Maps of partial least squares (PLS) regression coefficients from stimulus-evoked decoding models [13] were resliced to match the voxel size of functional data. These coefficients are conceptually similar to those in multiple linear regression, only they are computed by identifying a small number of factors (reducing the dimensionality of the problem) that maximize the covariance between patterns of neural activation and emotion labels (for specifics on their computation, see [55]). Classifier scores were computed by taking the scalar product of functional data at each time point and PLS regression coefficients from content, amusement, surprise, fear, anger, sad, and neutral models. Individual time points were assigned categorical labels by identifying the model with the maximal score. In order to determine if relatively focal or diffuse patterns of resting-state activity informed classification, we computed importance maps for each subject (S1 Fig). 
This was accomplished by calculating the voxel-wise product between PLS regression coefficients for each emotion model and the average activity of acquisition time points labeled as the corresponding emotion. We made inference on these maps by conducting a mass-univariate one-sample t test for each of the seven models, thresholding at FDR q = .05. To address the potential overlap of the emotion classification models and canonical restingstate networks of the brain, we computed the maximal Jaccard index for each emotion model and the seven most prominent resting-state networks identified in Yeo et al [26]. This index is computed as the intersection of voxels in the two maps (voxels above threshold in both maps) relative to their union (the number of voxels above threshold in either map). Thresholds for classification models were adaptively matched to equate the proportion of voxels assigned to each resting state network. When conducting inferential tests on classification frequency (count data), non-parametric tests were conducted. To test whether classifications were uniformly distributed across the emotion categories, a Friedman test was performed (n = 499 subjects, k = 7 emotions). Wilcoxon signed-rank tests were performed to test for differences in frequency relative to chance rates (14.3%) in addition to pairwise comparisons between emotion models, and corrected for multiple comparisons based on the false-discovery rate. Because the models have different levels of accuracy when used for seven-way classification [13], we additionally conducted wavelet resampling of classifier scores in the time domain [33,56] over 100 iterations to ensure that differences in the sensitivity of models did not bias results. This procedure involved scrambling the wavelet coefficients (identified using the discrete wavelet transform) of classifier scores (time series in Fig 3) to generate random time series with similar autocorrelation as the original data. Classifications were then made on these surrogate time series, and Friedman tests were performed to test for differences in frequencies across categories. This procedure yielded a null distribution for the chi-square statistic against which the observed statistic on unscrambled data was compared. To test whether classifier scores changed over time, Friedman tests were conducted on the outputs of the emotion models separately (concatenating the time series across runs), as classifier scores were found to violate assumptions of normality. Follow-up tests on the direction of these changes (either as increases or decreases) were conducted using general linear models with one constant regressor and another for linearly increasing time for each subject. Inference on the parameter estimate for changes over time was made using a one-sample t test (498 degrees of freedom). In addition to testing gradual changes over time, smoothing spline models [25] were used to characterize more complex dynamics of emotional states. Because spline models are flexible and may include a different number of parameters for each subject, cross-validation was conducted to assess the coherence of spline fits across subjects. In this procedure, a smoothing spline model was fit for each subject, and its Pearson correlation with the mean fit for all other subjects was computed. The average of resulting correlations accordingly reflects the coherence of nonlinear changes in emotional states across all subjects. 
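As a small illustration of the overlap metric described earlier in this section, a possible computation of the Jaccard index between a thresholded emotion model and a resting-state network mask is sketched below; the masks, the top-10% threshold, and the random placeholder data are invented for illustration.

```python
import numpy as np

def jaccard_index(map_a, map_b):
    """Jaccard index between two binary (thresholded) voxel maps.

    Intersection (voxels above threshold in both maps) divided by the
    union (voxels above threshold in either map).
    """
    a = np.asarray(map_a, dtype=bool).ravel()
    b = np.asarray(map_b, dtype=bool).ravel()
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 0.0

# Toy example: an emotion model thresholded to keep its top 10% of voxels,
# compared against a hypothetical resting-state network mask of the same size
rng = np.random.default_rng(1)
model = rng.standard_normal(10000)
emotion_mask = model > np.quantile(model, 0.9)
network_mask = rng.random(10000) < 0.1
print(f"Jaccard index: {jaccard_index(emotion_mask, network_mask):.3f}")
```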
The influence of individual differences in mood and personality was assessed using generalized linear models with a binomial distribution and a logit link function. Multiple models were constructed, each using a single measure from either the CESD, STAI, or facets from the NEO-PI-R to predict the frequency of classifications for the seven emotion categories (seven models per self-report measure). Inference on parameter estimates (characterizing relationships between individual difference measures and classification frequency) was made using a t distribution with 497 degrees of freedom. To control for multiple comparisons, FDR correction (q = .05) [57,58] was applied for targeted predictions. For individual differences in mood, this procedure included correction for positive associations between the frequency of sad classifications and CESD scores and between fear classification and STAI values (P thresh = .0091). For differences in emotional traits, correction was applied to models predicting the frequency of fear classification on the basis of Anxiety scores, anger classification using Angry Hostility scores, and sad classifications on the basis of Depression scores (P thresh = .0479). Scatterplots and predicted outcomes for these regression analyses are displayed in S4 Fig. To assess concordance in the experience sampling study, classifier scores were averaged for trials congruent and incongruent with self-report for each subject. For instance, all trials in which a participant self-reported "fear," the classifier outputs from the neural model predicting fear were considered congruent, whereas the remaining six models were averaged as incongruent. Because the frequency of self-report varied across emotions (e.g., endorsement of fear and sadness were very infrequent), scores were averaged across all trials to reduce noise. In a supplemental analysis, scores were extracted separately for all trials and classified by identifying the model with the highest score. Accuracy was assessed on data from all subjects, using self-reports of emotion as ground truth. Because the frequency of self-reported emotions was non-uniform, chance agreement between self-report and neural models was calculated based on the product of marginal frequencies, under the assumption of independent observer classifications [59]. Inference on the observed classification accuracy was tested against this value using the binomial distribution B(480, 0.2147). Due to infrequent self-reports of surprise, fear, and anger, accuracy on individual models was not computed. Scores were initially assessed by averaging the 10 s preceding each rating. Subsequent analyses increasing the window length up to 20 s did not alter results. Because the scores for congruent (p = 0.0186, Lilliefors test against normal distribution) and incongruent (p = 0.0453) trials exhibited non-normal distributions, Wilcoxon signed rank tests were used to test each sample against zero mean rank. The correspondence between the frequencies of classification labels from self-report and neural decoding was assessed by computing the Pearson correlation for each subject. The correlation coefficients were Fisher transformed and tested against zero using a one-sample t test. 
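The individual-difference models described above can be sketched as follows, assuming each subject's classification counts and questionnaire scores are already tabulated; the column names, the placeholder data, and the statsmodels-based approach are illustrative rather than the authors' actual pipeline.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical per-subject data: number of time points classified as "sad"
# out of the total number of resting-state volumes, plus a CESD score
rng = np.random.default_rng(2)
n_subjects, n_volumes = 499, 256
df = pd.DataFrame({
    "cesd": rng.integers(0, 40, n_subjects),
    "sad_count": rng.integers(0, 60, n_subjects),
})

# Binomial GLM with a logit link: the response is (successes, failures),
# i.e. the proportion of "sad" volumes, predicted from the self-report measure
endog = np.column_stack([df["sad_count"], n_volumes - df["sad_count"]])
exog = sm.add_constant(df[["cesd"]])
fit = sm.GLM(endog, exog, family=sm.families.Binomial()).fit()
print(fit.summary())

# Exponentiating the coefficient gives an odds ratio per one-unit increase
print("odds ratio per CESD point:", np.exp(fit.params["cesd"]))
```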
To ensure that population differences (i.e., inclusion of individuals with psychopathology) did not contribute to differences in the prevalence of emotions in the resting-state and experience sampling studies, we re-calculated the frequency of classifications using repeated random subsampling of healthy participants in the resting-state sample (1,000 iterations, sampling 21 participants without replacement). The average correlation between the healthy subsamples and the full sample was very high (r_avg = .981, s.d. = .013), making it unlikely that clinical status accounts for differences in the frequency of classifications across studies.

Supporting Information

S1 Fig. Importance maps for resting-state experiment. Parametric maps of t-statistics (one-sample t-test against 0) showing voxels whose activation (either positive or negative) contributed towards classification of (A) content, (B) amusement, (C) surprise, (D) fear, (E) anger, (F) sad, and (G) neutral states. Importance maps were created for each subject by taking the voxel-wise product of classification weights and the mean activity of time points assigned the corresponding label. Voxels are thresholded at an FDR-corrected q = .05. The raw fMRI resting-state data can be obtained from https://www.haririlab.com/projects.

S2 Fig. ... Fig 2B for ease of comparison. (B) Distribution of χ² statistics across 100 iterations (Friedman test against uniform distribution), compared to that of the unpermuted data (solid red line). The data underlying this figure can be found in S1 Data. The raw fMRI resting-state data can be obtained from https://www.haririlab.com/projects. (TIF)

S3 Fig. ℓ2-norms of models and data. (A) ℓ2-norms computed for each of the neural biomarkers of emotion. The ℓ2-norm is calculated by taking the square root of the sum of squared deviations across all voxels (i.e., Euclidean distance). These norms do not vary strongly across emotion models, indicating that the outputs of classifiers are typically on the same scale. (B) Mean (s.d.) of ℓ2-norms computed on the resting-state data (n = 499 subjects). Each time point reflects a single data acquisition lasting two seconds. The solid vertical line demarcates scans from the first and second run. The data underlying this figure can be found in S1 Data. The raw fMRI resting-state data can be obtained from https://www.haririlab.com/projects.

S1 Table. Friedman's ANOVAs for changes in classification scores over time.
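Returning to the subsampling control described at the start of this section, a minimal sketch of the procedure is given below. It assumes NumPy, a hypothetical array freqs_by_subject of per-subject classification frequencies (n_subjects × 7), and an index array healthy_idx marking participants without psychopathology; correlating the 7-category frequency profile of each subsample with that of the full sample is one reasonable reading of the analysis, not necessarily the authors' exact implementation.

import numpy as np

def subsample_correlations(freqs_by_subject, healthy_idx, n_iter=1000, k=21, seed=0):
    # Repeatedly subsample k healthy participants (without replacement) and
    # correlate the subsample's mean frequency profile with the full sample's.
    rng = np.random.default_rng(seed)
    full_profile = freqs_by_subject.mean(axis=0)
    rs = []
    for _ in range(n_iter):
        pick = rng.choice(healthy_idx, size=k, replace=False)
        sub_profile = freqs_by_subject[pick].mean(axis=0)
        rs.append(np.corrcoef(full_profile, sub_profile)[0, 1])
    return float(np.mean(rs)), float(np.std(rs))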
2016-09-24T08:40:55.662Z
2016-09-01T00:00:00.000
{ "year": 2016, "sha1": "01fec7927b750170bbd400b4bf5024dae7caeffb", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosbiology/article/file?id=10.1371/journal.pbio.2000106&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "01fec7927b750170bbd400b4bf5024dae7caeffb", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
247816749
pes2o/s2orc
v3-fos-license
LayeredCNN: Segmenting Layers with Autoregressive Models We address a subclass of segmentation problems where the labels of the image are structured in layers. We propose applying autoregressive CNNs which, when given an image and a partial segmentation of layers, complete the segmentation. Initializing the model with a user-provided partial segmentation allows for choosing which layers the model should segment. Alternatively, the model can produce an automatic initialization, albeit with some performance loss. The model is trained exclusively on synthetic data from our data generation algorithm. It yields impressive performance on the synthetic data and generalizes to real data it has never seen. Our implementation is available at https://github.com/JakobLC/LayeredCNN. Introduction When analysing biological tissues or manufactured components we often meet structures that are arranged in layers. Two examples are shown in Figure 1: a µCT slice of a bone growth plate, and an optical coherence tomography image of the retina. Many segmentation tasks can therefore be formulated as finding layers in images. This motivates us to formulate a model that can segment layers. Consider any of the examples in Figure 1. Given the image and the corresponding partial label (the term label here refers to the label image), a non-expert should be able to complete the label by utilizing layer appearance and the template given by the partial label. With our model, which we call LayeredCNN, we aim to automate this task. The partial label always consists of the leftmost columns of the label, and can contain just a single column of pixels. Extracting the full label based on the partial label is quite challenging. Instead, consider the much easier task of labelling just the single column of pixels to the right of the partial label. A model with this ability can iteratively label subsequent columns, and we can continue until all columns of the image have been labeled (a minimal sketch of this inference loop is given at the end of this introduction). We will use an autoregressive convolutional neural network (CNN) that conditions on label information to the left to predict the next column of labels. The network will however be able to use image information from the whole image. In our definition, layered images have labels that can be represented by ordered, non-intersecting curves placed at the boundary between two neighbouring label class regions. We call those curves layer curves. For each x-value, every layer curve has a uniquely associated y-value. For example, the labels of the bone image in Figure 1 can be represented by four layer curves. Our definition of layered images ensures that all label classes present in the image will appear in the partial label. In a standard segmentation network, the k-th channel of the label prediction will be predetermined as a specific class, e.g. the first channel is road, the second channel is pedestrian, etc. Our network operates differently. The partial label defines the classes and not the network architecture. In a sense, our network is much more adaptive than similar networks [9] since we are also learning the label class from the input. We refer to this formulation as a class-agnostic model. A big benefit is that we are capable of segmenting layers in a wide variety of data using the same network. The model is trained exclusively on synthetic data from our data generation algorithm, yet it learns segmentation that generalizes to real data that it has never been trained on. 
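The column-by-column labelling scheme just described can be summarized in a short inference loop. The sketch below is purely illustrative and assumes PyTorch; the model interface, the sharpening helper sharpen_column, and the tensor layout (batch × classes × height × width for the one-hot label) are hypothetical stand-ins rather than the actual LayeredCNN components.

import torch
import torch.nn.functional as F

@torch.no_grad()
def segment_layers(model, image, partial_label):
    # image:         (1, 1, H, W) grayscale input
    # partial_label: (1, C, H, W0) one-hot label for the W0 leftmost columns
    _, _, H, W = image.shape
    label = partial_label.clone()
    while label.shape[-1] < W:
        w = label.shape[-1]
        # The network sees the whole image but, through masked convolutions,
        # only label information to the left of the column being predicted.
        padded = F.pad(label, (0, W - w))                  # zero-fill unlabeled columns
        logits = model(image, padded)                      # (1, C, H, W)
        next_col = logits[..., w].softmax(dim=1)           # probabilities for column w
        next_col = sharpen_column(next_col)                # hypothetical sharpening step
        label = torch.cat([label, next_col.unsqueeze(-1)], dim=-1)
    return label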
This generalization makes our network useful as a tool to segment layered images without needing to train a completely new network or to manually segment large quantities of data. Related Works Our model is inspired by the PixelCNN [14] framework and its extensions [15,13]. PixelCNNs were suggested for data generation or image inpainting. We want to use the framework to expand labels instead, as in the work by Leopold et al. [9]. Unlike previous works on autoregressive labelling, we use a class-agnostic formulation and apply it to layered images. Some notable extensions to the PixelCNN framework are the Gated PixelCNN [15] and PixelCNN++ [13]. The PixelCNN++ paper introduced a multi-scale approach with up- and downscaling of neurons, similar to that of U-Net [12]. Graph-based methods have also been used for detecting layers in images. One example is the use of dynamic programming [1]. Here, a layer is found as the path that minimizes the accumulated cost along that path through the image. This has been extended to multiple layers by formulating a so-called optimal net surface problem [16] that can be efficiently solved using s-t graph cut [10]. This has further been extended to multiple exclusive objects [7]. However, all these methods depend on handcrafted energy functions to separate the layers. Our approach differs in that it avoids explicitly formulating an energy function. Model Architecture The model always segments from left to right. We can still segment images in other directions if the image is appropriately flipped/rotated. Our model can extend a partial label by one column of pixels per forward pass. It is able to do this because of one key aspect: a column of labels is predicted using only label information to the left, which we denote left label dependence. This dependence can be satisfied using masked convolutions. These work as normal convolutions except that some weights are ignored. Masked convolutions ignore kernel information to the right of the center column. If the center column of a masked convolution is also ignored, it is denoted a mask A convolution, and otherwise a mask B convolution [15] (a minimal sketch of such a masked convolution is given below). We show how masked convolutions are used to satisfy the left label dependence in Figure 2 for a simple network. Note that the marked output neuron only depends on label information to the left. We implemented up- and downscaling of neurons by adapting the method from PixelCNN++ [13]. Figure 3 demonstrates how a naive up- and downscaling would result in breaking the left label dependence. Instead, each downscaling of the label needs to be preceded by shifting the neurons to the right. An elementary unit of the LayeredCNN architecture is a convolutional block implemented to handle layered images, illustrated in Figure 4. The convolutional block consists of two parallel feature stacks, an image feature stack and a label feature stack, each equipped with residual connections [4]. Information is passed from the image feature stack to the label feature stack, as the network needs information from both to predict the labels correctly. Batch normalization [6] is necessary to keep training stable. The image feature stack uses normal (unmasked) 2D convolutions, allowing the network to see all image information. The label feature stack uses masked 3D convolutions. The depth dimension (third dimension) represents the label classes given to the stack as a one-hot encoding. 
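A column-masked convolution of the kind described above can be written compactly in PyTorch. The sketch below is a minimal 2D illustration (the label stack in LayeredCNN uses masked 3D convolutions), and the exact masking used in the paper may differ in detail.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedConv2d(nn.Conv2d):
    # Conv2d whose kernel ignores columns to the right of the center column.
    # mask_type "A" also ignores the center column itself; "B" keeps it.
    def __init__(self, mask_type, *args, **kwargs):
        super().__init__(*args, **kwargs)
        assert mask_type in ("A", "B")
        _, _, kH, kW = self.weight.shape
        mask = torch.ones(kH, kW)
        center = kW // 2
        mask[:, center + 1:] = 0          # drop everything right of the center column
        if mask_type == "A":
            mask[:, center] = 0           # mask A also drops the center column
        self.register_buffer("mask", mask[None, None])

    def forward(self, x):
        return F.conv2d(x, self.weight * self.mask, self.bias,
                        self.stride, self.padding, self.dilation, self.groups)

# Example: a mask-A 3x3 convolution only sees label columns strictly to the left.
conv_a = MaskedConv2d("A", in_channels=1, out_channels=32, kernel_size=3, padding=1)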
Since the network is fully convolutional in all three dimensions, we can use a label of arbitrary depth, such that the same network can be used for a different number of classes. The bottom-right convolution in Figure 4 propagates information throughout the depth dimension so that the network can consider the different layers in relation to each other. Figure 5 shows the full network, which consists of convolutional block stacks, with three blocks in each block stack. Our network uses three down- and upsamplings. All the convolutional blocks use 32 feature channels. Strided convolutions and transposed strided convolutions are used for downscaling and upscaling, respectively. The network ends with two convolutional layers to process the information from all the skip connections. The network starts with a single convolutional block which uses mask A, and this block includes no residual connections. This is in order to satisfy the left label dependence, as required for the first layer in Figure 3. All other convolutional blocks use mask B. The one-hot encoded label given to the network is always ordered from top (first depth channel) to bottom (last depth channel). It is important that the order is consistent to make training easier. Note that the label is both an input and an output of the model, and we minimize the negative log likelihood between the predicted and target label. The left label dependence makes this task non-trivial, since the network has to extend the label by one column of pixels. It does this simultaneously for all pixels since we are using convolutions. This means that at training time all the iterations can be trained with a single forward pass, which speeds up training significantly. During inference, as the network sequentially labels columns of pixels, they have a tendency to become increasingly blurry due to the uncertainty of the network. To combat this we introduce a post-processing step after each new column of labels has been produced by our network (see Figure 6). The sharpening is done by converting all probabilities in the column to one-hot labels, except for one pixel at each border between two different labels. These are converted to a mixture of the two labels to allow for sub-pixel accuracy. Training Data Our models are trained exclusively on synthetic data. We have constructed a data generation algorithm based on Brodatz textures [2] that aims at imitating the structure and visual appearance of real layered data (see Figure 7). We generate 30000 training images with one to five layer curves (6000 of each). Another 2000 images were generated and split into a validation and a test set. Additionally, we want to find out how well our method works on real data, and we have therefore manually segmented 40 layered images [11] coming from 4 vastly different domains. We use 20 of these as a validation set and 20 as a test set. All models are trained with images of shape (H × W) = (64 × 128) pixels. We apply the following data augmentations during training: Layer dropping, where layer curves are dropped from the label with a probability of 20%. This is to make sure the network does not assume that everything that looks like a layer is supposed to be segmented (e.g. we might want to segment only some of the layers in a layered image). Not all layer curves can be removed, however; one curve (chosen at random) is always kept. Horizontal and vertical flipping are used with a probability of 50% each. 
Border warping is a deformation that we define by l̃ = l + l_warp, with l_warp = l_noise ∗ g, where l ∈ R^W is a layer curve and l̃ ∈ R^W is the corresponding warped layer curve, warped by l_warp ∈ R^W. The numbers in the layer curve vectors represent the height (y) position of the layer at each image width index. The operator ∗ is a cross-correlation filtering and g is a Gaussian kernel with standard deviation σ_W. The noise vector l_noise ∈ R^W contains i.i.d. Gaussian noise which we filter with g to get l_warp. The variable σ_H represents the vertical warping height and σ_W represents the horizontal smoothness of the warping. We sample σ_W ∈ U(1, 3) and σ_H ∈ U(1, 1.5) (in pixels), where U(a, b) is the uniform distribution. We apply border warping to the input labels during training, but not to the target labels. This encourages the network to undo the border warping (a small sketch of this augmentation is given below). Training on network outputs is when we pass an input through the network without gradients a few times before doing the final forward pass with gradients. The target label in the final forward pass with gradients is, however, kept unchanged. The input label will be deformed by the network as if it were extending the label by one column per forward pass. We are therefore training the network to fix its own mistakes. The number of forward passes without gradients is sampled with equal probability from {0, 1, . . . , min(⌊e/3⌋, 5)}, where e is the number of completed epochs. Automatic Initialization We formulate a method of producing an automatic initialization (partial label). We can give a trained network a batch of 20 linearly spaced layer curve initializations ranging from the bottom to the top of an image. We initialize just the rightmost column of labels and then segment the image from right to left. The linearly spaced layers will find nearby ground truth layers and end up clustering together. The position of these clusters in the leftmost column of labels can be used as our initializations. We cluster layers with single-linkage clustering [3] with a linkage distance of 1.5 pixels on the positions in the leftmost column. We want to select the best clusters. We can measure how confident the network is in a layer by how blurry the one-hot label is before layer sharpening. The summed absolute difference between the unsharpened and sharpened one-hot label is the cost associated with a layer. The cost associated with a cluster is the minimum cost of the layers contained in it. The positions of the best m clusters are used, where m is the number of layers we want to segment. The actual segmentation can begin with the leftmost column initialized at those positions (see Figure 8). Segmentation Results We implemented LayeredCNN using PyTorch, and the model was trained with the Adam [8] optimizer for 30 epochs (0.9 million images). Training the model on a single Nvidia V100 GPU with 32 GB RAM took approximately 15 hours. Segmenting a (64 × 128) image with 2 layer curves, which requires 127 sequential forward passes through the network, takes approximately 5 seconds. We use two measures of performance: the mean absolute distance between target layer curves and predicted layer curves (denoted L1) and the Adjusted Rand Index [5] (denoted ARI). ARI is useful since it can also compare different numbers of labels, which is a possibility when using automatic initialization. Larger ARI scores are better: ARI = 1 represents a flawless segmentation, while ARI = 0 is as good as random. 
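Returning briefly to the data augmentations above, the border-warping step can be sketched in a few lines. The sketch assumes NumPy and SciPy; in particular, drawing l_noise with standard deviation σ_H is one reading of "vertical warping height", and the Gaussian truncation is an implementation detail not specified in the text.

import numpy as np
from scipy.ndimage import gaussian_filter1d

def border_warp(layer_curve, rng=None):
    # layer_curve: (W,) array of layer heights (y positions), one per image column.
    rng = np.random.default_rng() if rng is None else rng
    sigma_w = rng.uniform(1.0, 3.0)     # horizontal smoothness of the warp (pixels)
    sigma_h = rng.uniform(1.0, 1.5)     # vertical warping height (pixels)
    noise = rng.normal(0.0, sigma_h, size=layer_curve.shape)   # i.i.d. Gaussian noise
    warp = gaussian_filter1d(noise, sigma=sigma_w)             # l_warp = l_noise * g
    return layer_curve + warp                                  # warped curve l~

During training this would be applied to the layer curves of the input partial label only, leaving the target label untouched, so that the network learns to undo the warp.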
Results on synthetic data are very accurate (see Figure 8 and Table 1). Some qualitative results on real data are displayed in Figure 10, where the label was initialized with only the leftmost column. Most segmentations are accurate despite our model being trained only on synthetic data. The model is able to handle images where the segmentation is both intensity- and texture-based. The prediction does deviate from the target in some cases. On Test Bone #1 (top image) the blue layer follows the porous bone protrusions; however, it is unclear from the initialization whether they should be segmented or not. In the third image, Test Bone #8, the model has detected the blue curve correctly but loses track of the orange curve. Perhaps this is because the image is quite dissimilar from the synthetic training data. The target orange curve indicates a subtle boundary between a coarse texture and a fine texture. The synthetic training data rarely had such wide fades. The network mostly fails on images with few similarities to the synthetic training data. Ablation study To investigate the effect of the initialization size, we segmented images with an initialization ranging from 1 to 30 label columns (see Figure 9). Performance improves slightly with more columns. To test the effect of our data augmentations, we train a network without border warping, without training on network outputs, and without both. Mean performance measures are shown in Table 1. The table shows that the proposed augmentations improve performance. Either training on network outputs or border warping is necessary, although both might not be needed. The model is more likely to lose track of layers without any data augmentations. The loss in ARI when using automatic initializations is very low on the synthetic datasets (2%-6%), but much larger on real data (23%-26%). The base network has 1.16 million trainable parameters, and we found that increasing the model size improved synthetic data performance slightly while having no impact on real data performance. This probably indicates that the distribution overlap between real and synthetic images is limited. Discussion and Conclusion We present the LayeredCNN model architecture, which achieves relatively good results in segmenting real images considering that it has only been trained on synthetic data. The model is extremely good at finding layers in the synthetic data, and it could even consistently segment them with no human supervision (automatic initialization). Human supervision would, however, be required to get consistent results on real data. The best improvement would likely come from training a model on a large dataset of real layered images, as most of the poor performance was seen in data with few similarities to the synthetic data. A good use of our model is to reduce a long and arduous segmentation task to just a couple of clicks. The main strength of our model is the class-agnostic formulation, which makes it applicable to a large variety of different data that it has not been trained on.
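As a closing note on the evaluation protocol used above, the two reported performance measures can be computed with standard tooling. The sketch below assumes NumPy and scikit-learn and hypothetical inputs (arrays of predicted and target layer curves, and per-pixel label maps); it is not tied to the authors' evaluation code.

import numpy as np
from sklearn.metrics import adjusted_rand_score

def l1_layer_distance(pred_curves, target_curves):
    # pred_curves, target_curves: (n_layers, W) arrays of layer heights.
    return float(np.mean(np.abs(pred_curves - target_curves)))

def ari(pred_labels, target_labels):
    # pred_labels, target_labels: (H, W) integer label maps; ARI also handles
    # predictions that use a different number of labels than the target.
    return adjusted_rand_score(target_labels.ravel(), pred_labels.ravel())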
2022-03-31T16:51:15.296Z
2022-03-28T00:00:00.000
{ "year": 2022, "sha1": "12014ff50b4acc527a713cab1cdb8b7128b5a1c8", "oa_license": "CCBY", "oa_url": "https://septentrio.uit.no/index.php/nldl/article/download/6254/6500", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "1876982d4e7ea1c0564188f5836ae6a2bc919d87", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [] }
119310417
pes2o/s2orc
v3-fos-license
Higher dimensional abelian Chern-Simons theories and their link invariants The role played by Deligne-Beilinson cohomology in establishing the relation between Chern-Simons theory and link invariants in dimensions higher than three is investigated. Deligne-Beilinson cohomology classes provide a natural abelian Chern-Simons action, non trivial only in dimensions $4l+3$, whose parameter $k$ is quantized. The generalized Wilson $(2l+1)$-loops are observables of the theory and their charges are quantized. The Chern-Simons action is then used to compute invariants for links of $(2l+1)$-loops, first on closed $(4l+3)$-manifolds through a novel geometric computation, then on $\mathbb{R}^{4l+3}$ through an unconventional field theoretic computation. Introduction The role that Deligne-Beilinson cohomology [1,2,3,4,5,6,7] plays in establishing the relation between Chern-Simons Quantum Field Theory and link invariants [8,9,10,11,12,13,14,15,16], in the abelian case, has been stressed out in a series of papers [17,18]. We will here complete these works by showing how higher dimensional Deligne-Beilinson (DB) cohomology classes, and their DB-products, provide a natural generalisation of the Chern-Simons action, and how they can be used to compute invariants for higher dimensional links [13,19]. We will produce a novel, geometric computation for closed (4l + 3)-manifolds. We will then compare it to a field theoretic computation made on R 4l+3 . In section 2, we recall some basic facts concerning Deligne-Beilinson cohomology and how it relates to the functional measure based on the abelian Chern-Simons action. In section 3, we present a natural candidate for the generalized CS action. In section 4, we deal with generalized abelian loops and their expectation values for closed (4l + 3)manifolds within the DB approach. We further illustrate it with two specific examples. Section 5 is devoted to a quite unusual field theoretic computation of these expectation values in the R 4l+3 case, and the extension of this type of computation to S 4l+3 is sketched. In Appendix, a geometrical interpretation of the higher dimensional linking number relating it to the notions of solid angle and zodiacus is presented following the original ideas of Gauss [20]. Here are the main results elaborated in this article: 1. The abelian Chern-Simons generalised action is non trivial only in dimension 4l +3, and its level parameter k has to be quantized; 2. The generalised Wilson (2l+1)-loops are observables of the theory and their charges are quantized. 3. In the geometric DB approach provided by functional integration over the space 2 Basic facts about Deligne-Beilinson cohomology Without recalling the whole theory let us remind the basic facts about DB-cohomology useful in this paper. Definition via exact sequences If M is a closed (i.e. compact and without boundary) n-dimensional smooth manifold, the p-th DB cohomology group of M, denoted H p D (M, Z) (p ≤ dimM = n), is canonically embedded into the following equivalent exact sequences [5,21]: where Ω p (M) is the space of smooth p-forms on M, Ω p Z (M) the space of smooth closed pforms with integral periods on M,Ȟ p+1 (M, Z) is the (p+1)-th integralČech cohomology group of M, andȞ 1 (M, R Z) is the p-th R Z-valuedČech cohomology group of M. These exact sequences also occur in the context of Cheeger-Simons differential characters [22,23] or Harvey-Lawson sparks [21]. Thanks to exact sequences (2.1) one can interpret H p D (M, Z) as an affine bundle oveř H p+1 (M, Z) (resp. 
Ω p+1 Z (M)) with structure group Ω p (M) Ω p Z (M) (resp.Ȟ p (M, R Z)). Note that in the former case Ω p Z (M) plays the role of a gauge group, which is much bigger (in general) than the usual group of exact forms. An element of H p D (M, Z) will be generically written ω [p] . Let us pick up a normalized volume form on M, i.e. a n-form µ such that ∫ M µ = 1. For dimensional reasons any n-form on M is closed, hence for any n-form ω on M there exists a (n − 1)-form ν such that ω = τ µ + dν, with τ = ∫ M ω ∈ R. Furthermore, if ω has integral periods, then τ ∈ Z, since dν is a closed n-form with zero periods ( ∫ M dν = 0 since M has no boundary). This proves that any element of Ω n (M) Ω n Z (M) can be written as θµ, with θ ∈ R Z. Finally, integrating θµ over M makes the construction independent of µ and proves that Ω n (M) Ω n Z (M) ≃ R Z (equivalently one can pick up another normalized volume form and see that it will give the same θ, and finally pick any volume form and prove the same). Still for dimensional reasons,Ȟ n+1 (M, Z) = 0, so we conclude that H n D (M, Z) ≃ R Z. For later convenience, let us consider two special cases. First, when M = S 4l+3 and p = 2l + 1, we haveȞ 2l+1 (M, R Z) = 0 =Ȟ 2l+2 (M, R Z), then sequence (2.1) reduces to: Hence H 2l+1 D (M, Z) is isomorphic to the quotient space Ω 2l+1 (M) dΩ 2l (M), the gauge group reducing to the trivial group dΩ 2l (M). Although this is a quite trivial case, it is very close to the one of the field theoretic approach. The second example is provided by M = S 2l+1 × S 2l+2 , still with p = 2l + 1. Sincě H 2l+1 (M, R Z) = Z =Ȟ 2l+2 (M, R Z), sequence (2.1) reads: The DB Z-module H 2l+1 D (M, Z) is then a non trivial affine bundle over Z, the gauge group Ω 2l+1 Z (M) being also now non trivial. Pontrjagin dual of DB-spaces Due to the form of the exact sequences (2.1), one can consider dual sequences not with respect to R but to R Z. This gives rise to the Pontrjagin dual space of H p D (M, Z): The notion of integration of DB-classes over cycles is needed to confirm this. Integration of DB-classes over integral cycles There is a canonical pairing between DB-class and cycles on M provided by integration of the later over the former: Incidentally, integration also shows that Z p (M) is canonically embedded into H p D (M, Z) * -which can be expressed [21] by saying that p-cycles live in the topological boundary of H p D (M, Z) * . Hence: where ⊂ has to be understood as the above canonical embeddings. Property 1 As in the three dimensional case, abelian holonomies defined by: are observables of the generalized abelian Chern-Simons theories. DB-product and cycle map There is a natural bilinear product, referred here as the DB-product: which is graded according to: 1 . (2.10) From our previous remarks, one straightforwardly verifies: The "DB-square" operation satisfies the graded commutation property: which implies in particular: This is similar to the usual theory of de Rham currents. We end this subsection with the following important result shown in [7]: to any pcycle z on M one can associate a canonical distributional DB-class η z ∈ H p D (M, Z) * such that: 3 Generalized Chern-Simons action, Chern-Simons functional measure, observables and framing Generalized Chern-Simons action It is standard from a physicist point of view to present the abelian Chern-Simons (CS) lagrangian on R 3 as : 15) or, using the CS action: where A is a U(1)-connection on some principal U(1)-bundle P over R 3 . 
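For orientation, the elided equations (3.15)-(3.16) presumably take the familiar schematic form below; this is a hedged reconstruction from the surrounding text, with the overall normalization suppressed since the level k only enters at eqn. (3.18):

\[
L_{\mathrm{CS}}(A) \,\propto\, A \wedge dA ,
\qquad
S_{\mathrm{CS}}[A] \,\propto\, \int_{\mathbb{R}^{3}} A \wedge dA .
\]

In the DB formulation recalled next, the connection A is understood as a degree-one DB class and the combination A ∧ dA is replaced by the DB product A ∗_D A.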
A natural generalization for R 4l+3 would be to replace A in eqn. 3.15 by a (2l + 1)-form. This is what will be done in section 5 when dealing with the field theoretic formulation. However U(1)-connections on M are actually not 1-forms for compactclosed 3-manifolds M. Hence, as explained in [17,18], we rather have to use DB-classes to write the lagrangian (3.15), and hence the action (3.16). Let us recall that H 1 D (M, Z) canonically identifies with the set of classes of U(1)-isomorphic principal U(1)-bundles with connection over M. Hence we must replace eqn. (3.16) by where A has now to be understood as a DB class. For a level k CS theory we set: We can extend the definition of the action (3.18) to any closed smooth n-dimensional manifold M as: This will be our definition of the n-dimensional Chern-Simons theory of level k on M. Since integrals take values in R Z this quantity is well defined provided 20) which is the announced quantization of the level parameter. We now consider the "quantum weight": When p = 2l the graded commutation property (2.12) leads to: thereby providing a trivial functional measure. Consequently, the non-trivial cases only occur when p = 2l +1 which implies that n = 2p+1 = 4l+3. In particular, if M is a sphere, the only non trivial abelian Chern-Simons theories will occur for S 3 , S 7 , S 11 ... . (3.23) Note that this is namely the set of spheres for which Hopf invariants are non-trivial, hence linking numbers are non trivial as well [24]. Furthermore, this expression for the CS action holds true for closed manifolds with torsion. In summary: The non trivial generalized abelian Chern-Simon lagrangian of level k is defined by the DB square product of (2l+1) dimensional DB classes on a (4l+3)-dimensional closed manifold, with k an integer. For a (4l + 3)-dimensional manifold and its (2l + 1)-loops, the inclusions stressed out after (2.5) and in (2.7) give: We will assume that the space of quantum fields of a generalized abelian Chern-Simons theory in (4l + 3) dimensions is a subset of H 2l+1 Chern-Simons functional measure and zero mode property The generalized Chern-Simons "gaussian" functional measure for a (4l+3)-manifold takes the form: Since we wish to use this measure to compute observables and identify them with (2l + 1)-links invariants, let us have a closer look at it. First, dµ k (ω) is supposed to be a measure on H 2l+1 products of distributional DB classes appearing in the gaussian part of the measuresomething common in Quantum Field Theory. In fact, we will only need the fundamental Cameron-Martin like property for the measure (3.25), that is to say: Let us consider a (2l + 2)-cycle Σ, whose integration (2l + 1)-current in M is denoted β Σ . While this current canonically represents the zero class in H 2l+1 D (M, Z), in general the current β Σ 2k does not. From property (3.26), and identically denoting currents and the DB classes which they represent, we deduce: In contrast with the identity exp 2iπk trivial since dβ Σ = 0, the following one: deserves some justification. The factor 4iπk = 2k ⋅ (2iπ) in eqn. (3.29) is of pivotal importance. Indeed, ω * D β Σ 2k is not the zero class, whereas 2k(ω * D β Σ 2k) = ω * D β Σ is, as β Σ is trivial. Note that β Σ 2k is not an integer current, and that a DB class ω is not the restriction of a current in general (see for instance [7]). Of course, one should be careful when dealing with the product of currents β Σ ∧ dβ Σ . However one can always smooth β Σ around Σ (i.e. 
use a Poincaré representative with support as close to Σ as necessary) in order to consistently regularize β Σ ∧ dβ Σ to the zero DB class. More generally, for any integer m, which provides the generalization of Property 4 of [17]: The functional measure dµ k (ω) is invariant under translations by m β Σ 2k , where β Σ is the integration current of a (2l + 2)-cycle Σ and m an integer. When Σ is homologically trivial (Σ = bV) then β Σ = dχ V , and therefore β Σ 2k = d( χ V 2k ) . In this case the DB-class of β Σ 2k is also zero. This happens for any Σ when the (2l + 2)th homology group of M is trivial. Conversely, as we shall see in the next section, when M has a non trivial (2l + 2)-th homology group, Property 3 will provide a treatment of the so-called "zero modes", thus leading to the important result of this paper concerning the vanishing of links invariants. Observables and Framing Following Property 1, let us consider an observable of our level k generalized CS theory: Let us remind that a (2l + 1)-loop is meant to be a continuous mapping γ ∶ Σ 2l+1 → M, where Σ 2l+1 is a closed (2l + 1)-dimensional manifold. It is always possible to identify such a loop with a (2l +1)-cycle in M. Furthermore, if the mapping is an embedding (i.e. the image γ(Σ 2l+1 ) is isomorphic to Σ 2l+1 ) γ is said to be a fundamental loop. Then, seen as a cycle, any (2l + 1)-loop in M can be written as: γ = qγ 0 , for some fundamental loop γ 0 and q ∈ Z. Hence, the abelian Wilson line of the gauge field ω of degree (2l + 1) along a (2l + 1)-loop γ = qγ 0 in M reads: Conversely, the righthand side of this expression has a meaning if and only if q is an integer. This leads to: Property 4 In the generalized CS theories, loops must have integer charges. The charge (or colour) of a loop γ can be geometrically interpreted as the number of times the fundamental loop associated with γ has been covered. When γ is not homologically trivial, its charge canonically identifies with its homology class. The charge can also be seen has defining a representation for the U(1) holonomy of a fundamental loop. This is also true for the level k parameter which can be seen as a charge of M, or as a representation of the U(1) 3-holonomy given by the Chern-Simons action. If η γ and η 0 are the DB classes (∈ H 2l+1 D (M, Z) * ) associated with γ and γ 0 respectively, then η γ = qη 0 . Hence we can alternatively write: The expectation values of the Wilson lines are given by: For a generic homological combination γ = ∑ n i=1 q i γ 0 i with q i ∈ Z and γ 0 i fundamental, we get: (3.35) or in term of the DB representatives η 0 i of these γ 0 i : Let us first exhibit the nilpotency property of the expectation values For the loop 2kγ 0 , where γ 0 is fundamental with DB representative η 0 : Performing the shift thanks to property (3.26), we obtain: Such an expression is ill-defined since η 0 is distributional. If we decide to regularize the quantities η 0 * D η 0 into the zero DB class, which we refer to as the zero-regularization, then: This gives: The generalized CS theories satisfy the 2k-nilpotency property. Zero-regularization calls for a comparison with framing. If γ 0 is a boundary (i.e. is homologically trivial), then where χ 0 is the current of a chain whose γ 0 is the boundary, while dχ 0 is the de Rham current of γ 0 . The symbol = The difference between two choices of framing is an integer, which coincides with taking η 0 * D η 0 = 0. 
However, when γ 0 is not a boundary the framing procedure is not a welldefined regularization as it does not provide a definite homotopically invariant integer for the self-linking number ∫ M χ 0 ∧ dχ 0 . Notwithstanding property (3.41) still holds, the zero-regularization is thus coarser than framing yet more "general". Let us point out that 2k-nilpotency 1 is totally equivalent to zero-regularization. Property 6 In generalized CS theories, the only Wilson loops having non vanishing expectation values are those of the homologically trivial links (modulo 2k). The expectation values of these Wilson loops are given by the self-linking of the corresponding link and the only required regularization is the one provided by framing (i.e. self-linking of the fundamental loops forming the link). We will first present the general ideas used to compute expectation values (3.37). Then we will consider the particular case M = S 4l+3 , the closest to the field theoretical computation of section 5. We will next treat the less trivial case M = S 2l+1 × S 2l+2 . In these two examples, we will present an alternative and more computational way to get Property 6. Since M is assumed without torsion, all its homology and cohomology groups are free and of finite type, i.e of the form Z N , for some integer N. If (⃗ e) I=1,...,N denotes the canonical basis of Z N , then any ⃗ u ∈ Z N is written as * we can use, as an origin on this fiber, a (2l +1)-cycle or equivalently its DB representative. In particular, a fundamental loop γ 0 I can be associated with each basis vector ⃗ e I of Z N . Its DB representative η 0 I then plays the role of origin on the fiber over ⃗ e I . If ⃗ u = ∑ u I ⃗ e I , then η ⃗ u ≡ ∑ u I η 0 I will be a possible origin for the fiber over ⃗ u. Note that the de Rham current of γ 0 I would play the role of the "curvature" of η 0 I , as an element of Hom (Ω 2l+1 (M) Ω 2l+1 Once such an origin for each fiber of H 2l+1 D (M, Z) * has been chosen, any DB class ω can be decomposed as , and ⃗ u ω being the base point over which ω stands. In particular, the DB representative η of a cycle γ will decompose as For a link L, we can express the expectation value of the corresponding Wilson line according to our choice of basis (η 0 I ) I=1,...,N : and is a rewriting of the Wilson line of L with respect to the basis (η 0 I ) I=1,...,N , and with the decomposition η L = ⃗ v L ⋅ ⃗ η 0 + β for the DB representative of L. We recall that L is a link (a formal combination of charged fundamental loops) hence a cycle. Instead of evaluating the Wilson line (4.46), we rather use the zero mode property. Let (Σ I 0 ) I=1,...,N be a collection of (2l + 2)-cycles on M which generates H 2l+2 (M, Z) and are orthogonal to the fundamental loops γ 0 I : β J 0 being the currents of the Σ J 0 , and ⊺ ∩ denoting transversal intersection. Due to Poincaré and Hom dualities there are as many β J 0 as γ 0 I . Let us consider again: into which we perform the shift for a collection of integers m I . This gives: Using Property 3, we obtain: That is to say: Since this has to hold for any collection of integers (m I ) I=1,...,N , we conclude that, for a non vanishing mean value: where β L is the DB class of a current of a (2l + 2)-chain with boundary L. Now let us perform into eqn. 
(4.56) the shift: what leads to: Hence, we obtain: The integral in this expression is, modulo zero-regularization via framing, exactly the selflinking number of the link L [25,26,27], itself made of self-linking (defined via framing) and linking of the fundamental loops composing L. We stress out that while the link has to be homologically trivial, its components do not have to. This completes the proof of Property 6. Of course we could have directly used property (3.26) together with the shift (4.57) to obtain eqn. (4.59). However we have preferred to use the explicit definition (4.43) of the functional integral rather than the formal one. Let us have a closer look at a first example where zero modes are not required to be treated: the spheres. This will provide us with a general property concerning (4l + 3)manifolds whose (2l + 1)-th homology group vanishes. Abelian links invariants on S 4l+3 SinceȞ 2l+2 (S 4l+3 , Z) = 0 =Ȟ 2l+1 (S 4l+3 , Z), the first of the exact sequences (2.1) reduces to: = Ω 2l+1 S 4l+3 dΩ 2l S 4l+3 , and the dual sequence (2.5) to: These isomorphisms are somehow canonical if we consider that the choice of the zero class, 0, as origin of these spaces is canonical. More explicitly, for any ω ∈ H 2l+1 This corresponds to choose the zero cycle z ≡ 0 as origin, the DB representative of this cycle being 0. SinceȞ 2l+1 (S 4l+3 , Z) = 0, any (2l + 1)-cycle in S 4l+3 is trivial, i.e. a boundary. Hence, if L denotes a (2l + 1)-link which is the sum of charged fundamental (2l + 1)-loops γ 0 i on S 4l+3 : then there exists some (2l + 2)-chain, Σ L , such that L = bΣ L . Geometrically, Σ L can be seen as a (2l + 2)-surface in S 4l+3 . This surface is of course not unique, but two of them only differ by a closed (2l + 2)-surface. As explained in [7], the de Rham current of such a Σ L , β Σ , completely determines the DB representative, η L , of L, according to: The Wilson line of L is then written: and its expectation value reads: Seen as an element of Hom (Ω 2l+2 However, the corresponding DB class, 0 + (β Σ 2k), is not the representative of any fundamental loop in S 4l+3 . Next, we perform the change of variable: into eqn. (4.66). This turns the expectation value into: Making explicit the DB product within this expression, we obtain: what is exactly eqn. (4.59). Finally in terms of the charged fundamental loops, γ 0 i , building L, we have where L(γ 0 i , γ 0 j ) is the linking number of γ 0 i with γ 0 j , that is to say: with α 0 i the de Rham current for which 0+α 0 i is the DB representative of the fundamental loop γ 0 i . As for "diagonal" terms L(γ 0 i , γ 0 i ) we regularize them using the usual framing procedure (what we have called zero-regularization): As in the three dimensional case extensively detailed in [17], the abelian invariants thus obtained are nothing but those coming from linking and self-linking numbers, that is to say intersection theory in S 4l+3 . Let's note that this result is what we are supposed to recover via a quantum field theory approach. There, the gauge fixing procedure is supposed to provide a choice of representatives for DB classes, and the propagator thus obtained appears like an inverse of the de Rham differential d, deeply related to the Poincaré chain homotopy operator. 
The consistency of the procedure is ensured by the fact that if γ is a loop (a (2l + 1)-cycle), and if Σ is a (2l + 2)-chain such that bΣ = γ, which corresponds to dβ Σ = η γ in term of currents, then β Σ (as the current of an integral chain) is unique up to closed (2l + 1)-currents (of integral (2l + 2)-cycles). However, on S 4l+3 any (2l + 2)-cycle is trivial so β Σ is unique up to dχ, where χ is the 2l-current of an arbitrary (2l)-chain. This means dβ Σ = η γ has to be inverted on classes β Σ ∼ β Σ + dχ. This is exactly gauge invariance from the point of view of integral chains (and currents). This will be detailed in section 5. What we have done here for S 4l+3 can be straightforwardly applied to any (4l + 3)manifold M for whichȞ 2l+1 (M, Z) = 0 =Ȟ 2l+2 (M, Z), leading to exactly the same final result. Property 7 Over a (4l+3)-dimensional closed manifold, without torsion, whose (2l+1)th homology groups vanishes, the generalized abelian Wilson loop of a link L defines a link invariant made of the self-linkings, the linkings and the charges of the fundamental loops composing L. The second example will present a homologically non trivial case which is the equivalent of the three dimensional pedagogical case S 1 × S 2 widely discussed in [17]. Abelian links invariants on S 2l+1 × S 2l+2 Let us now consider the less trivial case M ≡ S 2l+1 × S 2l+2 for whichȞ 2l+2 (M, Z) = Z = H 2l+1 (M, Z), so that: and: If L is a link in M, then its DB representative, η L , satisfies Instead of using the elegant zero-mode property, as was done to establish Property 6, we shall present a somehow more computational approach. Although this will be a bit "heavier", we make this choice in order to show more explicitly the usefulness of zero modes as well as of zero-regularization. Since it provides the final answer, let us first consider the case where n L = 0 ( i.e. when L is homologically trivial). Then expression (4.77) takes the form: For the same reasons than in the previous example, β Σ 2k ∈ Hom (Ω 2l+2 Z (M) , R Z). So, we perform the shift: The expectation value of the Wilson line of L then simplifies into: that is to say: or equivalently: just as in the S 4l+3 case. Once more, this is totally similar to what happens in the three dimensional case S 1 ×S 2 detailed in [17]. This turns out to be the same expression as eqn. (4.70), and of course as eqn. (4.59): the link invariant is made of linking and self-linking numbers of the fundamental loops forming the link. However let us stress again that whereas the link L has to be homologically trivial, this is not the case of its components. Let us now assume that n L is not zero (nor an integral multiple of 2k, although this can be dealt with straightforwardly). If we expand all the expressions within the exponentials appearing in eqn. (4.77), and then apply the zero-regularization to η 0 * D η 0 , we obtain the expression: (4.83) Once more, we perform the shift (4.79), and get, after some simplifications: The last two terms are independent of m and α, and then give rise to: out of the integration and sum in eqn. (4.78). In the remaining factor, we can invert the sum over m with the integration over α, thus obtaining: Putting this back into eqn. (4.86), and performing some algebraic juggling, we obtain: Let us introduce a closed (2l + 2)-surface Σ 0 , with de Rham (2l + 1)-current ρ 0 , which satisfies: This surface is a generator ofȞ 2l+1 (M, Z) ≃Ȟ 2l+2 (M, Z) = Z and is formally a sphere S (2l+2) in M = S (2l+1) × S (2l+2) . 
The (trivial) DB class associated with ρ 0 ( also denoted ρ 0 ) give rises to the DB class ρ 0 2k, which is non trivial since: Actually, ρ 0 2k ∈ Hom (Ω 2l+2 Z (M) , R Z) and the DB class it determines is 0 + ρ 0 2k. Moreover, as seen when establishing the zero-mode property: and for each value of K, if we perform the shift: None of the expressions (4.94) and (4.95) is well-defined. However, using 2k-nilpotency, we can reduce each of these infinite sums to a sum over a period, thus obtaining: for the former one and for the latter one. The "regularized" quotient defining the expectation value will then be taken as: Hence, when n L ≠ 0 [2k], the expectation value of the corresponding Wilson line is zero, while when n L = 0 the expectation value is given by eqn. (4.81). Due to 2knilpotency, when n L = 2kN, with N ∈ Z * , then the corresponding link invariant is trivial. These results are a clear generalization of those investigated in [17] for the three dimensional case. Also, it is quite obvious how to deal with a more general case than the quite simple product S 2l+1 × S 2l+2 , as long as M is torsionless. The case of (4l + 3)-manifolds with torsion might be treated extending [18]. Naive abelian gauge field theory and (2l + 1)-links invariants This section provides a formulation of the abelian (4l + 3)-dimensional Chern Simons theory on R 4l+3 with Euclidean metric in terms of a lagrangian density involving a U(1) connection i.e. gauge field A, plus gauge fixing. This formulation, coined "naive gauge field theory" extends eqns. (3.15), (3.16) to the (4l + 3)-dimensional case, and is the one familiar to field theorists. The presentation is formulated in a somewhat hybrid way conveniently using notations which keep track of the geometric nature of the fields and operations, combined with algebraic manipulations familiar in field theory. We aim here at emphasizing the ambiguities or weaknesses arising in this framework, in order to stress where the above non perturbative formulation in terms of DB cohomology classes brings clarification. In particular, the normalization of both the level k and loop charges e are a priori unspecified in the naive field theory approach: the prescription that they have to be integers is ad hoc, whereas they are bound to be integers ab-initio in the DB approach. Furthermore, the naive approach leads to ill-defined self-linking integrals which require to be given meaning and integer values by some extrinsic regularization procedure, such as framing, whereas the DB approach was shown above provides a natural regularization independent normalization prescription for the latter. Last, this study on R 4l+3 also suggests which complications may arise when trying to extend the naive field theoretical framework to manifolds with non trivial cohomology. Formulation and computation on R 4l+3 The lagrangian density 2 L CS (A (2l+1) ) of the abelian (4l + 3)-dimensional Chern-Simons theory reads: The degeneracy coming from the gauge invariance A (2l+1) → A (2l+1) + d Λ (2l) of this lagrangian density shall be fixed, in order that the functional integral giving the generating functional, and, in particular, the propagator of the A (2l+1) field be defined. 
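The elided lagrangian density can be written schematically as follows; this is a hedged reconstruction (the level-dependent normalization, fixed elsewhere in the text, is suppressed), while the gauge transformation is the one quoted above:

\[
L_{\mathrm{CS}}\big(A^{(2l+1)}\big) \,\propto\, A^{(2l+1)} \wedge dA^{(2l+1)} ,
\qquad
A^{(2l+1)} \;\longrightarrow\; A^{(2l+1)} + d\Lambda^{(2l)} .
\]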
Covariant gauge fixing and corresponding propagator In the three dimensional case, a common procedure consists in imposing the "covariant gauge fixing" d * A (3) = 0 by adding the following Lagrange constraint: where * here denotes the Hodge dual operation with respect to the Euclidean metric on R 3 and the Lagrange multiplier B (0) is a scalar field i.e. a zero-form. Let from now on * denote the Hodge dual operation on flat Euclidean R 4l+3 , such that for any q-form The naive straightforward generalization of eqn. (5.100) by means of a single auxiliary 2l-form B (2l) according to is not effective as L naive GF still has the residual gauge invariance B (2l) → B (2l) + d Λ (2l−1) . An appropriate formulation requires a collection of 2l + 1 auxiliary forms of decreasing degrees (B (2l) , B (2l−1) , ⋯, B (0) ), according to: Regrouping all the fields into we can compactly write the full action given by L tot = L CS (A (2l+1) ) + L GF as a scalar product: where δ ≡ * d * is the co-differential associated with the Hodge dual. The Euler-Lagrange equations of motion of the ⃗ A field read: by means of Fourier transformation, taking advantage of translation invariance on Euclidean space R 4l+3 . It is especially convenient to use a Fourier transformation, defined by means of Berezin integration, which preserves the degrees of forms, as detailed in Appendix A. The Fourier transform of D δ (4l+3) (x − y) reads: The expression for P and Ξ are given in eqns. (6.139) of Appendix A. The Fourier transforms ⇀ N jk of the < A 2l+2−j ⊗ A 2l+2−k > satisfy: A particular solution to the inhomogeneous eqns. (5.107)-(5.109) on the diagonal j = k is suggested by the Hodge decomposition of the Laplacian operator whose Fourier transform reads: Ξ ∧ P + P ∧ Ξ = p 2 Id, and by the identities P ∧ P = 0, Ξ ∧ Ξ = 0: and all the other ⇀ N i,j vanishing. The particular solution thus found for the Fourier transform ⇀ N 1 , 1 of the propagator < A (2l+1) ⊗ A (2l+1) > involved in the computation of Wilson (2l + 1)-loops correlators turns out to be the so-called Moore-Penrose pseudoinverse 3 of the operator i * P which satisfies: where Π is the projector onto the subspace selected by the covariant gauge fixing condition. The propagators < A 2l+2−j ⊗ A 2l+2−k > might differ from the particular solution above by terms corresponding to general solutions of the homogeneous equations associated with eqns. (5.107) -(5.109) i.e. with all right hand sides vanishing. The general solutions of these homogeneous equations on the space of tempered currents can be proven to be forms with harmonic coefficients. Hence in the present case on R 4l+3 with Euclidean metrics the coefficient functions of these harmonic forms are harmonic polynomials of (x − y). In a first step we shall ignore such potential terms and consider the ⇀ N jk entirely given by eqns.(5.110) -(5.112). We will comment on them in paragraph 5.1.2 and prove that they do not contribute insofar as we are only concerned with the computation of correlators of (2l + 1)-loops. Performing the inverse Fourier transforms of eqns.(5.110) -(5.112) yields the explicit expressions of the < A j (x)A k (y) >. The only one explicitly needed in the following is: The gauge field theory is provided by the generating functional in presence of arbitrary source currents ⃗ J , which may be formally expressed by the following functional integral: } is a functional integration measure on some (unspecified) appropriate functional space. 
This measure is assumed to have all nice properties of usual gaussian integrals, and N is a normalization constant such that Z( ⃗ J = 0) = 1. The correlator of two (2l + 1)-loops γ 1 and γ 2 is provided by the quantity Let us represent the (2l + 1)-loop γ s by the (2l The functional integration leads to: In the integral in the exponential in the r.h.s. of eqn. (5.122), the term of degree (2l + 1) is made of: and: This yields two sorts of terms. 1. Those of the form: They turn out to be the linking of γ 1 and γ 2 since after injecting expression (5.114) in the last line of eqn. (5.125) one recognizes the generalized Gauss formula [19]. The latter is recalled in Appendix B providing a consistency check of all normalizations between the geometric and the "naive" approaches. However, at variance with the virtue of the geometric approach, it is important to notice in this respect that the values of the level k and of the loop charges e j are not quantized in the naive approach: their prescribed integer natures here are ad hoc and imposed "by hand". This derivation sheds some light on the relation between the generalized Gauss formula (5.125) (5.126) whose general solution is is an arbitrary closed current. Indeed the current η (2l+1) 2 is not unique since: (5.128) This reminds us of the definition of the Poincaré Homotopy: that encodes Poincaré Lemma (for R 4l+3 ). The degeneracy associated with the inversion of d is exactly the one due to gauge invariance since on R 4l+3 , and still by virtue of Poincaré's lemma, one has: We shall come back to this comment below when addressing the corresponding issue on topologically non trivial (4l + 3)-dimensional manifolds instead of R 4l+3 . 2. It also involves the self-linkings of (2l + 1)-loop γ 1 and of (2l + 1)-loop γ 2 by means of formulas very similar to eqn. (5.125), yet the integrals involved here are illdefined [25,26,27]. An extrinsic procedure is required to have them make sense as quantities defined modulo integers. Framing provides one such procedure in the present case, a given integer for each self-linking corresponding to a given framing choice. By contrast the zero regularization implemented in the geometric approach is less detailed as it does not prescribe any definite integer value to any given self-linking. Harmonic terms do not contribute So far we have ignored the presence of a harmonic contribution H(x−y) to the propagator < A (2l+3) (x)⊗A (2l+3) (y) >. At first sight one might be tempted to argue that the absence of such terms is implied by the cluster property meaning that < A (2l+3) (x) ⊗ A (2l+3) (y) >→ 0 when x − y → +∞. However this is i) beside the point ii) not necessarily true. i) It is beside the point insofar as we are interested in correlators of (2l + 1)-loops i.e. closed curves. Assuming that the propagator involves such a harmonic term H(x − y), let us generalize eqn. (5.125) bỹ The currents j (2l+2) 1,2 dualize (2l + 1)-loops so that e.g. j so that through integration by part, This suggests that the appropriate functional space on which the propagator has to be defined is a quotient modulo harmonic parts. Such a functional space has been studied in ref. [32]. By passing, eqn. (5.131) proves that harmonic contributions vanish even when j (2l+2) 2 dualizes a non compactly supported loop, such as a (2l + 1)-hyperplane. This property is expected to be particularly relevant in order to extend the present result to the sphere S 4l+3 . ii) The cluster property may not hold with another gauge fixing choice. 
See for instance the 3-dimensional case with axial gauge fixing. Impact of the gauge fixing choice Equation (5.125) was noticed to reproduce the generalized Gauss formula when the propagator < A (2l+3) ⊗ A (2l+3) > is given by eqn. (5.114). Another condition than the gauge fixing (5.100) would lead to a different propagator. Equation (5.125) would then provide an expression of the linking number different from the one obtained using the generalized Gauss invariant. For example in the three dimensional case, the "axial gauge" choice leads to a braiding interpretation of the linking number [29], rather than the solid angle interpretation reminded in Appendix B. Let us stress that all gauge fixing choices are equivalent ways of computing the generalized linking number. Indeed, the propagator in the covariant gauge and one with an alternative gauge choice differ by terms involving the derivative d whose actions on the closed currents dualizing (2l + 1)-loops vanish. In a Quantum Electro-Dynamical language, the latter are "conserved currents" which guarantees the gauge fixing independence of observables associated with these currents. Further issues arising on the S 4l+3 then on further non trivial manifolds As we already mentioned it, Chern-Simons field theory cannot provide a quantization of the level k nor of the charge q. This is due to the fact such a theory is developed over the non compact space R 4l+3 . It's only when going on a closed manifold such as a sphere that the quantization naturally appeared in the geometric approach. This suggest that to get such a quantization of k and q within the field theoretic framework, one should have to first define a field theory over a closed manifold M, starting with S 4l+3 . Since the CS lagrangian is not a globally defined 3-form, we anticipate two possible paths: one based on a partition of unity subordinated to a good covering of M and a second based on a polyhedral decomposition of M. 1. We could consider a polyhedral decomposition ∆ of M and start with field theories on each of the fundamental i.e. (4l + 3)-dimensional polyhedra ∆ α of the decomposition. Once this done on fundamental polyhedra we would have to see how things match on the (4l + 2)-dimensional boundaries ∆ αβ of these polyhedra leading to (4l + 2)-dimensional field theories on those boundaries. We would have to keep proceeding along this line till we reach the polyhedral elements of dimension 0 of the decomposition. This would be related to the short formula defining the integral of a DB class, as explained in [7]. 2. We could provide M with a partition of unity subordinated to a good covering U in such a way that each open set U α supports a field theory in R 4l+3 . Matching these theories in the (4l+3)-dimensional intersections U αβ would lead to considering extra field theories in these intersections then in the triple intersections U αβγ etc. The present point of view in which all supplemented field theories would be on R 4l+3 is a smoothing of the former polyhedral approach. This would be related to the long formula appearing in [7]. We would like to stress out that our procedure to compute the propagator of the abelian CS field theory on R 4l+3 exhibits a set of descent equations whose resolution is made simple because R 4l+3 has no cohomology (except in dimension 0). Our results might be extended to S 4l+3 since it shares the same cohomology properties for the concerned degrees. In the case of a general closed manifold, such has S 2l+1 × S 2l+2 , this would not be true. 
However, locally, that is to say with respect to a good covering and with a Euclidean metric on each open set, such a descent might still hold. Yet the gluing constraints on the whole manifold (e.g. via a partition of unity) would prevent the descent from being globally trivial. The simplest case to investigate would be S^3, and the first non-trivial one S^1 × S^2. Concerning the propagator itself, the fact that it coincides with the Gauss integral is once more only due to the fact that we are working on R^{4l+3}. One would expect a different expression for the propagator on a closed manifold. However, there exist expressions of the Gauss integral on spheres [31]. One could also try to mimic Gauss's zodiacus idea, at least in the case of S^3 identified with SU(2), replacing the notion of translations acting on R^3 by actions on SU(2). From the point of view of the two possible approaches previously mentioned, we can expect a collection of propagators, associated with the different field theories arising from the construction (for instance one for each polyhedron type of the decomposition of the closed manifold), but also a gluing rule explaining how these propagators "communicate". How this could be properly handled appears to be a very interesting problem, because it would provide an example of a field theory over a closed manifold. We have some hope that this can be done, because the theory we are dealing with is a topological one, and also because the geometric approach provides us with the final answer concerning Wilson observables. Conclusions and outlook The treatment of abelian Chern-Simons theory to generate link invariants introduced in [17] straightforwardly extends to the case of oriented closed (4l+3)-dimensional manifolds without torsion. We did not show here that the expectation values of our generalised Wilson lines are ambient isotopy invariants; this can easily be checked by extending what has been done in [17]. In the same way, it is possible to establish satellite relations for our generalised invariants. As for torsion, one could follow the approach developed for RP^3 in [18]. One can wonder whether the DB strategy applies more generally to abelian BF systems. Using Deligne-Beilinson cohomology techniques might also provide a way to study higher order systems, that is to say systems whose classical lagrangian involves DB products of more than two DB classes. In any of these cases one should expect homology and intersection to play the fundamental role. An important property is that the Hodge operation and the Berezin-Fourier transform commute. Berezin-Fourier transform for linear operators The Berezin-Fourier transform of a linear operator O acting on forms is defined correspondingly; from it, the (useful) Fourier transforms of the differential, its Hodge dual and the co-differential follow. The linking number can thus be given the following equivalent form L(γ_{2l+1}, γ′_{2l+1}) = (1/S_{4l+2}) ∮_{γ_{2l+1}} ∮_{γ′_{2l+1}} [e_{xy}; dx; dy] / ‖x − y‖^{4l+2} (6.148) and the interpretation of a global solid angle. We have used that the surface of a unit sphere S^n is given by S_n = 2π^{(n+1)/2} / Γ((n+1)/2). This is also the total solid angle in dimension n + 1. The three dimensional case In the three-dimensional case (l = 0), the linking number (6.148) is the famous Gauss invariant [20], L(γ, γ′) = (1/4π) ∮_γ ∮_{γ′} [e_{xy}; dx; dy] / ‖x − y‖^2. The unit vector e_{xy} = (x − y)/‖x − y‖ (6.151) defines a map e from S^1 × S^1 to the sphere S^2 whose degree is the linking number [33].
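To make the three-dimensional formula above concrete, the Gauss integral can be checked numerically by discretizing two linked closed curves. The sketch below is an illustration under assumed inputs: the particular curves, the number of segments and the midpoint-rule discretization are choices made here, not taken from the paper.

```python
import numpy as np

def gauss_linking_number(curve_x, curve_y):
    """Numerically evaluate the Gauss integral
    L = (1/4pi) * sum over segment pairs of (x - y) . (dx x dy) / |x - y|^3
    for two closed polygonal curves given as (N, 3) arrays of vertices."""
    dx = np.roll(curve_x, -1, axis=0) - curve_x       # segment vectors of curve 1
    dy = np.roll(curve_y, -1, axis=0) - curve_y       # segment vectors of curve 2
    mx = curve_x + 0.5 * dx                           # segment midpoints (midpoint rule)
    my = curve_y + 0.5 * dy
    r = mx[:, None, :] - my[None, :, :]               # pairwise separation vectors x - y
    cross = np.cross(dx[:, None, :], dy[None, :, :])  # dx_i x dy_j
    integrand = np.einsum('ijk,ijk->ij', r, cross) / np.linalg.norm(r, axis=2) ** 3
    return integrand.sum() / (4.0 * np.pi)

# Two circles forming a Hopf link: one in the xy-plane centred at the origin,
# one in the xz-plane centred at (1, 0, 0); both of unit radius.
t = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
gamma1 = np.stack([np.cos(t), np.sin(t), np.zeros_like(t)], axis=1)
gamma2 = np.stack([1.0 + np.cos(t), np.zeros_like(t), np.sin(t)], axis=1)

print(gauss_linking_number(gamma1, gamma2))  # ~ +/-1 up to discretization error
```

For the Hopf-link configuration chosen here the sum converges to ±1, the sign depending on the orientations of the two curves; the same discretization idea extends, in principle, to the higher-dimensional integral (6.148).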
The image of the map e is generically a surface called the zodiacus by Gauss, who also obtained a necessary condition for a point to be on its boundary: the tangent vectors to the two curves at the points x and y respectively and the vector e_{xy} are linearly dependent. In other words, these are points such that [e_{xy}; dx; dy] = 0 (6.152), and they do not contribute to the Gauss integral. This condition is only necessary, and not all solutions represent actual boundaries of the zodiacus. Two cases have to be distinguished: (1) the two curves are not linked and the zodiacus has at least one boundary, (2) the two curves are linked and the curve defined by the previous condition cannot be a boundary of the zodiacus, which is in fact the whole sphere. Some intuition on these matters can be given by the following particular case. We consider a basic configuration of two circles: γ, having radius one and centered at the origin, and γ′, having radius R greater than one. This configuration has linking number one when the circle γ′ intersects the disc defined by γ. In the extreme case where the radius R → ∞, the circle γ′ may be deformed to a straight line perpendicular to the plane containing the circle γ, completed with a half-circle at infinity whose contribution to the Gauss integral vanishes. We obtain the linking number by restricting the Gauss integral to the straight line, L(γ, γ′) = (1/4π) ∮_γ ∫_{line} [e_{xy}; dx; dy] / ‖x − y‖^2. A moment's thought shows that for y < 1 there is no boundary and the vector e sweeps the whole sphere once. On the contrary, for y > 1 the zodiacus has two boundaries at the values s = arcsin(y^{-1}) and s = π − arcsin(y^{-1}) that join at antipodal points for y_3 = ±∞. Higher dimensional cases As in the three-dimensional case, the unit vector e_{xy} spans on the sphere S^{4l+2} the zodiacus associated with the two surfaces γ_{2l+1} and γ′_{2l+1}. The possible boundaries of the zodiacus necessarily correspond to stationary points of e_{xy} upon infinitesimal displacements δx (resp. δy) on the surface γ_{2l+1} (resp. γ′_{2l+1}), that is to say δe_{xy} = 0, where δe_{xy} = [δ(x − y) − e_{xy} (e_{xy} · δ(x − y))] / ‖x − y‖ (6.160). If the surfaces γ_{2l+1} and γ′_{2l+1} are parameterized by (possibly local) coordinates s_i, t_j respectively (i, j = 1 ... 2l+1), then δx = Σ_i a_i ∂x/∂s_i and δy = Σ_j b_j ∂y/∂t_j, where a_i and b_j are two families of infinitesimal coefficients. As a consequence of the stationarity condition, the vector e_{xy} is thus a linear combination of the 4l+2 tangent vectors ∂x/∂s_i and ∂y/∂t_j. Hence the oriented solid angle formed by two simultaneous displacements on both surfaces vanishes at the boundary of the zodiacus: [e_{xy}; ∂x/∂s_i; ∂y/∂t_j] = 0 (6.162). We shall now check the normalisation of the linking number by considering a simple choice of linked surfaces. We choose a (2l+1)-sphere centered at the origin and an orthogonal (2l+1)-hyperplane containing the origin. They are given respectively by γ_{2l+1}: x_1^2 + ⋯ + x_{2l+2}^2 = 1, x_{2l+3} = ⋯ = x_{4l+3} = 0 (6.163) and γ′_{2l+1}: y_1 = ⋯ = y_{2l+2} = 0 (6.164), the latter completed with a half-sphere at infinity whose contribution to the Gauss integral vanishes. The ball bounded by the sphere γ_{2l+1} and the hyperplane γ′_{2l+1} intersect at the origin, so we have a configuration with linking number equal to one, and a moment's thought shows that the zodiacus is the whole (4l+2)-sphere.
2012-07-05T14:26:43.000Z
2012-07-05T00:00:00.000
{ "year": 2012, "sha1": "dbfa79643d4d6fc5a295f950268200e0c2bc7fcc", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1207.1270", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "dbfa79643d4d6fc5a295f950268200e0c2bc7fcc", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Mathematics" ] }
235196427
pes2o/s2orc
v3-fos-license
Uveitis reactivation following recombinant zoster vaccination Purpose Describe three cases of uveitis reactivation following immunization with recombinant zoster vaccine (RZV). Observations One patient developed reactivation of previously controlled multifocal choroiditis within one week of receiving RZV, requiring treatment with systemic corticosteroids. Two patients with previously controlled anterior uveitis developed new anterior segment inflammation after RZV; both were treated with topical corticosteroids and systemic antiviral therapy. Conclusion and importance Uveitis recurrence is an infrequent but serious potential ocular side effect of recombinant zoster vaccination. Introduction Herpes zoster is a viral infection caused by varicella zoster virus (VZV) reactivation. There are two vaccinations available for immunization against herpes zoster: zoster vaccine live (ZVL, Zostavax), a live attenuated vaccine available since 2006, and recombinant zoster vaccine (RZV, Shingrix), a recombinant subunit vaccine available since 2017. The most recent Centers for Disease Control guidelines recommend healthy adults 50 years and older undergo vaccination with RZV, which is administered as a two-dose series with 2-6 months between doses. Post-licensure safety monitoring of RZV by the Vaccine Adverse Event Reporting System found a reporting rate of 0.4/100,000 for inflammatory eye disease, with reported events including herpes zoster keratitis and keratoconjunctivitis, two cases of primary herpes zoster iridocyclitis and one report of pre-existing ophthalmic herpes zoster. 1 The recombinant zoster vaccine contains a novel adjuvant, AS01 B , which stimulates a potent immunogenic response that may be responsible for long-lasting cell-mediated immunity. 2 The increased immunogenicity of the adjuvanted vaccine is one advantage of RZV over ZVL, 3 but it raises the potential for immune-mediated events, particularly in those with known inflammatory disease. Here we present three cases of patients with reactivation of their previously controlled uveitis after receiving RZV vaccination. Case 1 A 57-year-old Caucasian woman with a history of bilateral multifocal choroiditis controlled on methotrexate 10 mg po weekly presented with an acute decrease in vision in the right eye (OD) and new metamorphopsia in the left eye (OS) five days after receiving her first RZV vaccine. She also reported upper arm swelling at the injection site, chills, malaise, subjective fever, and tinnitus that started 24 hours after the RZV injection. On examination, we measured count fingers vision eccentrically OD (baseline acuity 20/40) and 20/20-2 vision OS with correction. Intraocular pressure was within normal limits in both eyes (OU). Pupils were equal, round, and reactive to light, without evidence of a relative afferent pupillary defect. On slit-lamp exam we noted a quiescent anterior segment OU, an occasional anterior vitreous cell OD, and no vitreous haze in either eye. In the right eye we saw stable posterior segment findings including peripapillary atrophic scarring with temporal thinning of the optic nerve, confluent circular punched-out atrophic macular scars with a small spared foveal region, and vessel attenuation. In the left eye we saw a linear yellow scar temporal to the fovea and a new yellow chorioretinal lesion adjacent to this scar ( Fig. 1A-D). 
By fundus autofluorescence we saw stable hypoautofluorescence in the area of prior retinal scars OD and a new area of hyperautoflurorescence at the site of the new lesion OS (Fig. 1E-H). Despite the decrease in vision OD, there was no change on macular ocular coherence tomography (OCT) compared to previously identified atrophy and scarring. On macular OCT OS we noted a new outer retinal lesion temporal to prior residual scar ( Fig. 2A-D). The patient was started on 60 mg of oral prednisone daily and continued methotrexate. On follow-up examination one week later the patient endorsed decreased metamorphopsia in the left eye. Visual acuity was improved to 20/250 OD and remained at 20/20-2 OS. Ocular examination remained stable OD and the new lesion noted at prior examination OS was less elevated (Fig. 2E). The patient underwent a prednisone taper over two months without development of recurrent inflammation; however, she developed a secondary choroidal neovascular membrane at the edge of the new scar requiring treatment with intravitreal bevacizumab. Case 2 A 69-year-old male with a history of idiopathic recurrent bilateral anterior and mild intermediate uveitis presented with sudden onset headache and blurred vision in the right eye one month after receiving his second RZV vaccination. He had finished a course of topical prednisolone acetate 1% three months ago OD and two months ago OS, and his uveitis was quiescent on examination two months prior. On exam, his best corrected vision was 20/50 OD and 20/30 in the unaffected OS. Intraocular pressures were within normal limits in both eyes. On exam we noted several foci of anterior stromal keratitis, stellate keratic precipitates and trace anterior chamber cell OD; examination OS was unremarkable. On Pentacam optical densitometry of the cornea, there was loss of clarity in the regions of stromal keratitis (Fig. 3A). The patient was started on valacyclovir 1000 mg three times daily. Three days later the anterior stromal keratitis resolved and there was improvement of anterior chamber cell to 0.5+. Prednisolone acetate 1% drops two times daily OD was initiated for two weeks and the patient completed a twoweek course of valacyclovir 1000 mg three times daily followed by 500 mg daily as prophylaxis. At follow up one month after initial presentation his vision improved to 20/30 OD, the stromal keratitis remained resolved, the number of keratic precipitates was reduced and anterior chamber inflammation was quiescent. Corneal densitometry demonstrated improvement (Fig. 3B). Case 3 A 70-year-old female with a history of recurrent unilateral anterior uveitis and corneal neovascularization with lipid keratopathy OS presented two weeks after receiving her first RZV with mildly decreased vision left eye. She had completed treatment for presumed viral keratouveitis six months prior with a ten-day course of oral valacyclovir 1000 mg three times daily and a tapering course of topical loteprednol etabonate 0.5%. Four months earlier, the patient's uveitis had been quiescent off treatment, but following RZV she developed 1+ anterior chamber cell and new keratic precipitates in the left eye. The patient was treated with oral valacyclovir 1000 mg three times daily followed by 1000 mg daily and topical prednisolone acetate 1% with return to quiescence six weeks later. Discussion The RZV is an adjuvanted subunit vaccine for immunization against herpes zoster. 
Currently, the Advisory Committee on Immunization Practices recommends RZV vaccination over ZVL in immunocompetent individuals over the age of 50 years. RZV is preferred over the prior live attenuated vaccine since it is more efficacious particularly in older populations, its efficacy is longer lasting, and it may be safely administered to immunocompromised patients for whom a live vaccine is contraindicated. 2,3 RZV contains recombinant VZV glycoprotein E and the adjuvant AS01 B , which is composed of two immunostimulants: a toll-like receptor 4 agonist (3-O-desacyl-40-monophosphoryl lipid) and a saponin derived molecule QS21 (from the South American tree Quillaja saponaria). The adjuvant induces an innate immune-cell mediated response, which enhances glycoprotein E antigen presentation to T cells and induces increased production of antibodies and CD4 + T cells specific to the VZV glycoprotein E. 2,3 Results from two large randomized placebo-controlled phase 3 trials of the RZV found potential immune-mediated diseases occurred at a similar rate between those receiving RZV and controls at all time points. Similarly, subjects with preexisting possible immune-mediated diseases did not demonstrate an increased risk for a new possible immunemediated process or exacerbation of their prior disease after RZV vaccination compared with controls. Ocular autoimmune diseases were a pre-defined reportable adverse event in both trials; uveitis was only recorded in 1 of 14,645 subjects receiving RZV. 4 Post-licensure surveillance of RZV in the Vaccine Adverse Event Reporting System found a reporting rate for inflammatory eye disease of 0.4/100,000 with limited reports related to uveitis (two cases of primary herpes zoster iridocyclitis, one report of presumed reactivation of pre-existing ophthalmic herpes zoster without specification of affected ocular structures). 1 A query of the VAERS database in October 2020, just prior to our submission of the cases included in this report, identified a report of severe unilateral inflammation treated with Kenalog and oral prednisone by a retina specialist; additional details of this case are unknown. The predominant adverse ocular events reported in VAERS include herpes zoster ophthalmicus, keratitis and conjunctivitis; it is unknown whether any of these patients had preexisting inflammatory ocular disease, and the possibility that these cases actually occurred from a lack of vaccine efficiency cannot be excluded. Review of the literature finds only limited prior cases of uveitis follow RZV vaccination. Heydari-Kamjani et al. 5 reported a case of presumed subclinical sarcoidosis that subsequently presented with the development of uveitis starting 4 days after vaccination with RZV. Unlike our cases, this patient had no prior history of ocular inflammation. A recent report characterized a case of acute retinal necrosis following RZV. 6 As RZV does not contain infectious virus, this most likely represented a failure of efficacy in boosting immunity to VZV. There have been two reports of recurrent keratitis after vaccination with RZV. One patient had a history of controlled herpetic stromal keratitis who developed reactivation 3 weeks following the RZV 7 ; the other had a remote history of herpes zoster ophthalmicus who presented with stromal keratitis and ulceration a week following receipt of the second RZV dose. 
8 Here, we present a spectrum of cases with uveitis activation in patients with previously controlled ocular inflammation following vaccination with the RZV. Viral DNA from previous zoster infection has been detected in corneal tissue up to eight years following the initial clinical presentation with herpes zoster ophthalmicus. 9 One possible mechanism for post-RZV ocular inflammation is that the cell-mediated response to RZV vaccination reacts with this residual viral DNA which results in reactivation of viral keratitis or potentially keratouveitis. We propose this as the possible cause for recurrent inflammation in our third case. The mechanism for reactivation of uveitis in our other two cases is less clear. We hypothesize that in case 1, the patient with longstanding multifocal choroiditis who was controlled on immunosuppression, the upregulation of humoral and cell-mediated responses following vaccination may have resulted in reactivation of immune cells directed against uveal antigens in addition to the desired response against the VZV glycoprotein. The uveitis reactivation in case 2 is consistent with a viral process, particularly given the keratitis. The underlying etiology may be a failure of vaccination since the patient's prior uveitis presentation, which was a bilateral process without keratitis that was well controlled with short courses of topical corticosteroids, was not consistent with a viral etiology. Despite the possibility of uveitis reactivation following RZV, RZV vaccination is an important component of preventative health. The presented cases should not deter patients and physicians from recommending the RZV vaccine; RZV is efficacious in preventing herpes zoster and postherpetic neuralgia. Rather, this report highlights the importance of ensuring primary care providers are aware of a patient's history of immune-mediated eye disease. We recommend that patients with a history of uveitis discuss plans for RZV vaccination with their ophthalmologist and primary care provider in advance, so appropriate postvaccination ocular monitoring occurs. Conclusions Reactivation of uveitis is an uncommon complication of RZV vaccination. Financial support This work was supported in part by an unrestricted grant from Research to Prevent Blindness, and a National Eye Institute Vision Research Core Grant (P30 EY016665) to the University of Wisconsin-Madison Department of Ophthalmology and Visual Sciences. Patient consent Consent to publish the case report was not obtained. This report does not contain any personal information that could lead to the identification of the patient. Declaration of competing interest No conflicting relationships exists for any author.
2021-05-27T05:23:34.510Z
2021-05-03T00:00:00.000
{ "year": 2021, "sha1": "2c2af975681ace7e975f5c337ea03370149e269a", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.ajoc.2021.101115", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "2c2af975681ace7e975f5c337ea03370149e269a", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
245203921
pes2o/s2orc
v3-fos-license
Metal cracks detection based on circular patch microstrip antenna Cracks in metal can be produced by many factors, such as external loads, physical processes, and chemical processes, such as alkali attack and corrosion. Structural health monitoring (SHM) is very important in maintaining the reliability of a building. Considering that the ultimate goal of a building health monitoring system is to provide sensory information that can facilitate decision making regarding the feasibility of components, microstrip antennas have been shown to be able to detect cracking in metals through changes in their characteristics. This paper discusses the capability of a microstrip antenna with a circular patch having dual-frequency operation to detect cracks in metal. Introduction Metal is widely used in the construction of infrastructure as a framework to withstand the load of a building. Its role is very important in maintaining the reliability of a building. Structural health monitoring (SHM) refers to the process of implementing damage detection and characterization strategies for engineering structures. Damage is defined as a change in the material or geometric properties of a structural system, including changes in system connectivity, that affects system performance [1]. SHM is needed to ensure the robustness of a building. Its application is expected to reduce losses caused by the failure of a building, whether due to ageing or to disasters, which is especially relevant for Indonesia, a disaster-prone country. Cracks can be defined as unintentional discontinuities in a structural material. In general, cracks are the result of material failure. Cracks can be produced by many factors, such as external loads, physical processes, and chemical processes, such as alkali attack and corrosion [2][3][4]. Considering that the ultimate goal of a building health monitoring system is to provide sensory information that can facilitate decision making regarding the feasibility of components, it is necessary to develop new crack sensors that can not only detect cracks but also provide quantitative information about them. Microstrip antenna sensors can detect crack length and propagation with sub-millimetre resolution [5][6]. This type of sensor has many advantages, such as high resolution, small size, light weight and low cost, and it makes use of advanced microwave detection technology to facilitate wireless detection and signal processing. Microstrip antennas have been widely developed as sensors to detect cracks in metal. Changes in the structure of the metal correlate with a frequency shift in the antenna characteristics [7][8][9]. In this article, we discuss a circular patch microstrip antenna with an inset feed for detecting cracks in metal. The frequency shift and the minimum return loss are the parameters to be analyzed. Methods Microstrip antennas can be used to detect cracks that occur in metal. The characteristics of the microstrip antenna are highly correlated with the shape of the conductors in the patch and the ground plane. The metal crack detection system uses a microstrip antenna whose ground plane is replaced with the metal to be tested. Microstrip Antenna A microstrip antenna generally has three parts, namely the patch, the substrate and the ground plane. The patch and ground plane are located at the top and bottom of the antenna, between which there is a substrate. The type of microstrip antenna used in this study has a patch with a circular shape. The substrate uses an FR4 composite material.
The patch and ground plane materials used are copper conductors. The microstrip antenna that will be used to detect cracks in metal has a circular patch shape. The feeder used is an inset-feed type with an impedance of 50 ohms [10]. The dimensions of the antenna can be seen in Figure 1. The patch has a radius of 42.3 mm, and the feeder has a length of 36.7 mm and a width of 3.1 mm, where 24.78 mm of the feeder length is inset into the patch with a 1 mm gap on each side. The ground plane, on the opposite side of the patch and separated from it by the substrate, has a thickness of 1.6 mm. The ground plane has a length and width of 93.2 mm and 92.2 mm, respectively. The antenna has an operating frequency range according to its characteristics. The designed antenna has two working frequency ranges; these values are obtained from the return loss. At the low frequency the antenna has a centre frequency of 1000 MHz, and at the high frequency it has a value of 2803 MHz. The return loss simulation results can be seen in Fig. 2. Crack Detection Cracks in metal were tested using the microstrip antenna with the metal placed as its ground plane. The metal tested has a width of 500 mm and a length of 93.2 mm, matching the length of the antenna. Fig. 3 shows the configuration for detecting metal cracks using the microstrip antenna, where the crack is located outside the patch. Fig. 4 shows the configuration in which the crack is placed under the patch, at the centre of the circular patch. The metal crack detection test using the microstrip antenna is based on simulation with the finite element method (FEM). Results and Discussion Cracks that occur in metal change the structure of the metal. In the test, the metal is placed as the ground plane of the microstrip antenna. Referring to the characteristics of the microstrip antenna, the change in structure will correlate with changes in the electrical properties of the antenna. Crack Detection Inside Patch The test is carried out on metal cracks placed under the patch. Based on the simulation results, the operating centre frequency of the antenna shifts to a value greater than that of the antenna without a crack. Fig. 5 shows the simulated frequency shift caused by the crack under the patch. The artificial crack in the test metal has a width of 1 mm with a length varying from 2 mm to 22 mm in increments of 4 mm. The frequency shifts for all cracks are positive relative to the centre frequency of the antenna without a crack, for both the low and the high frequency. The frequency shift at the high frequency is larger than the shift at the low frequency. At the low frequency, the simulated shift changes only slightly, from 4 MHz to 6 MHz. The highest frequency shift was obtained for cracks with lengths of 2 mm, 10 mm and 22 mm, while the lowest frequency shift occurred for the crack with a length of 14 mm. At the high frequency, the 2 mm crack gives the highest shift, 19 MHz, and the lowest shift, 6 MHz, is obtained for the 14 mm crack. The antenna centre frequency is obtained from the minimum return loss value. Fig. 6 shows the minimum return loss for the antenna without a crack and for the antennas in the crack tests.
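As a rough cross-check of the low-band resonance quoted above, the dominant mode of a circular patch can be estimated with the standard cavity-model formula f_r ≈ 1.8412·c/(2π·a_e·√ε_r) (see, e.g., Balanis, Antenna Theory). The sketch below assumes a relative permittivity of about 4.4 and a substrate height of 1.6 mm, typical FR4 values that are not stated explicitly in the paper, so it is an order-of-magnitude check rather than a reproduction of the FEM simulation.

```python
import math

def circular_patch_resonance(radius_m, eps_r, h_m):
    """Estimate the dominant TM11-mode resonance of a circular microstrip patch
    using the cavity model with the fringing-field (effective radius) correction."""
    c = 299_792_458.0                      # speed of light, m/s
    a = radius_m
    a_eff = a * math.sqrt(
        1.0 + (2.0 * h_m / (math.pi * a * eps_r))
        * (math.log(math.pi * a / (2.0 * h_m)) + 1.7726)
    )
    k11 = 1.8412                           # first zero of J1'(x)
    return k11 * c / (2.0 * math.pi * a_eff * math.sqrt(eps_r))

# Assumed values: patch radius from the paper, typical FR4 permittivity and height
f_est = circular_patch_resonance(radius_m=0.0423, eps_r=4.4, h_m=0.0016)
print(f"Estimated TM11 resonance: {f_est / 1e6:.0f} MHz")  # roughly 0.95-1.0 GHz
```

With these assumed material values the estimate lands close to the simulated 1000 MHz low-band centre frequency; the 2803 MHz band corresponds to a higher-order mode and is not captured by this simple formula.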
Based on the simulation results, it is found that there is an increase in the minimum return loss value of the antenna when there is a crack in the test metal. The highest return loss value at the low frequency occurs for the crack with a length of 14 mm, with a return loss value of -13.4186 dB. At the high frequency, the highest increase in return loss occurs for the crack with a length of 6 mm, at -18.3199 dB. Crack Detection Outside Patch The simulation results for metal cracks located outside the antenna patch show a lower frequency shift than for the cracks inside the patch. All crack test results for the frequency shift are positive. The simulation results for the frequency shift at the low and high frequencies can be seen in Fig. 7. The frequency shift at the high frequency is larger than at the low frequency. At the high frequency, the frequency shift does not change much, with values in the range of 8 MHz to 9 MHz. At the low frequency, the highest shift occurs for the crack with a length of 2 mm, which has a frequency shift of 8 MHz. The lowest frequency shift is obtained for the crack length of 10 mm, with a shift value of 2 MHz. The minimum return loss obtained for the antenna used in the crack tests on metal has a better value at the low frequency compared to the antenna without a crack. The lowest minimum return loss value was obtained for the crack length of 10 mm, with a value of -29.009 dB. This value is 86% lower when compared to the antenna without a crack. Conclusions A circular patch microstrip antenna with dual-frequency operation at 1000 MHz and 2803 MHz has been simulated to detect cracks in metal. Based on the simulation results, a crack positioned under the patch gives better sensitivity than a crack outside the patch, especially at the higher frequency. The crack with a length of 2 mm has the maximum frequency shift at the higher frequency, with a value of 19 MHz.
2021-12-16T17:33:46.995Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "e54463b3703faf0722a3143a78e78b2a89cf4320", "oa_license": "CCBY", "oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2021/107/e3sconf_icdmm2021_05003.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "45c86e9b17f60da72f35b30434c7bcfad8fa30f7", "s2fieldsofstudy": [ "Engineering", "Materials Science" ], "extfieldsofstudy": [] }
253535547
pes2o/s2orc
v3-fos-license
Preparation of Nano/Microcapsules of Ozonated Olive Oil in Hyaluronan Matrix and Analysis of Physicochemical and Microbiological (Biological) Properties of the Obtained Biocomposite Hydrogels, based on natural polymers, such as hyaluronic acid, are gaining increasing popularity because of their biological activity. The antibacterial effect of ozone is widely known and used, but the instability of the gas severely limits its application. Ozone entrapment in olive oil, by its reaction with unsaturated bonds, allows for the formation of stable, therapeutically active ozone derivatives. In this study, we obtained an innovative hydrogel based on hyaluronic acid containing micro/nanocapsules of ozonated olive oil. By combining the biocompatible polymer, with its high regenerative capacity, with biologically active ingredients, we obtained a hydrogel with regenerative properties and a very weak inhibitory effect against both the bacterial commensal skin microbiota and pathogenic Candida-like yeasts. We assessed the stability and rheological properties of the gel, determined the morphology of the composite using scanning electron microscopy (SEM) and the particle size by the dynamic light scattering (DLS) method. We also performed attenuated total reflectance Fourier transform infrared (FTIR-ATR) spectroscopy. The functional properties, including the antimicrobial potential, were assessed by microbiological analysis and in vitro testing on the HaCat human keratinocyte cell line. The studies proved that the obtained emulsions were rheologically stable, exhibited an antimicrobial effect and did not show cytotoxicity in the HaCat keratinocyte model. Introduction The skin is the second largest organ of the body of vertebrates (after the gut). It has a complex structure and multiple functions, most importantly the isolation of the internal environment of the organism from the exterior, in particular the mechanical defense against pathogenic microorganisms. Just as for any other organ, the proper functioning of the skin depends on its condition [1,2]. A disrupted skin barrier weakens the defense mechanisms, compromises the integrity and leads to changes in pH, dehydration and inflammation; it poses a serious threat to the health of the individual [3]. Thus, the materials used in providing additional protection to the skin or supporting the wound healing process present an important therapeutic potential. In recent years, many types of dressings have been developed. Figure 1A,B shows the scanning electron microscope images of the Hyal/O3 film surface at 50,000× and 10,000× magnification, respectively. The FTIR-ATR spectra in the spectral range of 750-4000 cm−1 for the Hyal and Hyal/O3 films are presented in Figure 3. The band in the 3600-2980 cm−1 region can be attributed to the hydrogen-bonded O-H and N-H stretching vibrations of the N-acetyl side chain. A group of overlapping bands of moderate intensity is observed at approximately 2910 cm−1, due to the C-H stretching vibrations. The bands at 1620 and 1410 cm−1 can be attributed to the asymmetric (C=O) and symmetric (C-O) stretching modes of the planar carboxyl groups in the hyaluronate [38].
Figure 4 presents the changes in the flow curves of the hydrogels stored for 30 days, while Table 1 shows the Ostwald de-Waele rheological model parameters fitted to the obtained curves. The values of the apparent viscosity at specific shear rates are included in Table 2. In Table 1, K denotes the consistency coefficient, n the flow behaviour index and τ0 the yield stress; parameters in columns denoted with the same letters do not differ statistically at the confidence level p < 0.05. The flow curves of the examined hydrogels and nanoemulsions show a deviation from Newtonian behaviour that is typical of pseudoplastic, shear-thinning fluids, which corresponds with the literature data [39][40][41][42][43]. The shear stress values of both examined samples (Hyal and Hyal/O3) are very close to each other and show a similar pattern of flow curves (Figure 4). In both cases, changes related to the storage time of the samples were noticed: a gradual decrease of shear stress occurred, which is evidence of a decrease of the apparent viscosity. This is also confirmed by the parameters of the flow curves, because the consistency coefficient decreased with the storage time of the samples (Table 1). The differences in viscosity between the Hyal and Hyal/O3 samples on the same measurement day are small or statistically insignificant, especially at higher shear rates (Table 2). However, the changes caused by the storage time turned out to be statistically significant. Comparing the consistency coefficient values for the Hyal gel, a decrease of this parameter was observed on days 15 and 30, compared to day 1. A similar trend was observed for the Hyal/O3 emulsion. Frequency Sweep The dependences of the elastic and loss moduli vs frequency are shown in Figure 5.
Analysis of the Particle Size Distribution and Particle Charge The obtained results show enormous differences between the samples. The Hyal reference samples contain particles of the order of 4000 nm, while in the Hyal samples supplemented with ozone the particles are about four times smaller (1060 nm). Similarly significant differences are seen in the zeta potential results, which is −55 mV for Hyal and −81 mV for the Hyal/O3 samples (with a measurement error of less than 1 mV). Microbiological Analysis-The Effect of Hyal/O3 on the Microbial Growth Out of the examined 53 skin commensal bacterial strains, 17 showed a slight growth increase (Table 3, Figure 6A,B) as a result of the application of the Hyal/O3 foils. The growth of five bacterial strains was inhibited (Table 4, Figure 6C,D). In the case of the remaining 31 isolates, no effect was observed. Moreover, out of the 30 Candida strains, the growth of only five was inhibited (Table 4), while no effect was observed for the remaining 25 strains. The control foils (Hyal alone) caused no effect: neither an increase nor an inhibition of bacterial and Candida growth was observed in vitro (Table 4). The statistical analysis showed that the differences in the growth inhibition zones were statistically significant only between S. aureus and Candida (H = 15.88; p = 0.0012), while for the varying origin of strains, the results differed significantly (H = 20.78; p = 0.0009) between the type strains and those isolated from skin lesions (z = 3.4; p = 0.01) and between the type strains and the one isolated from the eye (z = 3.67; p = 0.004). The Assessment of Cytotoxicity in the HaCat Keratinocyte Model Both hyaluronic acid-based hydrogels (with and without the ozonated olive oil nanocapsules) were very well tolerated by the HaCat keratinocytes, and dilutions of 1:25 and greater did not significantly affect the number of viable cells in the culture (Figure 7).
Only in the case of the highest tested concentration of both hydrogels (1:10 dilution) did the incubation lead to a statistically significant reduction of cell viability: 80% and 52% of viable cells for the hyaluronic hydrogel (Hyal) and the hyaluronic hydrogel enriched in the ozonated olive oil (Hyal/O3), respectively (Figure 7). No signs of cytotoxicity, such as abnormal morphology, floating detached cells or cell debris in the cell culture, were observed, so the decreased cell number is likely a result of a slowed-down proliferation rate rather than cell death. The hydrogel that contained ozonated olive oil nanocapsules (Hyal/O3) exerted a stronger growth-inhibiting effect than the pure hyaluronic acid hydrogel (Hyal). The cell culture medium with the highest concentration of hydrogels (1:10) had an opaque, cloudy appearance and was slightly more viscous than a standard medium, so the observed inhibition of proliferation could possibly be attributed to impeded gas or nutrient diffusion. Discussion The SEM microscopy showed that spherical nanocapsules, sized 50-100 nm, containing the active substance can be observed, as well as single capsules with dimensions of 150-200 nm (Figure 1a).
This image confirms the presence of nanocapsules in the produced matrix. Figure 1b was performed at a longer exposition time, which caused the swelling and cracking of the capsules. We can observe bigger (swollen) structures, sized 100-1000 nm. Some capsules cracked during the analysis under the influence of the electron beam, showing their core-shell structure (core-ozonated olive oil, shell-hyaluronic acid). Similar structures have been observed in the case of composites containing ozonated olive oil in chitosan [44]. The Hyal spectrum shows a well-defined band at 269 nm, corresponding to the carbonyl groups of the hyaluronic acid molecule. The lack of a band shift may indicate the lack of interactions between the carboxyl group of Hyal and the ozonated oil. When such an interaction is present, we usually observe a bathochrome shift [45]. The spectra differ only in the higher absorbance of the Hyal/O 3 sample, relative to the Hyal, which results from the lower transparency of the Hyal/O 3 sample. The lack of differences may also indicate that the entire amount of the ozonated oil has been enclosed in capsules, which causes a reduction in the transparency and an increase in the absorption in the entire studied spectrum. The addition of ozonated olive oil nanocapsules did not significantly affect the structural changes of the polymer (Figure 3). At the Hyal / O 3 spectrum, we can see bands corresponding to the oil structure and an increase in the intensity of the absorption bands characteristic for oils. We can also observe vibrations corresponding to the methyl group in the range from 1350 to 1150 cm −1 that are the valence vibrations, corresponding to C-H in the -CH 3 Then several vibrations with a maximum at approx. 2952, 2921 and 2855 cm −1 come from valence -C-H vibrations from groups -CH 3 , CH 2 , respectively, in triglycerides [47]. The decrease in the viscosity of the hydrogels and nanoemulsions over time, and thus their thinning, may be a result of the weakening of the intermolecular interactions and indicate a slow destabilization of the formed structure. The obtained parameters are extremely important in the assessment of the properties of the gels and the emulsions intended for application on the skin, because they are related to a specific shear rate. According to the literature, cream spooning and pouring occurs at a shear rate of 10-100 s −1 , while rubbing the cream on the skin occurs at a shear rate of as much as 1000 s −1 [41,48,49]. For comparison, Table 2 shows the viscosity values of the tested gel and the Hyal-based nanoemulsions for the individual shear rates. As can be seen from the presented data, the apparent viscosity of the preparations decreased with the increase of the shear rate, which proves their thinning. Moreover, when comparing the obtained results with the literature data [48,50], we observed that the apparent viscosity values of the tested systems, compared to the typical creams applied to the skin, were much lower and more similar to the rare, delicate, semi-solid systems, than the dense, heavy creams. The advantage of such systems is undoubtedly the ease of rubbing on the skin even at the low applied shear rates. 
This is also confirmed by the literature data, according to which the rheological properties of the product, especially parameters such as the yield stress and the apparent viscosity in the lower shear rate ranges, can be correlated with the empirical, subjective assessment of skin sensations (application and distribution of the preparation on the skin). Moreover, the apparent viscosity determined for the upper range of the shear rate (γ̇ = 500 s−1) enables the final evaluation of spreading the sample on the skin, which increases with decreasing viscosity [49,51,52]. By analyzing the obtained data (Figure 4), it was found that the tested gels and nanoemulsions showed a low yield stress. This is an important parameter in the assessment of the quality of gels and emulsions that can be used in the production of medical ointments. A yield stress that is too high indicates a heavy consistency of the product and difficulties in its distribution on the skin. From the consumer's point of view, this is an undesirable feature, as it may lead to skin irritation, which in turn discourages regular use. Moreover, emulsions with high values of yield stress are characterized by a lower efficiency [49,51,52]. An important parameter characterizing the nanoemulsions intended as creams and various types of healing ointments is thixotropy, visible as hysteresis loops. The desired physical feature, to which the consumer pays close attention, is the ease of application of the preparation to the skin. It is therefore important that the emulsion can return to its original structure after the applied stress is removed. The semi-solid product is applied to the skin by the force of the touch and transforms into a liquid; the emulsion must then quickly re-bond and restore the semi-solid form, i.e., be thixotropic [50]. In the tests conducted in this study, a small hysteresis field was observed (Table 1). In the case of low-viscosity systems, i.e., systems that are more semi-solid than solid, this can be considered an advantage, because a too high thixotropy would indicate a significant thinning of the product during shear and a slow return to its original state after the shear forces have subsided, and such a behavior would make the application and absorption of the emulsions into the skin difficult. It should also be noted that the reconstruction of the shear-damaged structure, which in the case of our emulsion took place very quickly, may indicate its high stability and no disintegration into separate phases. The frequency sweep test is performed in the LVR range and is designed not to destroy the structure, so that the measurements can provide information about the intermolecular forces present in the material [53]. An increase of the moduli values with increasing frequency is observed over the whole frequency range. The Hyal and Hyal/O3 gels showed a predominance of viscous properties over elastic ones in the low frequency range of 0.1-4.0 Hz, and at higher frequencies a crossover of the moduli was observed, with G′ > G″ above 4 Hz. Such a course of the curves is typical for concentrated solutions, showing a tendency towards a more solid-like (gel-like) behavior at higher frequencies [54,55]. Only a slight decrease in the mechanical moduli (G′ and G″) was observed with the storage time of the Hyal gel and the Hyal/O3 emulsion. Our best guess is that hyaluronic acid in solutions without additives creates large flat structures.
Meanwhile, when it interacts with the olive droplets, its structures are adapted to the shape and size of the olive droplet, which they tightly cover. Therefore, the particles of Hyal/O 3 can be many times smaller than the particles of the hyaluronic acid itself. Increasing the negative surface potential of such shells is also logical, since −81mV is the real results of the negative surface potentials of hyaluronic acid and olive oil. The mechanism of interaction between these components remains unknown. More negative surface probably was formed due to the ozone oxidation. A similar behaviour was observed in the PLA materials after the photo-oxidation, due to the UV irradiation [56]. It was already proved that under the influence of ozone, the surface becomes functional, as a result of which the oxygen-containing functional groups are included of material interface [57]. The functional group oxidation is complicated and probably requires complicated pathways [58]. Authors also confirmed that the ozonation of solid materials can increase the specific surface area, because ozone reacts with the physical structure of the materials, enlarging the pore size and creating new pores. An increase in the pore structure was seen at the micropore level. A few preliminary and pilot studies have been published on the effect of ozonated oils on pathogenic bacteria [59][60][61] and Candida yeasts [62][63][64][65], or both [66,67], as well as the possibility of their topical applications [33]. None of the studies published so far have dealt with the possible impact of ozonated oils on commensal skin microbiota. The results presented by various authors are either contrary, or show varying, sometimes disputable effects against microorganisms. For example, Pietrocola et al. [59] examined the effect of ozonized olive oil against Gram-positive and Gram-negative oral and periodontal pathogens. They observed a moderate antiseptic effect of this preparation, definitely lower, as compared to the classic chlorhexidine preparation. Silva et al. [61], based on the study involving the methicillin-resistant S. aureus, observed the growth inhibition caused by ozonated oils and suggested that their use is promising in the treatment of skin infections. Radzimierska-Kaźmierczak et al. [60] demonstrated a weak inhibitory effect of ozonated olive oil against E. coli, S. aureus, C. albicans and Aspergillus brasilensis, suggesting it to be a promising raw material for the cosmetics and pharmaceutical industries. The inhibitory effect against the skin microbiota and pathogenic Candida, observed in this study can also be assessed as weak to moderate. What is interesting, is that out of the 53 bacterial strains tested, the growth of 17 was slightly increased in vitro ( Figure 6A,B), with the highest share represented by Micrococcus luteus (n = 10). No similar observations have been reported so far, therefore the ability of some bacterial strains to overcome the O 3 treatment and to increase their growth should be subjected to further, more thorough, examinations. What is interesting, is that the growth of two other strains of M. luteus (isolated from hands) was inhibited, while the remaining five strains did not react to the application of the Hyal/O 3 foils. Serio et al. [68] published a study examining the in vitro antibacterial effects of ozonated sunflower seed oil and observed a satisfactory growth inhibition of both Gram-negative and Gram-positive strains of bacteria, including M. luteus. 
As shown in Table 3, the Hyal/O 3 foils caused a growth inhibition of the commensal bacteria and Candida yeasts, but the mean values of growth inhibition zones were higher for the bacteria than for Candida and the differences in the growth inhibition zones were statistically significant between Candida and S. aureus. In our study, the highest mean values of growth inhibition zones were observed for both type strains of S. aureus (methicillin-resistant -MRSA and methicillin-susceptible -MSSA) strains, the same as in the study by Silva et al. [61], who observed a high activity of ozonated oils against both MSSA and MRSA strains. Similar observations to our experiments were made by Nocuń et al. [66], in a study on the activity of ozonated olive oil against nine species of pathogenic and potentially pathogenic bacteria and fungi, including S. aureus, E. coli and C. albicans. They observed that the effective concentration of ozonated olive oil was much higher against Candida (1.6% vol) than for S. aureus (0.2% vol), and that the mean growth inhibition zone diameter was nearly two times smaller in Candida (14 mm) than in S. aureus (26.7). The decreased susceptibility of the Candida strains towards antifungal agents can be associated with the structure and composition of their cell wall, which is two-layered and is composed of a β-glucan-chitin skeleton, which is responsible for the strength and shape of the cell wall [69]. It contains mannans which have a low permeability and porosity thus affect the resistance of the cell wall to the antifungal agents [69]. In terms of different Candida species, no statistically significant difference was observed (H = 3.79; p = 0.28). Similarly, Monzillo et al. [65] studied the effect of ozonized gel against four Candida species and observed the antimycotic activity of this preparation, but without clear differences between the different species. Moreover, Berenji et al. [63] observed a decreasing susceptibility of the Candida species to ozonized olive oil, in the following order C. krusei > C. glabrata > C. albicans. Furthermore, Nocuń et al. [66] also observed a significantly lower susceptibility of Gram-negative bacteria (E. coli) to ozonized olive oil than the Gram-positive S. aureus. This observation is also similar to the one in our study, but here we observed no growth inhibition of neither of the two type strains of E. coli. Nocuń et al. [66] attributed the lower susceptibility of Gram-negative bacteria to the ozonized preparations and antibiotics, to the presence of the outer lipopolysaccharide membrane and its decreased permeability [70]. Previous studies on hyaluronate hydrogels revealed a significantly impeded diffusion of various solutes in such gels, and the effect was proportional to the increase of the hyaluronate concentration and the molecular size of the solutes, as well as the presence of crosslinking agents [McCabe and Laurent 1975; Ogston and Sherman 1961; Kodavaty and Deshpande 2021]. For example, the diffusion of serum albumin and glucose in the hyaluronate solution (0.8 mg/mL) were reduced by 20-and approximately three fold, respectively [71], whereas more a recent study reported a 1.5-3.5 fold slower diffusion of fluorescein as a tracer molecule in 5% hyaluronate hydrogels crosslinked with divinyl sulfone, depending on the pH of the solution [72]. The diffusion of oxygen in hyaluronate gels (solidified with 6% agarose) was decreased by 7% [73]. 
These data suggest that the diffusion of nutrients (e.g., lipids), as well as of waste metabolites, could be impaired in our cell cultures with the highest concentration of hydrogels (1:10). In particular, the large proteins present in the serum-containing media, such as the growth factors necessary for sustaining cell proliferation, could have sub-optimal access to the keratinocytes, which resulted in a slower growth rate compared to the control cultures. However, our two-dimensional cell culture model has some limitations: in this model the cells have access to the nutrients, growth factors and oxygen only from the apical surface, whereas under physiological conditions in skin, the cells of the viable epidermis (stratum basale) receive the necessary nutrients through the capillary circulation. Therefore, the topical application of hyaluronate hydrogels (with or without ozonated olive oil) would not affect the nutrition or respiration of the keratinocytes in their microenvironment. Hyaluronan facilitates wound healing processes and supports keratinocyte functions [74]. Our results are similar to those presented in other studies on non-crosslinked hyaluronic acid hydrogels, which did not show cytotoxicity in the HaCaT model (cell viability remained 70% or greater) [74,75]. However, the concentrations tested in the latter study were much lower than in our experiments (0.01-0.5% vs. 1:10-1:100 in our study). Hyaluronan promotes keratinocyte proliferation and differentiation into corneocytes, which are terminally differentiated keratinocytes present in the stratum corneum, the external protective layer of the skin [76,77]. The formation of corneocytes is an important process, crucial for the development of the epidermal barrier. The molecular mechanism of the hyaluronan action involves binding to the surface glycoprotein receptor CD44 on the keratinocytes and the subsequent activation of the transcriptional program necessary for both proliferation and differentiation [78]. The activation of CD44 by hyaluronan induces the expression of cyclin D, involucrin, profilaggrin and cytokeratin 10 [76,77]. Hyaluronan and ozonated lipid mixtures have not been tested in HaCaT cultures, but some studies demonstrated that Ozodrop® preparations, containing ozonated sunflower oil liposomes with the addition of hypromellose, stimulated HaCaT proliferation and the expression of antimicrobial peptides, such as calprotectins and calregulin C [79]. Ozodrop® efficiently inhibited the expression of the proinflammatory cytokine CCL20 in keratinocytes, which might support regeneration and the alleviation of local inflammation. Regenerative processes were further stimulated by the elevated expression of migration markers, such as the matrix metalloproteinases MMP2 and MMP9, by Ozodrop®, and by the acceleration of wound healing in a scratch wound assay with Ozodrop® gel [79]. These results confirm the positive impact of ozonated lipids on skin physiology and suggest further directions for experimental analyses of the Hyal/O3 hydrogel to verify its activity as well. Determination of the Ozone Content The ozone content in the vegetable oils used and in the obtained products was determined by the peroxidation (peroxide) number, according to the procedure described in the European Pharmacopoeia [80].
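The Ph. Eur. procedure itself prescribes the reagents, sample size and end-point detection; purely as an illustration of the underlying calculation, the sketch below computes a peroxide value from iodometric titration readings. All numbers are hypothetical example values, not measurements from this study.

```python
def peroxide_value(v_sample_ml: float, v_blank_ml: float,
                   thiosulfate_normality: float, sample_mass_g: float) -> float:
    """Peroxide value in milliequivalents of active oxygen per kg of oil.

    Generic iodometric-titration formula: PV = (V - V_blank) * N * 1000 / m.
    The actual Ph. Eur. procedure additionally prescribes reagents,
    sample size and end-point detection.
    """
    return (v_sample_ml - v_blank_ml) * thiosulfate_normality * 1000.0 / sample_mass_g

# Hypothetical readings: 12.4 mL titrant for the sample, 0.2 mL for the blank,
# 0.01 N sodium thiosulfate, 5.0 g of ozonated oil
print(peroxide_value(12.4, 0.2, 0.01, 5.0))  # -> 24.4 meq O2/kg
```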
Preparation of Ozonated Olive Oil Nanoemulsion The nanoemulsion was prepared by placing a mixture of 5.0 mL of water and 5.0 mL of ozonated olive oil in an ultrasonic cleaner (Polsonic, Warsaw, Poland) and sonicating it for 30 min. Hyaluronic Acid Hydrogel Preparation 1000.0 g of a 2% solution was prepared by weighing out 20.0 g of hyaluronic acid on an analytical balance (Radwag, Białystok, Poland) and then supplementing it with 980.0 mL of deionized water. The resulting suspension was stirred using a magnetic stirrer (Heidolph RZR 2020, Heidolph Instruments GmbH & Co. KG, Schwabach, Germany) until a clear gel was obtained. Sample Preparation The previously obtained emulsion (10 mL) was slowly dropped into 500 g of the hyaluronic acid gel, cooled down to 5 °C, while homogenizing (Polytron PT 2500 E, Kinematica AG, Malters, Switzerland). A stable emulsion was obtained. For the SEM, FTIR-ATR and UV-VIS analyses, 100 g portions of gel were poured onto sterile 12 cm diameter polypropylene dishes and dried at room temperature to obtain the foils. SEM Microscopy The size and morphology of the nanoparticles thus prepared were analysed using a JEOL 7550 (Akishima, Tokyo, Japan) scanning electron microscope. Prior to the measurement, the prepared samples were sputter-coated (K575X Turbo Sputter Coater, Quorum Technologies Ltd, Lewes, UK) with a 20 nm chromium layer to increase their conductivity. UV-VIS UV-Vis absorption spectra of the obtained composite were analysed in the range of 200-800 nm using a Shimadzu 2101 (Shimadzu, Kyoto, Japan) scanning spectrophotometer. FTIR-ATR The FTIR-ATR spectra of the composite were recorded using a MATTSON 3000 spectrophotometer (Madison, WI, USA) in the range of 4000-700 cm−1 with a resolution of 4 cm−1. Rheological Measurements Rheological measurements were performed using a RheoStress RS 6000 (Thermo Scientific, Karlsruhe, Germany) rotary rheometer equipped with a plate-plate P 35 Ti geometry. The temperature of the baseplate was 25.0 ± 0.1 °C. The measurements were carried out on freshly prepared samples (1 day) and after 15 and 30 days of storage in the fridge at 8 °C. On the measurement day, the sample was removed from the refrigerator and incubated at 25 °C for 1 h. The measurements were run in duplicate. Flow curves: the shear rate was raised from 0.1 to 300 s−1 over a 10 min period, followed by a decrease of the shear rate from 300 to 0.1 s−1 over 10 min. The obtained flow curves were described by the Herschel-Bulkley rheological model: τ = τ0 + K·γ̇^n, where τ is the shear stress (Pa), τ0 the yield stress (Pa), K the consistency index (Pa·s^n), γ̇ the shear rate (s−1) and n the flow behaviour index. Oscillation stress sweep test: the stress was increased from 0 to 300 Pa in 40 logarithmic steps at a constant frequency (1 Hz). Frequency sweep test: the frequency was increased from 0.01 to 30 Hz at 1 Pa deformation, within the range of linear viscoelasticity.
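As an illustration of how such flow curves can be fitted to the Herschel-Bulkley model, the sketch below uses SciPy's curve_fit on shear-rate/shear-stress data. The data points and starting values are invented for demonstration and are not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def herschel_bulkley(gamma_dot, tau0, K, n):
    """Shear stress tau (Pa) = tau0 + K * gamma_dot**n."""
    return tau0 + K * np.power(gamma_dot, n)

# Hypothetical flow-curve data: shear rate (1/s) and measured shear stress (Pa)
gamma_dot = np.array([0.1, 1.0, 5.0, 10.0, 50.0, 100.0, 200.0, 300.0])
tau = np.array([12.1, 14.0, 18.2, 21.5, 35.0, 45.8, 62.3, 75.1])

# Bounds keep the parameters physical: tau0, K >= 0 and 0 < n <= 2
popt, _ = curve_fit(herschel_bulkley, gamma_dot, tau,
                    p0=[10.0, 1.0, 0.5],
                    bounds=([0.0, 0.0, 0.0], [np.inf, np.inf, 2.0]))
tau0, K, n = popt
print(f"tau0 = {tau0:.2f} Pa, K = {K:.2f} Pa*s^n, n = {n:.2f}")
```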
Statistics The statistical analysis was performed in Statistica 12.5 (StatSoft, Tulsa, OK, USA) software, employing one- and two-factor analysis of variance and Duncan's test for checking the significance of the differences at p < 0.05. For the microbiological tests, the statistical analysis was performed in Statistica v.13 (TIBCO Software, Palo Alto, CA, USA). The descriptive statistics (mean, standard deviation, coefficient of variation) for the growth inhibition were calculated. A non-parametric Kruskal-Wallis test was applied to compare the effects of the Hyal/O3 and control foils, as well as the effect of Hyal/O3 against the different species of bacteria and Candida yeasts. The significance level for all tests was predetermined as p < 0.05. Dynamic Light Scattering (DLS) and Zeta Potential - Analysis of the Hydrodynamic Diameter, Polydispersity and Particle Charge For this analysis, we used the same original emulsions as those used for the foil preparation. The emulsion was diluted 10 times in distilled water and left on a magnetic stirrer for 12 h until it was fully dispersed. The particle size dispersion and zeta potential were measured using a Zetasizer Nano Series ZS (Malvern, UK). In such a measurement, one experimental cycle comprises at least 12 repetitions of the single analysis. For each sample, three full experimental cycles were performed and the average value from all experiments was analyzed. Swab samples were collected from various regions of the human skin and body, i.e., skin lesions of various types, the under-eye region, cheeks, hands, back, eye, ear, mouth, throat, tonsils, vagina and anus. Sputum was also collected for the isolation of potentially pathogenic microorganisms. The samples were inoculated on general and selective media for the isolation of bacterial strains of the commensal skin microbiota, pathogens and potential pathogens, as well as Candida species. Trypticase Soya agar (Biomaxima, Lublin, Poland) was used to isolate the bacterial members of the commensal skin microbiota (incubation for 24-48 h at 37 ± 1 °C), Mannitol Salt agar (Biomaxima, Lublin, Poland) was used for the isolation and preliminary identification of Staphylococcus spp. (yellow and pink colonies after incubation for 24-48 h at 37 ± 1 °C), Baird Parker agar was used for the isolation and identification of Staphylococcus aureus (grey to black colonies with a clear halo after incubation for 24-48 h at 37 ± 1 °C), Columbia Agar with Sheep Blood Plus (Oxoid, Cheshire, UK) and UTI agar Plus (Oxoid, Cheshire, UK) were used for the isolation and identification of the type strains of Escherichia coli and Staphylococcus aureus, whereas Sabouraud Dextrose agar (Graso Biotech, Jabłowo, Poland) and CandiSelect agar (Bio-Rad, Marnes-la-Coquette, France) were used for the isolation and preliminary identification of the Candida-type yeasts (incubation for 3-5 days at 35-37 ± 1 °C). Following the incubation and preliminary identification, the selected bacterial and yeast colonies were subcultured and subjected to microscopic observations of the Gram-stained preparations. The systematic position of the 53 bacterial strains was verified by MALDI-TOF (matrix-assisted laser desorption/ionization-time of flight) mass spectrometry, and the systematic position of the 30 yeast strains was determined using the CandiFast test kit (ELITechGroup, Puteaux, France). Antimicrobial Activity of Hyaluronic Acid with Ozone The antimicrobial activity of the examined preparations was assessed on a total of 83 microbial isolates. These included 53 bacterial isolates (four type strains and 49 isolates from the human skin) of 12 different species (Table 5) and 30 pathogenic Candida isolates (two type strains and 28 isolates from various regions of the human body) of 12 different species (Table 6). Microbial isolates were transferred to a sterile saline solution to obtain 0.5 McFarland suspensions, then streaked onto Mueller-Hinton agar (Biomaxima, Lublin, Poland).
Both the sole Hyal and the Hyal/O3 foils were sterilized under UV light for 30 min. Then, 10 × 10 mm squares were cut with a surface-sterilized scalpel and applied onto the surface of the bacterial and yeast cultures. The cultures were incubated at 35 °C for 18-24 h (in the case of bacteria) or for 3-5 days (in the case of yeasts). Then, the results were read by observing whether the growth of the microorganisms was affected. In cases of growth inhibition, the diameters of the zones around the foil fragments were measured. Because the applied foils were square, two perpendicular diameters were read and the final result was expressed as the mean of the two readings (mm). All experiments were performed in triplicate. The Assessment of Cytotoxicity The spontaneously immortalized human epidermal keratinocytes (HaCaT cell line) were seeded on a 96-well cell culture plate (1000 cells per well) in RPMI 1640 medium (Corning, USA), supplemented with fetal calf serum (EurX, Gdańsk, Poland) to a final concentration of 10%, 2 mM stable glutamine and an antibiotic-antimycotic mixture: penicillin 50 I.U./mL, streptomycin 50 µg/mL and amphotericin B 250 ng/mL (from Biowest, Lo-Reninge, Belgium). Hyaluronic acid hydrogels (with or without ozonated olive oil) were diluted in the cell culture medium in 1:10, 1:25, 1:50 and 1:100 proportions, and then 200 µL of these mixtures were added to each well with cells. The non-treated cells (NT control) received the standard cell culture medium. Following the 48 h incubation, the cell viability was determined, based on the ATP content of each well, using the luminometric CellTiter-Glo test (Promega, Germany). The cell numbers were quantified based on a calibration curve prepared with known numbers of HaCaT cells. Two independent experiments were performed, with tetraplicates in each experimental group. Conclusions Nanocapsules of ozonated olive oil in hyaluronic acid, sized 50-100 nm, were successfully obtained. The addition of ozonated olive oil nanocapsules did not significantly affect the structure of the polymer. The emulsions had good rheological stability over time. The DLS study showed that hyaluronic acid in solutions without additives creates large flat structures, while when it interacts with the olive droplets, its structures adapt to the shape and size of the olive droplet. The examined Hyal/O3 foils exhibited a very weak inhibitory effect against both the bacterial commensal skin microbiota and pathogenic Candida-type yeasts. These results indicate that this formula can be treated as a safe ingredient of cosmetic preparations. Data Availability Statement: The data presented in this study are available upon request from the corresponding author.
2022-11-16T16:52:55.438Z
2022-11-01T00:00:00.000
{ "year": 2022, "sha1": "2c828b2d1d7b72773c5cd2da4e3fdaf7d523f896", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1422-0067/23/22/14005/pdf?version=1668333920", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8cf3593487e366183ce234e837874c41d7acce94", "s2fieldsofstudy": [ "Materials Science", "Medicine", "Chemistry" ], "extfieldsofstudy": [ "Medicine" ] }
245171387
pes2o/s2orc
v3-fos-license
The Effectiveness of Virtual Reality on Anxiety and Performance in Female Soccer Players With the increased use of technology, relaxation interventions are finding their way into technology devices like virtual reality head-mounted displays (VR HMDs). However, there is a lack of evidence on the efficacy of VR relaxation interventions to reduce anxiety in athletes and on how that is portrayed in their movement patterns. The purpose of the current study was to examine how a VR relaxation intervention affected perceived anxiety levels and penalty kick performance of female soccer players. Thirteen female soccer players took five penalty kicks in baseline, stress-induced, and VR relaxation conditions. Perceived levels of anxiety, self-confidence, mental effort, heart rate (HR), accelerometry of the lumbar spine and thigh, and performance in each condition were obtained. Results indicated that the VR intervention significantly reduced cognitive anxiety and somatic anxiety from baseline (p = 0.002; p = 0.001) and stress (p < 0.001; p < 0.001) with large effect sizes (Kendall's W = 0.72; 0.83). VR significantly increased self-confidence from baseline (p = 0.002) and stress (p = 0.001) with a large effect size (Kendall's W = 0.71). Additionally, all participants felt that VR helped them relax. Mental effort was significantly higher in the stress condition compared to that in baseline (p = 0.007) with a moderate effect size (Kendall's W = 0.39). Peak acceleration and performance were not significantly influenced by stress or VR. This study serves as an initial step to evaluate VR relaxation interventions on performance in female soccer players. Introduction Coaches, athletes, and practitioners are seeking strategies to optimize performance, and with the ever-evolving world of technology, virtual reality (VR) has seen an increase in use as a means to that end. VR has been used to enhance perceptual-cognitive skills by training athletes to detect informational cues related to the game [1]. VR has also been utilized for injury rehabilitation in each phase of the rehabilitation process, tending to the needs of the participant during their recovery. Lastly, VR has been used for relaxation in sport to help athletes properly manage potential stressors by teaching them coping strategies with VR [1]. However, more information is needed on VR as a relaxation intervention for athletes under stress and competitive anxiety. Stress and competitive anxiety are areas that have received considerable attention in athletics. Stress occurs when there is an imbalance between the physical and mental demands placed on the athlete and the response capability under those conditions, with failure to meet the demands having important consequences [2]. More specifically, psychological stress is "a relationship between the person and the environment that is appraised by the person as taxing or exceeding [their] resources and endangering [their] well-being" [3]. A contributor to psychological stress is competitive anxiety [4]. Competitive anxiety is a trait and/or state-like response to a stressful sport-related situation which results in a range of cognitive appraisals, behavioral responses, and/or physiological arousals. When it becomes too high, the athlete can become over-aroused, exceeding the mental capacity to process stimuli, resulting in an increase in stress and a decline in performance [4].
Martens and colleagues [5] separate competitive anxiety into cognitive anxiety (e.g., negative thoughts and worry) and somatic anxiety (e.g., physiological signs of nervousness and tension). It has been demonstrated that negative coping control is associated with competitive anxiety [6]. Therefore, it is important for athletes to modulate psychological stress and anxiety by developing or discovering positive coping mechanisms for optimal performance and well-being [7]. Poor coping mechanisms indicate poor adaptability to stress. A negative reaction to stress and anxiety interferes with motor coordination, reduces flexibility, and increases cognitive and somatic anxiety, and the shifting of attention leads to a narrowing of the visual field, causing athletes to miss vital cues [2,8]. The literature has demonstrated a negative relationship between anxiety and how it manifests physically in high-anxiety situations [9][10][11]. A reduction in performance due to increased anxiety has been shown in soccer players taking penalty kicks, where participants performed significantly worse in the high-anxiety condition compared to the low-anxiety condition [11]. Sekiya and Tanaka [10] investigated the kinematic performances of novice table tennis players under high psychological pressure and found significant alterations in their swing and speed, contributing to a decrease in performance. Together, these studies show that increased stress and anxiety can negatively affect a person's movement in multiple sports, indicating that stress can negatively impact not only psychological performance but also physical performance. Because of this, it is important to consider both components to better understand the overall effect of stress and anxiety on athletic performance. In order to assist with the increasing psychological stress and anxiety of athletes, various interventions have been implemented, including mindfulness, relaxation imagery, deep breathing, and muscle relaxation [7,12]. Now, with the increased use of technology, relaxation interventions are finding their way into technology devices and have become incredibly accessible with mobile apps (e.g., Calm and Headspace) and virtual reality head-mounted displays (VR HMDs). Due to the heightened immersive effects, VR HMDs can result in a better imagery experience and a greater reduction of anxiety compared to traditional relaxation techniques [13]. VR has been used in other non-athletic populations (i.e., intensive care unit (ICU) patients, teachers, and nurses) for anxiety reduction, and a positive relaxation effect was found [14,15]. Exposure therapy with VR has been used to reduce stressful reactions to specific stimuli, helping individuals develop better coping mechanisms and apply these strategies in their professional settings [15]. For patients in an ICU, an audiovisual imagery experience was utilized to reduce the sensation of stress and anxiety and put the patients at ease [14]. Furthermore, Liu and Matsumura [16] found that VR relaxation interventions provided acute relaxation in NCAA Division I student athletes. Researchers found that 74.4% of participants thought the VR intervention helped them relax or reduce anxiety, and 90% of the participants would use the intervention again [16]. This study did not analyze any performance factors and was solely focused on relaxation.
Based on their post-VR survey, 67.5% of participants believed that the VR intervention could be beneficial in helping increase their performance when used before games, which opens a window for potential use of VR in sport that needs to be further explored with quantitative measures [16]. Thus, the current study aimed to provide evidence on whether VR relaxation could be beneficial prior to competition, or whether the benefit is merely a subjective perception among athletes. Unfortunately, there is a lack of evidence on the efficacy of VR relaxation for anxiety reduction in athletes and on how it translates to their movement patterns in performance settings. This study utilized accelerometer data as the physical component to examine whether induced stress and/or the VR intervention affected participants' technique/motor pattern and to track any changes across the duration of the study. Because this study is novel and serves as an initial step in this direction of research, accelerometer data were utilized for their ease of use and simplicity in measuring biomechanical changes on a soccer field. Combining accelerometer data with heart rate (HR) and psychological measures will further bridge the sport psychology and biomechanics literature to examine the overall effects of stress and anxiety. To extend the previous literature, the purpose of this study was to examine how VR relaxation techniques affect perceived anxiety levels in female soccer players and how the potential changes translated to their movement patterns during baseline, stress-induced, and VR relaxation penalty kick conditions. It was hypothesized that cognitive and somatic anxiety, perceived mental effort, and HR would be higher during the stress-induced block compared to those of the VR and baseline blocks, while self-confidence would be lower. It was also hypothesized that accelerometer data from the anterior thigh would be greater in the stress-induced block compared to those in the VR and baseline blocks and would result in lower performance scores. Research Design Participants performed under baseline, stress, and VR conditions in a repeated measures design. Each participant served as their own control and completed all conditions. The current study followed similar procedures as those in Wilson et al. [11] and Wood et al. Demographic Questionnaire A demographic questionnaire obtained information about the participants, including items such as age, race, soccer history, and previous experience with VR. The VR items asked about the use of VR as a relaxation technique and its frequency. Mental Readiness Form-3 (MRF-3) The MRF-3 measures cognitive anxiety, somatic anxiety, and self-confidence [17]. This instrument has three questions on a Likert scale from 1 to 11. Cognitive anxiety was assessed by rating thoughts about performance from 1 "being worried" to 11 "being not worried". Somatic anxiety was assessed by rating physical manifestations from 1 "being tense" to 11 "being not tense". Self-confidence was assessed from 1 "being confident" to 11 "being not confident". Krane [17] developed the MRF-3 to address concerns about whether the terms in the MRF-Likert were truly bipolar opposites and compared the results to those of the Competitive State Anxiety Inventory-2 (CSAI-2), with correlations of 0.76 for cognitive anxiety, 0.69 for somatic anxiety, and 0.68 for self-confidence.
Krane [17] concluded that the MRF-3 is a suitable tool in the field of sport anxiety research due to its brevity and the simplicity of completing the questionnaire, and that it could be more advantageous to use than the CSAI-2 when facing time constraints. This form of the MRF has been used in a similar study by Wilson and colleagues [11] when assessing attentional control theory (ACT) in soccer players. Rating Scale for Mental Effort (RSME) The RSME assessed the mental effort participants invested in the penalty kick tasks. It is a one-dimensional scale on a vertical axis with a range from 0 to 150 [18]. There are descriptors on the scale with corresponding numbers to act as verbal references at 0 = not at all effortful, 75 = moderately effortful, and 150 = very effortful. Participants mark the scale according to their perceived effort on the task. This scale is a reliable (0.88 in laboratory settings and 0.78 in real-life settings) and valid measure of mental effort [19,20]. Kinematics Kinematic data of the task were collected using DELSYS Trigno Avanti (DELSYS Incorporated, Natick, MA, USA) sensors to record accelerometer data. Two sensors were used, with one placed at the fifth lumbar/first sacral vertebrae, between the posterior superior iliac spines. The second sensor was placed on the anterior thigh at the midway point between the superior aspect of the patella at the knee and the anterior superior iliac spine at the hip of the kicking leg. EMGworks (DELSYS Incorporated, Natick, MA, USA) software collected data at 150 Hz. Accelerometer data were utilized to examine the physical effects that the stress inducer and VR intervention had on the participants, to gather a holistic view of the effects of stress and VR. The biomechanical literature has examined the effects of stress on skill execution but has yet to be combined with the findings of the sport psychology literature. Thus, this study aimed to combine these areas of the literature to examine how the physical and psychological components interacted with each other under the induced stress and VR intervention. Virtual Reality Head-Mounted Display (VR HMD) and Application The relaxation intervention was played using the Oculus Quest (Facebook Technologies, LLC, Menlo Park, CA, USA) with a virtual relaxation session from the Liminal VR application (Liminal VR, Abbotsford, Victoria, Australia). The four-minute campfire scene from the calm category was used. Heart Rate (HR) HR was obtained utilizing a Polar H10 sensor with a Pro Strap that sent data to the Polar Beat app via an iPad (Polar Electro Oy, Kempele, Finland). For each participant, HR was recorded prior to each penalty kick and then averaged to represent the overall HR for each condition. Gilgen-Ammann, Schweizer, and Wyss [21] tested the RR interval measurements of the Polar H10 sensor against those of the 3-lead ECG Holter monitor (Schiller Medizintechnik GmbH, Baar, Switzerland) that is referred to as the gold standard for HR data collection. RR intervals are the intervals between two consecutive R-waves in an electrocardiogram, and the signal quality of these RR intervals is what matters for measurement devices quantifying HR and HR variability [21]. Results showed that the Polar H10 was as accurate as the Holter monitor during low- and moderate-intensity activities and even had a higher RR interval signal quality than the Holter monitor during intense activities. Both systems had less than a 2% difference between each other in 97.1% of measured RR intervals and had a high correlation with each other (r = 0.997); thus, the Polar H10 monitor provides an accurate measurement of HR [21].
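To make the RR-to-HR relationship concrete, the sketch below computes mean heart rate and one common HR-variability index (RMSSD) from a series of RR intervals. The RR values are invented for illustration, and RMSSD is not a measure reported in this study.

```python
import numpy as np

def hr_and_rmssd(rr_ms: np.ndarray) -> tuple[float, float]:
    """Mean heart rate (bpm) and RMSSD (ms) from a series of RR intervals."""
    hr = 60000.0 / rr_ms.mean()                           # 60,000 ms per minute
    rmssd = float(np.sqrt(np.mean(np.diff(rr_ms) ** 2)))  # successive-difference HRV index
    return hr, rmssd

rr = np.array([820.0, 815.0, 830.0, 790.0, 805.0])  # hypothetical RR intervals (ms)
print(hr_and_rmssd(rr))  # mean RR = 812 ms -> about 73.9 bpm
```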
Commitment Check The Igroup Presence Questionnaire [22] checked the participants' commitment. The questionnaire measures the sense of presence that individuals experience in VR. It consists of 14 questions divided among the three subscales of spatial presence, involvement, and experienced realism. Questions were answered on a 7-point Likert scale from −3 to +3 (i.e., "fully disagree" at −3 and "fully agree" at +3, with 0 being neutral). Participants with a score lower than zero on the commitment check were excluded. The reliability of the spatial presence subscale is 0.80, the involvement reliability is 0.76, and the experienced realism subscale has a reliability of 0.68; overall, the IPQ has a reliability of 0.85, with all reliabilities using Cronbach's alpha [22]. Igroup.org [22] conducted two studies, along with a factorial analysis, to determine the IPQ's reliability for measuring VR presence; these can be referenced from their website. Based on Liu and Matsumura [16], four additional questions were added to assess how relaxing the participants found the VR session. The questions were answered with a 5-point Likert scale ranging from "not at all" to "very much so." Penalty Kick Scoring Wood and Wilson's [23] scoring zones were replicated in this study. The goal box was divided into twelve zones, with each half of the goal consisting of six zones of 61 cm, starting from an "origin" in the center (0 cm) and moving out to each post. Shots hit in the zones further from the central origin of the goal reflected shots that were further from the goalkeeper's reach, which gave the participants higher scores. If the goalkeeper made a save, the participant did not receive any points. If the participant completely missed, the participant lost five points. The zones were set with points in increments of five; thus, the four corners of the goal box had the highest points available. To determine where the shot was hit, a researcher marked where each shot was placed on a score sheet, and then scores were totaled for each penalty block.
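A minimal sketch of this zone-based scoring follows. Since the exact point values per zone are not listed here, the sketch assumes that zone k, counted outward from the central origin, is worth 5k points; that assumption is labelled in the code as well.

```python
def penalty_points(shot_offset_cm: float, outcome: str) -> int:
    """Score one penalty kick under the zone scheme described above.

    Assumption (the text gives the 61 cm zone width and the 5-point
    increments, but not the base value): zone k, counted outward from
    the central origin (k = 1..6), is worth 5 * k points; a save
    scores 0 and a complete miss scores -5.
    """
    if outcome == "miss":
        return -5
    if outcome == "save":
        return 0
    zone = min(int(abs(shot_offset_cm) // 61) + 1, 6)  # zone 1..6 from center
    return 5 * zone

# A shot placed 350 cm from the center (near a post) falls in zone 6:
print(penalty_points(350.0, "goal"))  # -> 30
```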
Soccer Equipment The equipment used for the penalty kick sessions, namely a goal box (7.32 m × 2.44 m) on a game-regulation size field, a size 5 ball, and the distance to the penalty mark (11 m), was in accordance with NCAA regulations [24]. The penalty kick was taken within the goal box at the designated penalty spot on the field. Procedure Before the study began, Institutional Review Board (IRB) approval was confirmed. Participants signed the informed consent and completed the demographic questionnaire. Next, the participants were familiarized with the DELSYS Trigno Avanti sensors, the HR monitor, and the penalty kick procedure. The participants were allowed a warm-up period of five minutes consisting of their normal warm-up routine before games/practices. The participants then completed the three blocks of five penalty kicks against a goalkeeper. The first block of penalty kicks served for baseline data. The second block represented the high-stress situation, and the third block represented the VR intervention in the high-stress situation. One minute of rest was given between each penalty kick, and a five-minute seated rest in a shaded area was provided between conditions. Researchers were the only ones present during the study with the participant. Because the study was completed on an intramural public field, some spectators were present as they were passing by or on adjacent fields in the area, and total seclusion could not be achieved. Each block is explained further below. Penalty Kick Block 1: Baseline The participants were told that the main purpose of these penalty kicks was to ensure the HR monitor and the Trigno Avanti sensors were working accordingly, aiming to relieve any anxiety in this first session. They first completed the MRF-3 questionnaire and then proceeded to take the five penalty kicks. There was a 60 s break between each penalty shot. Once the participants completed the five penalty kicks, they completed the RSME questionnaire and then rested seated in a shaded area for five minutes. Penalty Kick Block 2: Stress Induced with No VR Intervention After rest, the participants were read a script that specified that they were to successfully score as many of the five shots as they could against the goalkeeper. They were told that their scores would be analyzed and totaled after the session and there would be no way to know their score, therefore, to do the best they could. They were told that their scores would be compared to those of the other participants and ranked on a scoreboard if they made the top five, and to imagine that their performance would be communicated to their coach to help with playing time decisions. This method of inducing stress has been utilized in a similar study and found effective, thus it was used for the present study [18]. Participants then completed the MRF-3, penalty kicks, and RSME form in the exact same manner as in the first penalty block, and then rested seated for another five minutes. Penalty Kick Block 3: Stress Induced with VR Intervention After rest, the stress-inducing script was reiterated to the participants to maintain the high-stress situation. Before they completed the penalty shots, the participants watched the relaxation intervention seated with the VR HMD. When finished, the participants completed the MRF-3, the five penalty kicks, and the RSME as in the previous two blocks. The participants then completed the commitment check. Once all forms were completed, the participants were debriefed and told that their performance did not have any effect on playing time; they were only told this to induce a stress response, and they were then thanked for their participation. Data Analysis Demographic information was analyzed with descriptive statistics. Kinematic analysis of the kicking movement was examined with a movement magnitude measure (peak acceleration) and a temporal (time to peak acceleration) variable. Using the lumbar sensor, movement initiation was determined as 3 standard deviations from a baseline steady state (i.e., standing still) and visually checked by a researcher. Peak accelerations were identified for both sensors within a 1.5 s window following movement initiation. Time to peak acceleration was used as the temporal measurement and calculated from movement initiation to the timestamp of peak acceleration for each sensor. Since the data were not normally distributed, Friedman tests were carried out to compare the MRF-3 subscales, RSME, HR, penalty kick score, and peak acceleration across the three kicking conditions. Dunn-Bonferroni post hoc tests were used, and effect size was measured by Kendall's W.
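As an illustration of this analysis pipeline, the sketch below implements the onset rule (baseline mean + 3 SD), the 1.5 s peak window at the 150 Hz sampling rate, and a Friedman test with Kendall's W computed as χ²/(N(k−1)). The example arrays are placeholders, not study data, and the Dunn-Bonferroni post hoc step (available, for instance, in the scikit-posthocs package) is omitted.

```python
import numpy as np
from scipy.stats import friedmanchisquare

FS = 150  # Hz, the DELSYS sampling rate reported above

def movement_onset(accel: np.ndarray, baseline_samples: int = 150) -> int:
    """First index where acceleration exceeds baseline mean + 3 SD."""
    base = accel[:baseline_samples]  # standing-still segment
    above = np.nonzero(accel > base.mean() + 3 * base.std())[0]
    return int(above[0]) if above.size else -1

def peak_metrics(accel: np.ndarray, onset: int, window_s: float = 1.5):
    """Peak acceleration and time to peak inside a 1.5 s post-onset window."""
    window = accel[onset:onset + int(window_s * FS)]
    peak_idx = int(np.argmax(window))
    return float(window[peak_idx]), peak_idx / FS  # (peak value, seconds)

# Friedman test across the three conditions; `scores` is a placeholder
# (n_participants x 3) array standing in for baseline, stress and VR values.
scores = np.random.default_rng(0).normal(size=(13, 3))
chi2, p = friedmanchisquare(scores[:, 0], scores[:, 1], scores[:, 2])
kendalls_w = chi2 / (scores.shape[0] * (scores.shape[1] - 1))  # W = chi2 / (N(k-1))
print(f"chi2(2) = {chi2:.2f}, p = {p:.3f}, Kendall's W = {kendalls_w:.2f}")
```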
Results Three Friedman tests were carried out to compare the cognitive anxiety, somatic anxiety, and confidence for the three conditions (see Figure 1). A significant difference between the conditions on cognitive anxiety was found, χ2(2) = 18.63, p < 0.001. Dunn-Bonferroni post hoc tests were carried out, and the results revealed that VR was significantly different from baseline (p = 0.002) and stress (p < 0.001). Kendall's W was 0.72, which indicates a large effect size. Comparison of the mean scores for each of the significant dependent variables suggested that VR (M = 2.15, SD = 1.14) had lower scores compared to the means of baseline (M = 4.77, SD = 1.36) and stress (M = 5.62, SD = 2.14). On the subscale of somatic anxiety, a significant difference was reported between the conditions, χ2(2) = 21.50, p < 0.001. Dunn-Bonferroni post hoc tests were carried out, and the results revealed that VR was significantly different from baseline (p = 0.001) and stress (p < 0.001). Kendall's W was 0.83, which indicates a large effect size. Comparison of the mean scores suggested that VR (M = 2.23, SD = 1.17) had lower scores compared to the means of baseline (M = 5.46, SD = 1.39) and stress (M = 5.08, SD = 1.26). On the subscale of confidence, a significant difference was reported between the conditions, χ2(2) = 18.43, p < 0.001. Dunn-Bonferroni post hoc tests were carried out, and the results revealed that VR was significantly different from baseline (p = 0.002) and stress (p = 0.001). Kendall's W was 0.71, which indicates a large effect size. Comparison of the mean scores suggests that VR (M = 3.38, SD = 1.12) had lower scores (higher self-confidence) compared to the means of baseline (M = 5.54, SD = 0.78) and stress (M = 5.31, SD = 1.18). A Friedman test was carried out to compare RSME for the three conditions (see Figure 2). A significant difference between the conditions was found, χ2(2) = 10.16, p = 0.006. Dunn-Bonferroni post hoc tests were carried out and revealed that stress was significantly different from baseline (p = 0.007). Kendall's W was 0.39, which indicates a moderate effect size. Comparison of the mean scores suggests that stress (M = 76.23) yielded higher perceived effort than baseline. A Friedman test was carried out to compare penalty kick scores for the three conditions (see Figure 3). There was a nonsignificant difference between the conditions, χ2(2) = 3.11, p = 0.21. Kendall's W was 0.12, which indicates a small effect size. Total penalty kick scores did not differ significantly across the penalty block conditions. A Friedman test was carried out to compare HR for the three conditions (see Figure 4). A significant difference between the conditions was found, χ2(2) = 7.96, p = 0.02. Dunn-Bonferroni post hoc tests were carried out and showed that VR was significantly different from baseline (p = 0.02). Kendall's W was 0.31, which indicates a moderate effect size.
A Friedman test was carried out to compare the acceleration amplitudes of the lumbar and thigh sensors for the three conditions (see Figure 5). Due to a data collection error, only 11 participants were analyzed. On the lumbar sensor, there was a nonsignificant difference between the conditions, χ2(2) = 5.64, p = 0.06. Kendall's W was 0.26, which indicates a small effect size. On the thigh sensor, there was a nonsignificant difference between the conditions, χ2(2) = 1.27, p = 0.53. Kendall's W was 0.06, which indicates a small effect size. Commitment Check All participants actively engaged with the VR relaxation scene. All 13 participants felt that the VR helped them relax, and 11 participants (84.62%) felt that it helped them reduce anxiety. In terms of using VR again, 11 participants (84.62%) reported that they would use VR relaxation again, and the remaining two (15.38%) participants were indifferent. Regarding how participants felt about using VR before a competition, eight participants (61.54%) felt that the VR relaxation would help them perform better before a competition. The other five participants (38.46%) either selected "no" or "indifferent". Discussion The present study examined the effects of a VR relaxation intervention on anxiety and performance of penalty kicks in female soccer players. All positions were included in the present study, providing a better-balanced examination of the effectiveness of VR relaxation in female soccer players. All players on a team have the potential to take a penalty kick, and the concentration of positions in the group taking penalty kicks varies from team to team. Even goalkeepers can take a penalty kick against the opposing goalkeeper, so it was important to encompass all positions. The results indicated that the VR relaxation intervention reduced cognitive and somatic anxiety while increasing confidence compared to the stress condition. All participants felt that the VR helped relax them prior to their last five penalty kicks. The VR intervention brought participants closer to their baseline levels by lowering their perceived effort while also reducing their HR. However, despite these relaxing effects, the participants' penalty kick scores in the VR condition were lower, although not significantly, compared to those in the stress condition, while movement strategies were not influenced by either the stress or VR conditions.
A key research question for this study was whether a VR relaxation technique would significantly reduce perceived anxiety levels compared to the stress and baseline conditions. After the VR relaxation intervention, perceived anxiety levels were reduced significantly, while self-confidence significantly increased, showing that the VR intervention had a positive effect. These results support research from Liu and Matsumura [16], who surveyed Division-I student athletes about the relaxing effects of a VR intervention and found that the athletes reported the VR to be relaxing and beneficial to them. This relaxation effect also significantly lowered HRs in the VR condition compared to those in baseline, therefore providing a physiological indicator of the significant relaxation effect of the VR intervention. The other main research question examined whether the VR relaxation technique would help improve penalty kick performance. Despite the improvements in the participants' perceived levels of anxiety, their performance declined (nonsignificantly) in the VR condition compared to that in the stress condition. These findings contradict other studies examining induced stress on performance, in which performances suffered when stress was induced on participants [9][10][11][25,26]. Wilson et al. [11] found that inducing anxiety in soccer players significantly reduced shooting accuracy. Because of the results of these previous studies, we expected to have similar findings in this study. It may be possible that the nature of the task played a role in performances. For example, Wilson et al. [11] did not use a game-regulation size goal and used a much smaller version with a goalkeeper, increasing the difficulty and reducing the chances of success. Other research has used more precise, skill-based activities (putting and table tennis) where there is less margin for error as well, while this study used an activity that has a higher rate of success regardless of anxiety. Thus, anxiety may have a larger effect on more precise sports/activities, and future research should investigate different tasks. Additionally, movement strategies were examined through kicking kinematics to determine the effect of the VR relaxation technique. The findings showed that peak acceleration amplitudes of the thigh and lumbar were similar across all conditions. It was predicted that with the increased anxiety, participants would kick faster to attempt to ensure the shot would be made, and that their kicks in the VR block would return to baseline, but the results indicated no change in movement strategies. It may be the case that the stressor was not strong enough to alter the participants' movement. In this study, the induced stress was simulated and was not the same as actual competition stress. It is also possible that the participants had practiced their kick enough under stressful conditions that their movement patterns would not be impacted. With an average of approximately 13 years of experience, the participants had likely optimized their kick to what is most functional for them, and despite being induced with stress, this learned movement did not change. Combined with the lower level of induced stress, this meant the participants were able to maintain their kicking patterns in the present study.
With the reduced performance in the VR condition and improved performance in the stress condition, it is possible that the stress inducer aroused the participants to their optimum arousal level, and the VR intervention under-aroused them. According to the individual zone of optimal functioning (IZOF) theory, there is a necessary level of arousal that varies from individual to individual [27]. With respect to this theory, the stress induction in the present study may have aroused participants to the "optimal" level for performance, thus helping them execute a more precise kick. Once the participants experienced the VR intervention, the VR may have reduced their arousal levels, making their kicking technique less precise and controlled and thus worsening their performance. This possibility is apparent in the HR decreasing throughout the conditions. Initially, in the first condition, there was a goal to ease any nervousness the participants had while completing the research study. Performing in front of the research team, who were strangers, together with the novelty of being a research participant, may have introduced initial anxiety that could not be controlled for, resulting in the elevated HR and participants being over-aroused. For the VR intervention, the intervention significantly decreased the participants' HRs from baseline due to its relaxation effect, and the participants became under-aroused, causing HR to be at its lowest. Thus, the stress condition resulted in optimal arousal, with a HR higher than that of the VR condition but lower than that of the baseline condition, coinciding with the better penalty kick scores. Additionally, the catastrophe theory may further explain the effects on performance in regard to cognitive and somatic anxiety [28]. Accordingly, cognitive anxiety directly influences performance while somatic anxiety has a smaller effect, but if both are too high, the "catastrophe" occurs and performance plummets [29,30]. In the current study, even though cognitive anxiety was higher, participants may have risen to the peak point right before the catastrophe effect occurs, resulting in their better scores in the stress condition. Even though the stress inducer was effective in increasing anxiety in the participants, it may not have induced enough anxiety to reach the catastrophe point. When measuring the participants' perceived mental effort between conditions, it was hypothesized that the VR relaxation intervention would significantly reduce the level of perceived effort exerted in the VR condition compared to that in the stress and baseline conditions. The results indicate that the stress inducer significantly increased perceived exerted effort and that, although not significantly, the VR intervention brought the participants closer to their baseline levels. The higher level of effort of participants in the stress condition could have helped maintain their performance, as explained by the processing efficiency theory. With increased worry and anxiety, individuals can exert more effort to counterbalance the aversive effects of worry to help maintain performance [31]. Participants exerted more effort in the stress condition to counteract the reduced attentional capacity from increased anxiety and maintained their performance.
In the VR condition, the participants exerted effort similar to their baseline and thus may not have put forth the effort necessary to maintain their performance and kick, as seen in the accelerometer data, to effectively manage the stressful situation. Based on the contradictory results between perceived anxiety and performance, VR relaxation may be inappropriate to use prior to competition due to the desired level of arousal for optimal performance [32]. VR relaxation may be more appropriate following competition or throughout the competitive season to systematically modulate psychological stress and anxiety. Injury incidence rates are higher during periods of high academic or physical stress; therefore, systematically incorporating VR relaxation can serve as a buffer during these time periods [33]. This study does not come without limitations, and caution should be used when generalizing the results of this study to other realistic soccer situations. The participants were required to wear a face covering due to COVID-19, which is not worn in actual competition. Future research should revisit this study design when participants no longer have to wear a mask to eliminate its effects. This study had a small sample size that only examined female soccer players of varying skill levels. Females tend to experience stress and anxiety differently than males; thus, the benefits of VR relaxation may be different in males, and gender differences, along with different sports, should be examined [34]. This study was conducted during COVID-19 restrictions, so our participant pool was limited to individuals within the university's community who did not virtually attend classes, thus contributing to the small sample size. This study should be re-examined when COVID-19 restrictions and the pandemic have eased, to increase the sample size and better extrapolate the study's findings. Additionally, players at different skill levels may interpret anxiety differently due to the level of competition they compete at; thus, future research should examine differences in skill level and use a trained goalkeeper matched to the participants' skill level to provide consistency and proper defending. Finally, since the athletes varied in skill level and current playing status, this could have impacted the results. The participants may have been fatigued by the third condition block, which could have resulted in reduced performance. Therefore, the current results can only be generalized to the current setting and participants. Future research should address these limitations. Conclusions The results from the current study are insightful for practitioners when deciding when/if to utilize VR as a relaxation intervention. Caution should still be taken when applying the results of this study to one's practice. It is still important to consider the individual differences of athletes regarding their responses to stress and anxiety. VR may be a tool for those with severe pre-competition anxiety, and it may be a beneficial tool after competition when it is necessary to relax. Additionally, VR relaxation can be a viable preventative tool for upcoming periods of increased stress and anxiety. In conclusion, the purpose of this study was to examine the effects of VR relaxation on perceived anxiety and performance in soccer players taking a series of penalty kicks.
The results indicated that the VR intervention significantly reduced the participants' perceived cognitive and somatic anxiety levels, increased self-confidence, and reduced their HR. However, future research is still needed to understand the relationship between VR relaxation, kinematics, and performance. This study serves as an initial step in establishing a basic framework for future research to build from and evaluate VR relaxation interventions in female soccer players.
2021-12-16T18:02:51.253Z
2021-12-01T00:00:00.000
{ "year": 2021, "sha1": "e38bda02114f6099a35bd873e9300ee1e7ae2e0f", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2075-4663/9/12/167/pdf?version=1639379836", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f7e8f8f0ed3826ec7767f81cbd186f9256896f02", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [] }
248013750
pes2o/s2orc
v3-fos-license
Male and Female Family Social Addresses in The Minangkabau Tribes of Sumatera Barat, Indonesia Received 14/11/21 Revised 17/12/21 Accepted 08/01/22 The focus of this research is on male and female family social addresses in the Minangkabau Tribe of Sumatera Barat in Indonesia. The purpose of this study is to examine Minangkabau people's social addresses. The research approach employed in this study was descriptive qualitative. The results of interviews with Minangkabau people were analyzed in this study. As a consequence, Minangkabau people were found to employ twenty-five male and female addresses in their family interactions. These addressees are used to indicate degrees of social status, as well as to convey respect for the eldest family members and affection for the youngest. I. INTRODUCTION A society's identity is revealed through its language, which reveals the cultural basis of the society. Language adds to the enjoyment of human life. Due to language, there are numerous exchanges between humans, as well as between groups of cultures (Peoples & Bailey, 2014). One of the most fundamental aspects of human life is language, since it is one of the most basic ways for people to communicate with one another. There are many languages in this world, and each location has its own language that is spoken by a group of people who live in that area. Language contributes to the formation of a culture in a certain location. Because language informs the members of a group society how to engage with their group, it can help them create their culture (Hall, 2013). If one language has particular terms to describe things or draw distinctions, but another language does not, speakers of the first language will find it simpler to converse about those things and to see differences in their surroundings. These results suggest that the language one employs limits both what one can say and, somewhat more importantly, what one can think. Practically, language is always being produced. Language and gender are ultimately social constructs that derive their meaning from the human actions in which they appear (Labov, 2019). Males and females are socially distinct because society has established separate social standards for them and expects them to behave differently. Gender, therefore, refers not only to sexual distinctions but also to a set of socially defined roles and identities that people create as part of a socialization process involving power interactions (Holmes, 2019). Language and gender are intertwined. Addressing someone is a crucial social interaction in communication. The acknowledgment of social identification, social status, the role of the addressee, and the interrelations between the addresser and the addressee are all important social purposes of addressing (Bucholtz, 2021). It can build, maintain, and strengthen all types of interpersonal relationships. Kinship words, social titles (genetic titles, official titles, and vocational titles), names, and demonstrative pronouns are the four types of address terms (Ramasubramanian & Banjo, 2020). The focus of this paper is on kinship terms, genetic titles, and official titles. A range of social factors influences the use of address phrases, including the occasion, social standing or rank, sex, age, familial relationship, professional hierarchy, race, or transactional position.
Kinship words will play an essential part in family and society when familial relationships are exceedingly strong (Vogt, 2020). When racial or socioeconomic status is important in a society, address phrases that demonstrate respect and hierarchy will be chosen; nevertheless, address terms may not be as important in a culture that purports to be egalitarian. As a result, there is a strong link between address words and culture (Lubis & Asnawi, 2021). Because the Minangkabau tribe has distinct forms of address for calling someone elder, younger, or even of the same age, the Minangkabau addressees used in society were investigated based on culture. Not only do speakers need to know if their addressee is male or female, but they also need to know whether the address is courteous in conversation (Fanany & Fanany, 2018). In the Minangkabau region, one family is usually made up of numerous nuclear families. Extended families are families that include many nuclear families as well as additional relatives (Hugo, 2019). Nuclear families may be found in the Agam Regency, Banuharnpu District, Sungai Puar, and Nagari Sariak areas. One pusako home houses a huge family who lives in peace and independence. Every family, whether extended or nuclear, yearns for tight and personal contact with his or her own family and with both sides' relatives. This implies that it should last indefinitely at the time of marriage. As a social being, however, a person cannot escape their living environment, which includes conventions and binding standards. They are bound by traditions that have been passed down from generation to generation, such as greeting phrases. In Minangkabau society, whether the child/ego is male or female, married or unmarried, each has a different greeting method for the husbands, wives, or relatives of both sides (Dewi et al., 2019). An extended family is a kinship group made up of more than one nuclear family, and the group as a whole is a social unit that lives in the same dwelling. The habit of settling after marriage divides the extended family. The extended family is divided into several types (Li et al., 2004), as follows. (1) Extensive Utrolocal Family: the habit of settling down after marriage near the senior nuclear family of either the husband's or the wife's side, or with a son's or daughter's nuclear family. (2) Extensive Virilocal Family: after marriage, it is customary to settle in the area of the house of the husband's relatives, who comprise the senior nuclear family together with the nuclear families of sons. (3) Extensive Uxorilocal Family: the habit of settling after marriage with the wife's relatives, who comprise the senior nuclear family and the nuclear families of daughters, who dwell in the area of the house. In general, the Minangkabau community follows the uxorilocal structure, one of the three forms of extended families mentioned above. Even so, there are currently many varieties of virilocal and utrolocal extended families (Oey-Gardiner, 2021). II. METHODS The descriptive qualitative approach was employed in this investigation. The researcher is the most important instrument, although sound and video recorders were also used to capture interviews with Minangkabau people about the addressees. The data were gathered through interviews and observations in the Minangkabau setting in order to obtain reliable information. III. RESULT AND DISCUSSION The data were gathered through interviews with Minangkabau locals and colleagues.
The address terms are separated by gender into two categories: male and female. In Indonesia's Minangkabau tribe, there are twenty-five family address terms. These address terms are commonly used in family and societal communication (Firdaus, 2019). They are also utilized to convey affection and respect to the family's eldest and youngest members. Male and female Minangkabau tribe address terms are presented in the table below.

According to the findings of the study, the Minangkabau tribe has unique address terms in communication, particularly among family members. Each of the address terms is used for a distinct individual. When Minangkabau people want to start a conversation, they should know exactly who will receive the message, which means they should know which family address term is to be used to greet a specific individual (Sumardi & Qurrotaini, 2017). If they use the wrong family address term, they will be labeled as uncivilized or impolite. That is why Minangkabau people must be aware of family address terms, which are used to facilitate communication among family members (Salliyanti et al., 2021).

IV. CONCLUSION

This study demonstrates how Minangkabau people address members of their family. There are twenty-five distinct address terms in this tribe, ranked through the levels of family and community. To show civility in connection with family or society, the people have developed these address terms. This implies that Minangkabau culture teaches us how to respect one another: the young to the old, those of the same age, and the old to the young. Most human beings belong to a society, and within it cultures can persist across generations, as in the Minangkabau tribe. To maintain their cultural heritage, they employ distinct address terms. People will recognize the speaker and addressee as Minangkabau individuals wherever they hear these specific terms of address used. The Minangkabau people succeed because they maintain mutual respect, as well as their culture and courtesy in communication. As a result, the address system of this tribe is always something new and intriguing to talk about. They have address terms at every level of the family. In a family or culture, each male and female has their own address term, and its use depends on who is speaking and to whom it is directed.
2022-04-08T15:13:22.735Z
2021-01-02T00:00:00.000
{ "year": 2021, "sha1": "4ff47ccec66f5f79cda2d4ff9d97cdff6ed15965", "oa_license": "CCBY", "oa_url": "http://jurnal.umsu.ac.id/index.php/ETLiJ/article/download/9328/6625", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "3cc0ecd6f616fc3a9d6fb655f1516270d06c2cee", "s2fieldsofstudy": [ "Sociology" ], "extfieldsofstudy": [] }
46770925
pes2o/s2orc
v3-fos-license
Video Summarization using Keyframe Extraction and Video Skimming

Video is one of the most robust sources of information, and the consumption of online and offline videos has reached an unprecedented level in the last few years. A fundamental challenge of extracting information from videos is that a viewer has to go through the complete video to understand the context, as opposed to an image, where the viewer can extract information from a single frame. In this work, we attempt to employ different algorithmic methodologies, including local features and deep neural networks, along with multiple clustering methods, to find an effective way of summarizing a video by interesting keyframe extraction.

I. INTRODUCTION

Following the advances of efficient data storage and streaming technologies, videos have become arguably the primary source of information in today's social media-heavy culture and society. Video streaming sites like YouTube are quickly replacing the traditional news and media sharing methods, which themselves are forced to adopt the trend of posting videos instead of written articles to convey stories, news and information. This abundance of videos brings new challenges concerning an efficient way to extract the subject matter of the videos in question. It would be frustrating, inefficient, and downright impossible to watch all videos thoroughly and catalog them according to their categories and subject matter, which is extremely important when searching for a specific video. Currently, this categorization is dependent on the tags, metadata or titles provided by the video uploaders. But these are highly personalized and unreliable in practice, and hence a better way is required to create a summarized representation of the video that is easily comprehensible in a short amount of time. This is an open research problem in a multitude of fields including information retrieval, networking and, of course, computer vision.

Video summarization is the process of compacting a video down to only the important components in the video. The process is shown in Fig 1. This compact representation can be useful when browsing a large number of videos and retrieving the desired ones efficiently. The summarized video must have the following properties: firstly, it must contain the high-priority entities and events from the video, and secondly, the summary should be free of repetition and redundancy. It is essential for the summarized video to capture all the important components, so that it represents the complete story of the video. Failure to include these components might lead to misinterpretation of the video from its summarized version. Also, redundant and unimportant components should be removed to make the summary compact and effective in representing the content properly.

Various approaches have been taken to solve this problem by different researchers. Some of the most prominent approaches include keyframe extraction using visual features [1], [2] and video skimming [3], [4]. In this project, we explore keyframe extraction. We also propose a clustering-based method to build the summarized videos. We use the SumMe dataset for our experimentation and results. Our contributions include suggesting a new unsupervised method of video summarization. We have experimented with a method which includes extracting frame-based features using ResNet16 trained on ImageNet, and then clustering them with different algorithms.
Later, we choose the keyframes as the points which are closest to the center of each cluster. The rest of this paper is organized as follows. The related research is presented in section II, followed by our approach in section III. We present our experimental results in section IV. The paper is concluded with discussions and future goals in section V.

II. RELATED RESEARCH

The most difficult challenge of video summarization is determining and separating the important content from the unimportant content. The important content can be classified based on low-level features like texture [5], shape [6] or motion [7]. The frames containing this important information are bundled together to create the summary. This manner of finding key information from static frames is called keyframe extraction. These methods are used dominantly to extract a static summary of the video. Some of the most popular keyframe extraction methods include [8], [9]. These methods use low-level features and dissimilarity detection with clustering methods to extract static keyframes from a video. The clustering methods are used to select the frames that are worthwhile to be in the summary, while uninteresting frames rich in low-level features are discarded. Different clustering methods have been used by researchers to find interesting frames [8]. Some methods use web-based image priors to extract the keyframes, for example, [10], [11].

While extracting static keyframes to compile a summary of the video is effective, the summary itself might not be pleasant for humans to watch and analyze, as it will be discontinuous, with abrupt cuts and frame skips. This can be solved by video skimming, which appears more continuous, with less abrupt frame changes and cuts. The process is more complex than simple keyframe extraction, however, because a continuous flow of semantic information [12] and relevance needs to be maintained for video skimming. Some of the video skimming approaches include [1], which utilizes the motion of the camera to extract important information and calculates the inter-frame dissimilarities from the low-level features to extract the interesting components from the video. A simple approach to video skimming is to augment the keyframe extraction process by including a continuous set of frames before and after the keyframe up to a certain threshold, and to include these collections of frames in the final summary of the video to create a video skim.

III. APPROACH

In this project we use both keyframe extraction and video skimming for video summarization. For static keyframe extraction, we extract low-level features using uniform sampling, image histograms, SIFT and image features from a Convolutional Neural Network (CNN) trained on ImageNet [cite ImageNet]. We also use different clustering methods, including K-means and Gaussian clustering. We use video skims around the selected keyframes to make the summary more fluid and comprehensible for humans. We take inspiration from the VSUMM method, which is a prominent method in video summarization [13].

A. Keyframe extraction

1) Uniform Sampling: Uniform sampling is one of the most common methods for keyframe extraction [cite uniform sampling]. The idea is to select every kth frame from the video, where the value of k is dictated by the length of the video.
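A minimal sketch of this selection rule is given below (assuming OpenCV for frame decoding; the function and parameter names are illustrative, not from the paper's implementation):

```python
# Minimal sketch of uniform sampling: keep every k-th frame of a video.
# Assumes OpenCV (cv2) is available; names are illustrative only.
import cv2

def uniform_sample_keyframes(video_path, k=7):
    """Return every k-th frame of the video as a list of BGR arrays."""
    cap = cv2.VideoCapture(video_path)
    keyframes, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:              # end of stream
            break
        if index % k == 0:      # keep every k-th frame
            keyframes.append(frame)
        index += 1
    cap.release()
    return keyframes
```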
A usual choice of length for a summarized video is 5% to 15% of the original video, which means every 20th frame in the case of 5%, or every 7th frame in the case of 15%, of the length of the summarized video is chosen. For our experiment, we have chosen to use every 7th frame to summarize the video. This is a very simple approach which does not maintain semantic relevance. Uniform sampling is often considered as a baseline for video summarization.

2) Image histogram: Image histograms represent the tonal distribution of an image. They give us the number of pixels for each specific brightness value, ranging from 0 to 255. Image histograms contain important information about images, and they can be utilized to extract keyframes. We extract the histogram from all frames. Based on the difference between the histograms of two frames, we decide whether the frames have significant dissimilarities between them. We infer that a significant inter-frame image histogram dissimilarity indicates a rapid change of scene in the video, which might contain interesting components. For our experiments, if the histograms of two consecutive frames are 50% or more dissimilar, we extract that frame as a keyframe.

3) Scale Invariant Feature Transform: Scale Invariant Feature Transform (SIFT) [cite SIFT] has been one of the most prominent local features used in computer vision, in applications ranging from object and gesture recognition to video tracking. We use SIFT features for keyframe extraction. SIFT descriptors are invariant to scaling, translation, rotation and small deformations, and partially invariant to illumination, making them robust descriptors to be used as local features. Important locations are first defined using a scale space of smoothed and resized images, and difference-of-Gaussian functions are applied on these images to find the maximum and minimum responses. Non-maxima suppression is performed and putative matches are discarded to ensure a collection of highly interesting and distinct keypoints. A histogram of oriented gradients is computed by dividing the image into patches to find the dominant orientation of the localized keypoints. These keypoints are extracted as local features. In our experiment, we have extracted HOGs for each frame in the video, and then set a threshold that selects 15% of the video.

4) VSUMM: This technique has been one of the fundamental techniques in video summarization in the unsupervised setup. The algorithm uses the standard K-means algorithm to cluster features extracted from each frame. Color histograms are proposed to be used in [13]. Color histograms are 3-D tensors, where each pixel's values in the RGB channels determine the bin it goes into. Since each channel value ranges over 0-255, usually 16 bins are taken for each channel, resulting in a 16x16x16 tensor. Due to computational reasons, a simplified version of this histogram was computed, where each channel was treated separately, resulting in feature vectors for each frame belonging to R^48. The next step suggested for clustering is slightly different, but the simplified color histograms give comparable performance to the true color histograms. The features extracted from VGG16 at the 2nd fully connected layer [14] were also tried, and clustered using K-means.
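The simplified per-channel histogram described above can be sketched as follows (a minimal illustration assuming 8-bit RGB frames as NumPy arrays; this is not the authors' code):

```python
# Minimal sketch of the simplified per-channel color histogram used by
# VSUMM-style methods: 16 bins per channel, concatenated into an R^48 vector.
import numpy as np

def color_histogram_feature(frame, bins=16):
    """frame: HxWx3 uint8 array. Returns a length 3*bins feature vector."""
    channels = []
    for c in range(3):                      # one histogram per color channel
        hist, _ = np.histogram(frame[:, :, c], bins=bins, range=(0, 256))
        channels.append(hist / max(hist.sum(), 1))  # normalize per channel
    return np.concatenate(channels)         # shape (48,) when bins=16
```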
5) ResNet16 on ImageNet: While reading about the approach of VSUMM, we decided to test a different approach. We chose ResNet16 trained on ImageNet, with a different range of filters, and chopped off the last loss layer, so as to obtain an embedding of each image (512 dimensions). We extracted frames out of the videos and forward passed them through ResNet16, and after obtaining the embeddings for each frame in the video, we clustered them using two algorithms: K-means and Gaussian mixture models. The number of clusters was taken as 15% of the number of video frames. We later chose the frames closest to the centers of the clusters as the keyframes. A sample CNN architecture for VSUMM and ResNet16 is presented in Fig 2.

B. Clustering

1) K-means clustering: K-means clustering is a very popular clustering method. Given a set of image frames extracted by one of the methods mentioned in section III-A, the goal is to partition these frames into different clusters, so that the within-cluster sum of squared differences is minimized. This is equivalent to minimizing the pairwise squared deviation of points in the same cluster. With this clustering we find the interesting frames to be included in the summarization and discard the ones that are rich in local features but contain less informative or interesting content. For our project, we have used K-means for clustering the features obtained from the ImageNet-trained ResNet16 method. We obtained a 512-dimensional vector for each frame in the video and clustered them. We set the number of clusters to be 15% of the video. After clustering, we chose the keyframes which were closest to the center of each specific cluster.

2) Gaussian Clustering (Mixture Model): Gaussian mixture models (GMM) are often used for data clustering. Usually, fitted GMMs cluster by assigning query data points to the multivariate normal components that maximize the component posterior probability given the data. That is, given a fitted GMM, a cluster assigns query data to the component yielding the highest posterior probability. This method of assigning a data point to exactly one cluster is called hard clustering. However, GMM clustering is more flexible, because it can be viewed as a fuzzy or soft clustering method. Soft clustering methods assign a score to a data point for each cluster. The value of the score indicates the association strength of the data point to the cluster. As opposed to hard clustering methods, soft clustering methods are flexible in that they can assign a data point to more than one cluster. In this project, we used clustering on the embeddings obtained from the ResNet16-trained network. We set the number of clusters to be 15% of the video, then chose the points which were closest to the center of each cluster.

C. Video Summarization

Our approach for video summarization is influenced by the VSUMM method [13]. Firstly, keyframes containing important information are extracted using one of the methods mentioned in section III-A. To reduce the computation time for video segmentation, a fraction of the frames was used. Considering that the sequence of frames is strongly correlated, the difference from one frame to the next is expected to be very low when sampled at high frequencies, such as 30 frames per second. Instead, using a low frequency rate of 5 frames per second had an insignificant effect on the results, but it increased the computation speed by a significant margin. We used 5 frames per second as the sampling rate for our experiments and discarded the redundant frames. After extracting all the keyframes, we perform a clustering on the frames to categorize them into interesting and uninteresting frames, using one of the methods mentioned in section III-B. The cluster with the interesting frames was used to generate the summary of the video.
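A compact sketch of this clustering and frame-selection step is given below (assuming an (n_frames, d) array of CNN embeddings and scikit-learn; function names and defaults are illustrative, not the paper's code):

```python
# Minimal sketch: cluster frame embeddings and keep, for each cluster,
# the frame whose embedding lies closest to the cluster center.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

def keyframes_by_clustering(embeddings, ratio=0.15, use_gmm=False):
    """embeddings: (n_frames, d) array. Returns sorted keyframe indices."""
    n_clusters = max(1, int(len(embeddings) * ratio))
    if use_gmm:
        model = GaussianMixture(n_components=n_clusters, random_state=0)
        model.fit(embeddings)
        centers = model.means_
    else:
        model = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
        model.fit(embeddings)
        centers = model.cluster_centers_
    # Distance of every frame to every center; pick the closest frame per center.
    dists = np.linalg.norm(embeddings[:, None, :] - centers[None, :, :], axis=2)
    return sorted(set(int(i) for i in dists.argmin(axis=0)))
```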
The summary of the video was chosen to have a length of approximately 15% of the original video. But this summary was discontinuous, and thus different from the way a human observer would produce a summary, leading to poor scores, as our evaluation method coincides with how a human being scores the summary. This problem was overcome by using 1.8-second skims around the extracted interesting frames. This makes the summary continuous and easy to comprehend. The low-frequency sampling of frames helps keep the size of the video in check.

A. Dataset

For our experimentation, we use the SumMe dataset [15], which was created to be used as a benchmark for video summarization. The dataset contains 25 videos, with lengths ranging from one to six minutes. Each of these videos is annotated by at least 15 humans, with a total of 390 human summaries. The annotations were collected by crowdsourcing. The length of all the human-generated summaries is restricted to be within 15% of the original video. Frames from two example videos, a) Air Force One and b) Play Ball, are presented in Fig 3.

B. Evaluation Method

The SumMe dataset provides individual scores for each annotated frame. We evaluate our method by measuring the F-score from the set of frames that have been selected by our method. We compare the F-score to the human-generated summaries to validate the effectiveness of our method. The F-score is a measure that combines precision and recall: it is the harmonic mean of precision and recall, the traditional F-measure or balanced F-score:

F = 2 * (precision * recall) / (precision + recall).

This measure is approximately the average of the two when they are close, and is more generally the harmonic mean, which, for the case of two numbers, coincides with the square of the geometric mean divided by the arithmetic mean. There are several reasons that the F-score can be criticized in particular circumstances due to its bias as an evaluation metric. This is also known as the F1 measure, because recall and precision are evenly weighted.

C. Results

We ran the mentioned methods on the SumMe dataset and compared the F-scores obtained by them (as shown in Table 1). Our main goal is to be as close to the human baseline as possible, which we were able to approach using SIFT, VSUMM, and CNN features. We also took the mean of the scores for all videos, and can see that CNN (Gaussian) performed well, followed by VSUMM. We observed that videos with a dynamic viewpoint performed well with VSUMM and CNN, whereas videos with a stable viewpoint performed very poorly, even compared to uniform sampling. This is where we can find the difference between a human's method of summarizing and an algorithmic method. We can also see that the SIFT and CNN methods have a positive correlation in terms of F-scores; this is due to the features obtained. Still, SIFT is not able to outperform CNN.

V. CONCLUSION

Video summarization is one of the hardest tasks because it depends on a person's perception. So, we can never have a perfect baseline to understand whether our algorithm is working or not. Sometimes humans just want 1-2 seconds of video as a summary, whereas a machine looks for the slightest difference in image intensity and might give us 10 seconds of video. From the baselines given in the SumMe dataset, we chose the average human baseline as ground truth, as we would like to consider all perspectives. After testing on many different kinds of videos, we can conclude that Gaussian clustering along with convolutional networks can give better performance than the other methods on videos with a moving camera viewpoint.
In fact, the SIFT algorithm also seems to perform well on videos with high motion. For the CNN-based method, the reason behind its performance is that we used deep layered features, which consist of the important points inside the image, followed by Gaussian clustering, which is specifically suited to mixture-based components. We have also observed that even uniform sampling gives better results for videos which have a stable camera viewpoint and very little motion. We can conclude that no single algorithm can be the solution for video summarization; it is dependent on the type of video and the motion inside the video.
2018-02-12T22:24:41.146Z
2019-10-10T00:00:00.000
{ "year": 2019, "sha1": "9dbbedbbac68dfefa691149ed32452fa03d507bc", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "8c9a3c10337a49733635da3f850137b524883963", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Engineering" ] }
118574213
pes2o/s2orc
v3-fos-license
Torsion-Adding and Asymptotic Winding Number for Periodic Window Sequences

In parameter space of nonlinear dynamical systems, windows of periodic states are aligned following routes of period-adding, configuring periodic window sequences. In state space of driven nonlinear oscillators, we determine the torsion associated with the periodic states and identify regions of uniform torsion in the window sequences. Moreover, we find that the measured torsion differs by a constant between successive windows in periodic window sequences. We call this phenomenon torsion-adding. Finally, combining the torsion- and the period-adding rules, we deduce a general rule to obtain the asymptotic winding number in the accumulation limit of such periodic window sequences.

A conspicuous characteristic in the parameter space of dissipative nonlinear dynamical systems is the appearance of periodic states for parameter sets immersed in parameter regions corresponding to chaotic states. In the literature, much attention has been devoted to establishing connections between these periodic states. For example, a successive constant increment of the period of oscillation of such states (the period-adding phenomenon) [1] has been experimentally and numerically observed in several real-world systems, such as neuronal activities [2,3], electronic circuits [4], bubble formation [5], semiconductor devices [6], and chemical reactions [7]. The period-adding phenomenon has also been observed for sequences of shrimp-shaped periodic windows accumulating in specific parameter space regions [8][9][10][11][12][13]. Since nonlinear dynamical systems can exhibit many different kinds of motion, knowing adding rules, such as the period-adding rule, and further information about the accumulating parameter regions is very advantageous, especially for predicting periodic states at different parameter sets in real-world applications.

Furthermore, besides the intrinsic period of oscillations, periodic states in dissipative systems have other interesting convergence properties. For instance, for driven nonlinear oscillators, the torsion number n is defined as the number of twists that the local flow performs around a given periodic solution during a dynamical period m, and the winding number is defined as w = n/m [14][15][16][17][18]. However, besides the existence of such convergence properties, additional rules connecting periodic states and the characteristics of the accumulating regions have not yet been discovered. Our aim here is to investigate the convergence characteristics, namely the torsion and winding numbers of periodic states within complex periodic windows, in period-adding sequences in the parameter space of driven nonlinear oscillators. A torsion-adding formulation between such periodic states is proposed here.
Combining both additive sequence properties, the torsion- and the period-adding, we describe a generic periodic window in a sequence in terms of its winding number. The asymptotic limit of such a description gives a general rule to determine the winding number for any accumulation of period-adding sequences.

Generally, the driven nonlinear oscillator is described by

ẍ = f(x, ẋ) + h(t),    (1)

where h(t) = h(t + T) is a periodic function with angular frequency ω = 2π/T. For this equation, the winding number is obtained by considering the revolutions performed by an orbit γ′ around a very close neighboring periodic orbit γ during the time interval ∆t = T (see Fig. 1). The absolute mean value of the revolution angular frequency is called the torsion frequency:

Ω(γ) = |⟨α̇(t)⟩|.    (2)

Thus, considering the T-shift map [19], the winding number is precisely defined as [14]

w = Ω/ω.    (3)

For a more appropriate form, note that the γ period is given by T_γ = mT = 2πm/ω, while the torsion frequency period is T_Ω = 2π/Ω (see Fig. 1). Including these periods in Eq. (3), and defining the torsion number as n = T_γ/T_Ω, we obtain the winding number

w = n/m.    (4)

Figure 1 here.

In Fig. 2, we show a sketch of the region with the lowest period inside a complex periodic window. We refer to it as the main-body window, frequently found in the two-dimensional parameter space of the dynamical system given by Eq. (1). This main-body window is divided into regions of uniform torsion (A, B, C, and D), due to the coexistence of the two periodic orbits. The curves λ_1 and λ_2 between the colored regions reveal the window skeleton [20,21]. The flow converges monotonically or non-monotonically to the periodic orbit according to the skeleton composition (λ_1, λ_2). Crossing a λ curve from one side to the other, the flow convergence suffers a transition. The direction of this transition characterizes λ_1 and λ_2 and causes a change in the winding number [22].

Figure 2 here.

Then, we consider the winding number concept for sequences of periodic windows. Until now these sequences have been described only by period-adding rules, i.e., the period of a periodic window in the sequence can be determined by adding a constant value ρ to the period of the previous window [23][24][25][26]. Now, we introduce the torsion-adding phenomenon in periodic window sequences. In other words, for each increment ρ in the window period, the torsion number of equivalent regions is also incremented by a constant value τ. Therefore, from Eq. (4), the winding number w_Ri in a generalized main-body window region R (R = A, B, C, or D) of the i-th window of a sequence can be determined as a function of the torsion number in region R of a known window, in particular of the first one, n_R1:

w_Ri = [n_R1 + (i − 1)τ] / [m_1 + (i − 1)ρ].    (5)

Thus, the asymptotic winding number limit of any region R,

w_∞ = lim_{i→∞} w_Ri = τ/ρ,    (6)

shows that the winding numbers of all regions converge to a constant which depends only on the torsion and period increments, τ and ρ, respectively. We denote this limit as w_∞.

Another consequence of the torsion-adding is that the skeletons of all windows in a sequence are equivalent. In fact, one can show that for the λ_1 curve, in successive windows i and i+1, the torsion-adding condition (n_Ri+1 = n_Ri + τ) implies that the torsion number difference between regions A and C is the same (n_Ai − n_Ci = n_Ai+1 − n_Ci+1). Thus, the λ_1 curve promotes the same transition in the whole sequence. The same holds for λ_2.
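As a worked check of Eqs. (5) and (6), the short derivation below uses the increments measured later in the text for the first sequence (τ = 12, ρ = 4); the starting values n_B1 and m_1 drop out in the limit:

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Worked check of the torsion-adding limit; tau = 12 and rho = 4 are the
% increments reported for the sequence of Fig. 4(a); n_{B1} and m_1 drop out.
\[
  w_{Bi} = \frac{n_{B1} + (i-1)\tau}{m_{1} + (i-1)\rho}
         = \frac{n_{B1}/i + (1 - 1/i)\,\tau}{m_{1}/i + (1 - 1/i)\,\rho}
  \;\xrightarrow{\;i\to\infty\;}\;
  \frac{\tau}{\rho} = \frac{12}{4} = 3 .
\]
\end{document}
```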
Now, to verify our results, we present numerical simulations for a specific driven nonlinear oscillator, namely the Morse oscillator, which describes a diatomic molecule immersed in an external electromagnetic field, modeled by a damped, periodically forced equation of Morse-potential type [27], where the parameter d is the amplitude of the system damping and ω is the angular frequency of the external forcing. Our analysis is carried out in the two-dimensional parameter space d × ω.

As we are interested in how a trajectory γ′ converges to the stable periodic orbit γ, we consider in our numerical simulations γ′ starting at a position very close to γ, in such a way that the linearized flow is enough to describe the γ′ dynamics. We represent a point in γ as (x*_1, x*_2, x*_3) and in γ′ as (y_1, y_2, y_3). Therefore, the γ′ evolution is given by the flow linearized around γ, Eqs. (8), where we consider y_3 = 0, since y_3 is an arbitrary constant in the linearized flow, i.e., ẏ_3 = 0. Finally, considering the solutions of Eqs. (8) in the polar coordinates y_1 = r cos(α) and y_2 = r sin(α), we determine the angular frequency

α̇ = (y_1 ẏ_2 − y_2 ẏ_1) / (y_1² + y_2²).    (9)

Thus, by Eqs. (2) and (3), we can determine numerically the winding number inside a period-m main-body window and then calculate the associated torsion number with Eq. (4), where m is given by m = T_γ/T.

We recall that the orbit γ is stable if the Floquet multipliers µ_i associated with the mT-shift map (or any multiple of mT) satisfy |µ_i| < 1. For µ_i ∈ R, if the µ_i are positive, the orbit γ′ converges to γ preserving orientation (monotonically), and if the µ_i are negative, γ′ converges to γ reversing orientation (non-monotonically) [22]. Since the multipliers satisfy |µ_i| < 1 from the saddle-node bifurcation (µ = 1) in region A to the period-doubling bifurcation (µ = −1) in regions C and D, the phase difference over the time interval ∆t = mT computed between regions A and C (or D) is π, i.e., |n_A − n_C(D)| = 1/2, as can be seen in Fig. 3.

For the Morse oscillator, we obtain two-dimensional parameter spaces [shown in Figs. 3 and 4] by computing the torsion and the winding numbers for each (d, ω) parameter pair on a two-dimensional mesh of 500 × 500 equally spaced points. We assign different colors to designate the winding number values of periodic attractors, while the white color represents the parameters corresponding to chaotic attractors. With this procedure we identify the uniform winding number regions A, B, C, and D indicated in Fig. 2. We also identify, inside the three periodic windows in Fig. 3, the skeleton separating areas with uniform torsion numbers. To be more precise, let us define the left and right sides of the curves λ_1 and λ_2 by traversing the curves from the bottom to the top (see Fig. 2). The torsion number is always increased or decreased by 1/2 when we cross the curve λ_1 (λ_2) from the right (left) to the left (right). The superscripts + and − in λ_1 and λ_2 indicate whether the torsion number increases (+) or decreases (−) according to the defined orientation. All three possible skeletons are shown in Fig. 3.

Figure 3 here.
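The numerical procedure described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the Morse-type restoring force, the unit drive amplitude, and all parameter values below are assumptions, while the angle-rate integration follows Eqs. (2), (3), and (9).

```python
# Minimal sketch of the torsion/winding computation described above.
# Assumptions (not from the paper): Morse-type force -V'(x) with
# V(x) = (1 - exp(-x))^2 / 2, drive amplitude 1.0, and the parameter
# values below; the procedure itself follows Eqs. (2), (3) and (9).
import numpy as np
from scipy.integrate import solve_ivp

d, omega, m = 0.1, 1.0, 1           # damping, drive frequency, assumed period m
T = 2 * np.pi / omega

def force(x):
    return -np.exp(-x) * (1.0 - np.exp(-x))   # assumed Morse-type restoring force

def fprime(x, h=1e-6):
    return (force(x + h) - force(x - h)) / (2 * h)

def orbit_rhs(t, s):                # the driven oscillator alone
    x, v = s
    return [v, force(x) - d * v + np.cos(omega * t)]

def full_rhs(t, s):                 # orbit + linearized flow + angle alpha
    x, v, y1, y2, alpha = s
    dy1, dy2 = y2, fprime(x) * y1 - d * y2              # linearized flow, Eqs. (8)
    dalpha = (y1 * dy2 - y2 * dy1) / (y1**2 + y2**2)    # angle rate, Eq. (9)
    return [v, force(x) - d * v + np.cos(omega * t), dy1, dy2, dalpha]

# Discard a transient so the trajectory settles onto the periodic orbit gamma.
xv = solve_ivp(orbit_rhs, (0, 200 * T), [0.1, 0.0], rtol=1e-9).y[:, -1]

# Integrate the linearized flow over m drive periods and average the angle rate.
sol = solve_ivp(full_rhs, (0, m * T), [xv[0], xv[1], 1.0, 0.0, 0.0], rtol=1e-9)
Omega = abs(sol.y[4, -1]) / (m * T)                     # torsion frequency, Eq. (2)
print("winding number w = Omega/omega =", Omega / omega)  # Eq. (3)
```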
Note that the main-body windows shown in Fig. 3 present three different winding number values. In Fig. 3(a), the torsion number decreases by 1/2 from A to C (λ_1 = λ_1^−, according to our convention) and increases by 1/2 from C to B (λ_2 = λ_2^+). Thus, the torsion number difference between regions A and B is zero. The same is verified for the route A → D → B, so n_A = n_B. Since the parameter sets in this window correspond to orbits with the same period m, according to Eq. (5) the winding numbers in regions A and B are the same. Similarly, we conclude that in Fig. 3(b) and (c), where n_C = n_D, the winding numbers of regions C and D are the same. Therefore, for the i-th complex window (see Fig. 4), the winding number is given by

w_Ri = [n_A1 + (i − 1)τ + (k − l)/2] / [m_1 + (i − 1)ρ],    (10)

where k, l ∈ {0, 1, 2} are, respectively, the numbers of times that curves of type λ^+ and λ^− are crossed going from region A to any region R, crossing λ_1 and λ_2 just one time each. Equation (10) describes the internal regions in these periodic window sequences and obeys the convergence limit established in Eq. (6). Moreover, Eq. (10) states that if one periodic window presents w_B = τ/ρ, then the winding number of region B in any periodic window in the sequence is w_Bi = τ/ρ.

To illustrate the validity of our results, we display in Fig. 4(a) sequences of periodic windows where all windows have their internal regions separated by λ_1^− and λ_2^+. For this sequence, according to Eq. (10) with k = 1 and l = 1, the periodic windows present different winding number values in their central region B. Additionally, we show in Fig. 4(b) the winding number calculated along a line passing through region B of all windows that compose the sequence. It is clear that the winding numbers converge to w_∞ = 3.0. The torsion number and the period increments can also be determined in Fig. 4(a), giving τ = n_Bi+1 − n_Bi = 12 and ρ = m_i+1 − m_i = 4, respectively. Thus, the winding number convergence is in agreement with Eq. (6).

Figure 4 here.

In Fig. 4(c), we show another sequence of periodic windows, internally separated by λ_1^+ and λ_2^+. For this sequence of windows, we measure τ = 5, ρ = 3, and the winding number w_B = 5/3. Thus, as predicted by Eq. (10) with k = 2 and l = 0, we measure w_Bi = 5/3. From Fig. 4(d) we also verify the limit w_∞ = τ/ρ = 5/3. Figure 4 shows that the regions where the sequences accumulate have the same corresponding winding number w_∞.

In conclusion, we report the existence of the torsion-adding phenomenon in periodic window sequences of driven nonlinear oscillators. Additionally, we formulate a general rule [Eq. (10)] to obtain the winding number of any window belonging to the sequence. From this general rule we obtain the winding number asymptotic limit for any sequence (w_∞ = τ/ρ); this ratio seems to be a universal property of dynamical systems, since it requires only the existence of the period- and torsion-adding phenomena. Moreover, since there is no general theory ensuring that period-adding sequences are composed of an infinite number of windows, this limit provides theoretical evidence of the existence of infinite windows in the sequences. Furthermore, we present numerical simulations for the Morse oscillator where the reported torsion-adding phenomenon and winding number asymptotic limit are verified. We also performed numerical analyses for other nonlinear oscillators described by Eq. (1), and verified that all results are in complete agreement with our theory.
2012-10-22T13:04:23.000Z
2012-10-22T00:00:00.000
{ "year": 2013, "sha1": "9b1dae8a04edc3feb84a2e2c931f3c98a194c10f", "oa_license": "implied-oa", "oa_url": "https://doi.org/10.1016/j.physleta.2013.01.004", "oa_status": "HYBRID", "pdf_src": "Arxiv", "pdf_hash": "9b1dae8a04edc3feb84a2e2c931f3c98a194c10f", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Physics" ] }
250199943
pes2o/s2orc
v3-fos-license
Impact of a Multimodal Simulation-based Curriculum on Endobronchial Ultrasound Skills

Background: Currently there is no consensus on the ideal teaching method to train novice trainees in EBUS. Simulation-based procedure training allows direct observation of trainees in a controlled environment without compromising patient safety.

Objective: We wanted to develop a comprehensive assessment of endobronchial ultrasound (EBUS) performance of pulmonary fellows and assess the impact of a multimodal simulation-based curriculum for EBUS-guided transbronchial needle aspiration.

Methods: Pretest assessment of 11 novice pulmonary fellows was performed using a three-part assessment tool, measuring EBUS-related knowledge, self-confidence, and procedural skills. Knowledge was assessed by 20 multiple-choice questions. Self-confidence was measured using the previously validated EBUS-Subjective Assessment Tool. Procedural skills assessment was performed on the Simbionix BRONCH Express simulator and was modeled on a previously validated EBUS-Skills and Task Assessment Tool (EBUS-STAT), to create a modified EBUS-STAT based on internal faculty input via the Delphi method. After baseline testing, fellows participated in a structured multimodal curriculum, which included simulator training, small-group didactics, and interactive problem-based learning sessions, followed by individual debriefing sessions. Posttest assessment using the same three-part assessment tool was performed after 3 months, and the results were compared to study the impact of the new curriculum.

Results: The mean knowledge score improved significantly from baseline to posttest (52.7% vs. 67.7%; P = 0.002). The mean EBUS-Subjective Assessment Tool confidence scores (maximum score, 50) improved significantly from baseline to posttest (26 ± 7.6 vs. 35.2 ± 6.3 points; P < 0.001). The mean modified EBUS-STAT (maximum score, 105) improved significantly from baseline to posttest (44.8 ± 10.6 [42.7%] vs. 65.3 ± 11.4 [62.2%]; P < 0.001). There was a positive correlation (r = 0.81) between the experience of the test participants and the modified EBUS-STAT scores.

Conclusion: This study suggests a multimodal simulation-based curriculum can significantly improve EBUS-guided transbronchial needle aspiration-related knowledge, self-confidence, and procedural skills among novice pulmonary fellows. A validation study is needed to determine if skills attained via a simulator can be replicated in a clinical setting.

Endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA) is recommended as the first-line procedure for the diagnosis and mediastinal staging of lung cancer by multiple medical societies. However, debate still exists on methods to effectively train and measure EBUS-TBNA performance in trainees (1-4, 6). Compared with the short and steep learning curve for conventional TBNA (7-9), the learning curve for EBUS-TBNA is longer and more complex, resulting in the need for extensive training and experience (4-6, 10). Currently, EBUS-TBNA proficiency is either judged by procedure volume or determined by direct observation, both of which vary widely between institutions worldwide (11). Volume-based certification may be arbitrary, because individuals learn at different speeds, and it is problematic for smaller institutions that serve an insufficient number of patients to meet the threshold for each trainee (12).
In observation-based assessment, the lack of a structured protocol introduces the potential bias of supervisors, because it relies on the individual supervisor's level of expertise and experience. A 2015 CHEST consensus statement suggested changing to a system that assesses skill acquisition and knowledge by incorporating tools like simulation (13-15). Despite this, a national survey of U.S. pulmonary fellowship program directors revealed only 30% of programs used a structured assessment strategy to evaluate EBUS competency (16).

Currently, there is no consensus on the ideal teaching method for EBUS training in novice learners. Most pulmonary fellowships currently use the traditional apprenticeship model for EBUS-TBNA training, supplemented with some secondary teaching methods (i.e., literature review, videos, and didactics). However, this training model has significant disadvantages in detecting and addressing gaps in trainees' knowledge and skill, due to overreliance on trainees to recognize their deficiencies and on the supervisor's ability to identify those gaps and teach accordingly. During EBUS-TBNA, the patient's safety takes precedence, resulting in the supervisor often taking over and interrupting the trainee's experience or assessment. Trainee involvement during EBUS can also increase procedural duration, anesthesia dose, and complication rates (17,18).

Simulation-based training can reduce the learning curve that novice operators need to conduct an independent, successful EBUS-TBNA (19). EBUS simulators can also accurately discriminate between operators at different skill and experience levels, and simulation is one of several methods recommended to assess trainees and help achieve proficiency, complementing traditional apprenticeship training models (20,21). Based on this, we wanted to 1) develop a comprehensive assessment of the EBUS performance of pulmonary fellows in training at our institution; 2) implement a new multimodal simulation-based EBUS training curriculum; and 3) test the effectiveness of this training curriculum.

Curriculum Development

For this project, we formed an internal EBUS Expert Committee (EEC) with four teaching faculty physicians in the Division of Pulmonary and Critical Care at our tertiary-level hospital and training program. Members of the EEC were considered at the expert level, because each had independently performed more than 200 EBUS-TBNA procedures (20). One senior pulmonary fellow designated in the Clinician-Educator track developed this study with the EEC and two simulation-center medical directors. The Clinician-Educator track fellow had received training and experience in other simulation-based assessments and debriefing before this study. Knowledge gaps among our trainees were identified through a needs-assessment survey of the teaching faculty. Based on this survey, we identified deficiencies such as anatomical identification of segmental bronchi and lymph node stations and best practices for procedural technique. We then targeted these deficiencies via a newly created EBUS curriculum to improve EBUS-TBNA-related knowledge, self-confidence, and procedural skills. This study was declared exempt by the institutional review board.

Learning Outcomes and Measures

Three learning outcomes were measured to evaluate the efficacy of the EBUS curriculum: knowledge, self-confidence, and procedural skills. Multiple-choice questions (MCQs) were used for EBUS-related knowledge assessment.
Twenty-five MCQs were selected from an online pool of MCQs previously validated by expert bronchoscopists (The Essential EBUS Bronchoscopist, accessible at http://www.bronchoscopy.org) to address topics per the faculty's needs-assessment survey (lymph node station anatomy, lung cancer staging, and EBUS procedure technique). To refine the MCQ tool, it was tested on a focus group of the three EEC faculty mentors and three graduating pulmonary fellows (postgraduate year [PGY] VI), who were trained in the traditional apprenticeship method. Each of the three graduating pulmonary fellows had performed at least 35 EBUS-TBNA procedures at the time of testing. The questions were rated by difficulty based on the focus group's responses. The difficulty index r of a test item is the proportion of a group of test-takers who responded incorrectly; for example, r = 90% indicates a very difficult question, whereas r = 10% indicates a very easy one. Questions that had either r > 75% among the experts or r < 25% among the graduating PGY-VI class were eliminated. The resulting 20 MCQs were used for the knowledge assessment of the learners in this study.

We included subjective self-assessment as a measurable outcome because the EBUS-Subjective Assessment Tool (EBUS-SAT) (see online supplement) has been previously validated as a tool to measure the change in EBUS skills (22). It also allows trainees to provide feedback on the curriculum. In the EBUS-SAT, trainees rate their ability to perform 10 different EBUS-related tasks using a 5-point Likert confidence scale.

Our objective assessment was modeled on the EBUS Skills and Tasks Assessment Tool (EBUS-STAT), a validated 10-item assessment tool developed in 2012 (23). Permission was obtained from one of the tool's creators (Dr. Henri Colt) before modifying it for our study. Two validated objective assessment tools exist to evaluate EBUS-TBNA: the EBUS Assessment Tool and the EBUS-STAT (22,23). We chose the latter as a template because it contained more relevant details of TBNA assessment for a novice trainee, was designed as a potential screening tool to assess fundamental skills, and was flexible enough to be modified per local requirements (24). Certain items in the EBUS-STAT fail to discriminate between novices and experts (24). Therefore, we developed a modified version (mEBUS-STAT) that replaced some items to include tasks measurable by the BRONCH Express simulator (Figure 1) and elements reflective of our institutional practice, while adhering to evidence-based practices of EBUS-TBNA (25). Using the Delphi survey method, the EEC reached a consensus after three survey cycles on the final modifications of the objective assessment tool. The differences between the original EBUS-STAT and the mEBUS-STAT we used are summarized in Table 1.

The mEBUS-STAT has two parts: knowledge (image recognition and decision-making tasks) and technical skills assessment (lymph node identification, mediastinal vascular structure anatomy, and TBNA technique). The simulation case scenario for TBNA technique assessment was the same for all participants and met the objectives of "simple, obvious" mediastinal adenopathy (see online supplement). The TBNA assessment was limited to one case, because it accomplished our goals within a 1-hour duration. To maintain consistency of the assessment method, the Clinician-Educator fellow guided the learners through the session with a standard script (see online supplement) but otherwise did not interfere. Any verbal or manual intervention was tracked.
Feedback was only given during targeted debriefing sessions, to validate the trainee's accomplishments while highlighting opportunities for improvement. Trainees also completed a questionnaire to collect demographic information such as year of training and EBUS-TBNA experience. The trainee assessment was conducted in a controlled environment in the simulation lab. All simulator case modules were developed by 3D Systems and preloaded into the bronchoscopy simulator. Before implementation, the EEC approved the grading for the final three-part assessment (MCQs, EBUS-SAT, and mEBUS-STAT). Before this project, none of the trainees had been formally assessed, and all had been trained under the apprenticeship model. Approximately 6 months were necessary to perform the needs assessment, develop the assessment tools, and finalize the educational interventions for this curriculum.

Participants and Procedures

Eleven pulmonary fellows (six PGY-V and five PGY-VI) were the target of this study. After their baseline assessment, each participant attended a didactic lecture, a problem-based learning (PBL) session, and an individual debriefing session over 4 weeks (Table 2). The one-on-one debriefing practice session targeted the deficiencies in their individual EBUS-TBNA technical skills on the simulator. Didactics addressed evidence-based practices related to EBUS, including indications, lung cancer staging, lymph node station anatomy, TBNA performance, and slide preparation (26). A flipped classroom model was implemented for the PBL session, because it is better for learner retention than a passive learning model (27-30). PBL sessions help the learners remain engaged during the session and take ownership of their self-learning, with faculty acting as facilitators to challenge the trainees' thinking without dictating it (31,32). The cases for the PBL session were chosen after consulting with the EEC and based on the baseline MCQ testing of the learners, and included the following topics: lung cancer staging, the role of mediastinoscopy, assessing TBNA results, and positron emission tomography-negative adenopathy (see online supplement). During the PBL session, fellows performed a literature review and discussed their answers in a brief PowerPoint presentation as a small group, with faculty moderating the sessions. After 3 months, all 11 fellows were retested using the three-part EBUS assessment. All fellows were allowed access to the BRONCH Express simulator to practice, in addition to performing supervised EBUS-TBNA procedures on actual patients, in the interval between baseline and posttest.

Equipment

The simulator used in our study was the BRONCH Express (3D Systems), which consisted of a proxy bronchoscope, a proxy EBUS biopsy needle tool, an interface that tracks equipment movements, and a monitor displaying the computer-generated endoscopic and ultrasound images (Figure 1).

Statistics

Results are reported using mean ± SD. Paired two-tailed t test analyses were conducted comparing the trainees' posttest scores with their baseline scores. For statistical analysis, SPSS was used (version 20; IBM Corp). The significance level was defined as P < 0.05.

RESULTS

We conducted a single-cohort pretest-posttest study in July 2019 and included PGY-V (n = 6) and PGY-VI (n = 5) pulmonary fellows from a single-center tertiary care training program.
At baseline pretest assessment, basic demographic data showed that 9 of the 11 participating fellows had assisted in fewer than five EBUS-TBNA procedures under the traditional apprenticeship model. The remaining two fellows had been involved in fewer than 10 EBUS-TBNA procedures.

Knowledge Assessment

The mean MCQ score improved significantly from 52.7% at baseline to 67.7% at posttest (10.5 ± 1.4 vs. 13.5 ± 1.6; P = 0.002). To assess the effectiveness of the educational intervention for the group as a whole, the total class-averaged gain (g) was calculated, i.e., the observed gain as a fraction of the maximum possible gain: g = (67.7 − 52.7)/(100 − 52.7) = 31.7%. A minimum increase of 30% is considered significant for rating the intervention as effective (33).

Objective Skills Assessment

The mean mEBUS-STAT score improved from 42.7% at baseline to 62.2% at posttest (44.8 ± 10.6 vs. 65.3 ± 11.4; maximum score, 105; P < 0.001). The learners improved significantly in the bronchoscopic technical skills portion, from 38.4% to 62.9% (30.7 ± 9.6 vs. 50.4 ± 10.9; maximum score, 80; P < 0.001). Construct validity of the mEBUS-STAT tool assessment was supported by a positive Spearman correlation between the experience of the operators taking the test (number of EBUS procedures performed or assisted) and the mEBUS-STAT scores when tested with expert members of the EEC and trainees (r = 0.81). The EEC members had a mean mEBUS-STAT of 76.5% (81.3 ± 3.2). Using the contrasting-groups method, a pass score of 75 was established (34). At the baseline objective assessment, 0% of learners passed, whereas at the posttest, 27% passed.

[Table 2. Endobronchial ultrasound-guided transbronchial needle aspiration assessment and multimodal simulation-based curriculum.]

DISCUSSION

Pulmonary fellowship programs commonly determine bronchoscopic competency either by an arbitrary number of supervised procedures or by subjective evaluations, despite evidence that it should be evaluated in a more formalized, objective fashion (35). We modeled our EBUS curriculum on the three-step approach to EBUS-TBNA training proposed by the European Respiratory Society: learning the necessary anatomy, simulation-based training, and supervised performance (36,37). Simulation training alone has been shown to improve technical skills in EBUS-TBNA (24,38). The multimodal simulation-based curriculum we used rapidly improved EBUS-TBNA technical skills and knowledge among novice pulmonary fellows. It is notable that the posttest mEBUS-STAT scores of the novice learners (n = 11) after the 3-month curriculum were similar to those of the graduating class (PGY-VI, n = 3) who underwent 3 years of traditional apprenticeship training (62.2% vs. 64.8%; P = 0.69), despite the novice learners' involvement in far fewer EBUS-TBNA procedures by posttest (<15 among novice trainees vs. >30 procedures among graduates). This suggests that the addition of standardized simulation-based training may shorten the learning curve for EBUS-TBNA compared with the traditional apprenticeship training model (22,39). Despite the improvement in technical skills, only 27% of learners achieved a passing score on the mEBUS-STAT posttest. It is unclear at this time whether the low passing rate is from a lack of sufficient self-regulated practice and/or training on the simulator (trainees spent an average of 30 min in self-regulated training on the EBUS simulator), low procedural volume (<15 by posttest), or suboptimal teaching methods in our training curriculum.
Strengths and Limitations

Because multiple teaching approaches were used, it is difficult to discern which modality proved the most effective: the small-group didactics, the flipped classroom and PBL, or the hands-on EBUS simulation training. Currently, there is a lack of consensus on the best method for EBUS training (20-22, 40, 41). The flexibility to individualize teaching to learners' specific skill or knowledge gaps, through one-on-one interaction during the debriefing session and hands-on simulation training, was a strength of our curriculum.

Several limitations may influence the interpretation of the findings in this study. First, this is an observational study with a small sample size at a single institution, a common problem among medical education research. Other studies using bronchoscopy simulators had between 6 and 16 participants (22,42,43). Expanding the curriculum to learners from other programs is needed to validate these tools externally. We considered a crossover cohort study design with the PGY-V class in the educational intervention group and the PGY-VI class as the control. However, during baseline testing, both classes reported novice-level procedural experience based on procedure logs (<10 procedures) and similar scores on their baseline subjective and objective assessments. We therefore decided to provide the multimodal training program to all 11 participating fellows.

A second limitation is the use of a single unblinded proctor for assessment, because of the scheduling conflicts involved in having a second proctor. The Clinician-Educator fellow was the proctor for the assessment and debriefing sessions because of the limited availability of the EEC faculty to proctor these sessions, given their clinical responsibilities. Anonymous grading was not feasible, because items on the mEBUS-STAT require direct observation. To minimize proctor bias, a scripted protocol was followed during the assessments (see online supplement).

Third, we considered that acquiescence bias or the learners' unrestricted procedural experience on real patients during the 3-month intervention period before posttest could lead to confounding results. This is unlikely, because the EBUS procedures logged by all 11 learners increased by an average of only five cases or fewer by the time of the posttest. The improvement in posttest scores is likely out of proportion to the experience gained by five or fewer additional EBUS cases.

Fourth, using the same questions and case scenarios for baseline testing and posttest can introduce test-retest bias. We minimized this with the 3-month washout period, by collecting all completed knowledge tests, and by not providing the MCQ answers to learners after the pretest.

Fifth, there can be issues with realism and transfer with all types of simulation training. For example, lymph nodes were easier to identify on the simulator than in real patients because of a more defined separation between nodes and vessels. Because additional tools such as a stylet or syringe are not available with the BRONCH Express, we supplemented the debriefing session with a real EBUS needle tool to bridge the realism gap. We were unable to assess learners during real patient cases because of scheduling conflicts and the limited availability of the proctor. Still, all faculty commented on a notable improvement in the bronchoscopic skills of the learners who went through this curriculum compared with the prior graduated classes who went through the traditional apprenticeship model.
Finally, although it would be desirable for every novice operator to receive simulator training before patient exposure (44), EBUS simulators are a fragile and expensive resource, which may limit their widespread use. The Simbionix BRONCH Express costs $25,000 to purchase. In addition to equipment costs, conducting simulation training is time intensive and requires dedicated trained personnel and/or teachers. Although an assessment based solely on simulator-generated metrics does not require additional resources from busy educators, a simulator cannot replace a comprehensive curriculum to achieve all learning objectives. Simulation-based education can be an excellent tool for training novice fellows to acquire technical skills, complementing traditional apprenticeship training to improve procedural skills rapidly, but further studies are needed to test the validity of this curriculum.

Conclusions

A multimodal simulation-based EBUS curriculum can improve novice learners' EBUS-related knowledge, self-confidence, and technical skills. A validation study is needed to determine if skills attained via a simulator can be replicated in a clinical setting.
2022-07-02T15:03:45.212Z
2022-06-01T00:00:00.000
{ "year": 2022, "sha1": "84906af1cd415b41e6e6b9a6f9390204bca75145", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.34197/ats-scholar.2021-0046oc", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d37801e2890ea4add3d338633abce9fca09cb326", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
265791496
pes2o/s2orc
v3-fos-license
Serum Neurofilament Light Chain in Replication Factor Complex Subunit 1 CANVAS and Disease Spectrum

Abstract

Background: Biallelic intronic AAGGG repeat expansions in the replication factor complex subunit 1 (RFC1) gene were identified as the leading cause of cerebellar ataxia, neuropathy, vestibular areflexia syndrome. Patients exhibit significant clinical heterogeneity and variable disease course, but no potential biomarker has been identified to date. Objectives: In this multicenter cross-sectional study, we aimed to evaluate neurofilament light (NfL) chain serum levels in a cohort of RFC1 disease patients and to correlate NfL serum concentrations with clinical phenotype and disease severity. Methods: Sixty-one patients with genetically confirmed RFC1 disease and 48 healthy controls (HCs) were enrolled from six neurological centers. Serum NfL concentration was measured using the single molecule array assay technique. Results: Serum NfL concentration was significantly higher in patients with RFC1 disease compared to age- and sex-matched HCs (P < 0.0001). NfL level showed a moderate correlation with age in both HCs (r = 0.4353, P = 0.0020) and patients (r = 0.4092, P = 0.0011). Mean NfL concentration appeared to be significantly higher in patients with cerebellar involvement compared to patients without cerebellar dysfunction (27.88 vs. 21.84 pg/mL, P = 0.0081). The association between cerebellar involvement and NfL remained significant after controlling for age and sex (β = 0.260, P = 0.034). Conclusions: Serum NfL levels are significantly higher in patients with RFC1 disease compared to HCs and correlate with cerebellar involvement. Longitudinal studies are warranted to assess its change over time.

Biallelic AAGGG repeat expansions in intron 2 of the gene encoding replication factor complex subunit 1 (RFC1) were identified as the cause of cerebellar ataxia with neuropathy and vestibular areflexia syndrome (CANVAS) and disease spectrum (here shortened as RFC1 disease).1 Affected individuals exhibit significant clinical heterogeneity, starting with an isolated sensory neuropathy, with or without chronic cough, and progressing to a more complex ataxia with cerebellar dysfunction in later disease stages.2,3 Bilateral vestibular areflexia is also often present but can be easily overlooked if not tested for.[10][11][12][13] Increased CSF or serum NfL levels in patients carrying biallelic RFC1 expansions have been observed in two single cases.14,15 However, there is still scant evidence of its diagnostic role or its ability to reflect progression of the condition over time. In this multicenter cross-sectional study, we aimed to assess whether serum NfL may represent a potential biomarker for RFC1 disease using ultrasensitive single molecule array (Simoa) immunoassay technology. Furthermore, we sought to evaluate the correlation between NfL concentration and clinical phenotype and severity.
Patients' neurological history and clinical signs, including the presence of sensory neuropathy, cerebellar dysfunction (defined by the presence of one or more of the following signs: broken pursuits, dysmetric saccades, gaze-evoked or downbeat nystagmus, dysarthria, and dysphagia), vestibular areflexia, dysautonomia, cognitive impairment, and parkinsonism, were collected based on a standard template. Patients with other neurologic diseases were excluded. In prospectively collected data, neurological examinations were performed at serum collection. In the 11 patients whose serum was retrieved from biorepositories, examination was performed at a mean of 15 months from serum sampling (ranging from 24 months before sampling to 12 months after sampling).

Based on the known progression of the disease from an isolated sensory neuropathy to a complex neuropathy with cerebellar dysfunction or full CANVAS,3 we divided patients into two clinical subtypes as a proxy of disease severity: (1) RFC1 disease without cerebellar involvement, which includes patients with isolated sensory neuropathy/neuropathy with bilateral vestibular areflexia, and (2) RFC1 disease with cerebellar involvement, which includes patients with complex sensory and cerebellar ataxia or full CANVAS.[10][11][12][13][14][15][16] Loss of independent ambulation was also considered as a marker of advanced disease.

Blood sampling and storage were conducted following a standard operating procedure at each of the six different centers. In particular, blood was collected into serum-separating tubes and centrifuged at 20°C at 3500 rpm for 10 min. Serum was then aliquoted and stored at −20°C. Samples were anonymized and sent blinded for clinical details to University College London (A.H., H.Z.) for analysis of NfL levels. Serum NfL concentration was measured using the Simoa NfL assay on an HD-X analyzer (Quanterix, Billerica, MA, USA) in one round of experiments with one batch of reagents. Four quality control samples were run in duplicate; the mean intra-assay coefficient of variation of duplicate determinations for concentration was 6.9%.

Demographic and clinical data were described as mean (SD) or median (interquartile range) if normally or non-normally distributed, respectively. Data normality was assessed using Q-Q plots and analytical tests (Kolmogorov-Smirnov test, Shapiro-Wilk test, and Anderson-Darling test), requiring consistent results from all tests to confirm normality. Means between groups were compared using the t test for normally distributed data and the Mann-Whitney U test for non-normally distributed data. Correlations were assessed using Spearman's or Pearson's coefficients, as appropriate for data distribution. We conducted a multiple linear regression analysis to examine the relationship between serum NfL (dependent variable), cerebellar involvement, age at time of blood collection, and sex (independent variables).

The study protocol was approved by the local institutional review boards and ethics committees. Written informed consent was obtained from all patients and HCs. The study adhered to all applicable ethical regulations.
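The statistical workflow described above (normality checks, two-group comparisons, correlations with age, and a multiple linear regression of serum NfL on cerebellar involvement, age, and sex) can be illustrated with a brief sketch. The following Python example is a minimal, hypothetical illustration using SciPy and statsmodels; the simulated data and the column names are assumptions made for demonstration and do not come from the study dataset.

```python
# Minimal sketch of the analysis steps described above, on simulated data.
# Column names (nfl, age, sex, cerebellar) are illustrative assumptions.
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 61
df = pd.DataFrame({
    "nfl": rng.lognormal(mean=3.2, sigma=0.3, size=n),   # serum NfL (pg/mL)
    "age": rng.normal(67, 11, size=n),                    # age at blood collection
    "sex": rng.integers(0, 2, size=n),                    # 0 = female, 1 = male
    "cerebellar": rng.integers(0, 2, size=n),             # cerebellar involvement (0/1)
})

# Normality check (one of the tests named in the text)
print("Shapiro-Wilk:", stats.shapiro(df["nfl"]))

# Group comparison: Mann-Whitney U for non-normal data, t test otherwise
with_cb = df.loc[df["cerebellar"] == 1, "nfl"]
without_cb = df.loc[df["cerebellar"] == 0, "nfl"]
print("Mann-Whitney U:", stats.mannwhitneyu(with_cb, without_cb))

# Correlation between NfL and age (Spearman shown; Pearson if both normal)
print("Spearman:", stats.spearmanr(df["nfl"], df["age"]))

# Multiple linear regression: NfL ~ cerebellar involvement + age + sex
model = smf.ols("nfl ~ cerebellar + age + sex", data=df).fit()
print(model.summary())
```

On real data, the regression coefficient for cerebellar involvement would correspond to the adjusted association reported in the Results that follow.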
Results

A total of 61 patients and 48 HCs were enrolled in the study. The demographics and clinical characteristics of participants are summarized in Table 1. There was no significant difference in the mean age at sample collection of the two groups (P = 0.13) or in sex distribution (P = 0.823). Mean age at blood collection in patients with RFC1 disease was 67.08 years (±10.70), with a mean age of onset of 55.16 years (±10.85) and a median disease duration of 11 years (6-15). Overall, 42 patients (69%) had signs and/or symptoms of cerebellar involvement. At the time of evaluation, 27 patients (44%) required walking aids. In 7 patients (11%) vestibular function was not assessed.

No significant difference in serum NfL levels was observed between stored samples and samples collected prospectively (23.18 vs. 26.62 pg/mL, P = 0.22).

Receiver operating characteristic analysis showed that serum NfL levels could discriminate patients from controls with great accuracy (AUC of 0.9262, 95% CI [confidence interval]: 0.88-0.97) (Fig. 1B). A concentration of 15.86 pg/mL can effectively identify individuals with RFC1 disease with a sensitivity of 92% and a specificity of 81%.

The significant difference in serum NfL concentration compared to controls was maintained for individual comparisons of controls versus patients without cerebellar involvement (P < 0.0001), and of controls versus patients with cerebellar involvement (P < 0.0001). Also, mean NfL concentrations were significantly higher in patients with cerebellar involvement compared to patients without cerebellar dysfunction (27.88 vs. 21.84 pg/mL, P = 0.0081) (Fig. 1D), and the association remained significant after controlling for age and sex in a multiple linear regression model (β = 0.260, 95% CI: 0.411002, P = 0.034). Conversely, there was no significant difference in NfL levels between patients without signs of vestibular dysfunction (n = 13) and patients with vestibular impairment (n = 41) (25.28 vs. 26.25 pg/mL, P = 0.7061). Serum NfL levels did not appear to correlate with disease duration (r = 0.014, P = 0.917) and did not differ between patients with independent walking (n = 42) and patients using a walking aid (n = 27) (24.12 vs. 28.36 pg/mL, P = 0.0820). One patient had clinically manifest parkinsonism, with an NfL value (31.78 pg/mL) above the 75th percentile, and 3 patients presented with clinical signs of dysautonomia (mean NfL: 28.82 ± 6.26 pg/mL). Cognitive impairment was not reported in any patients.

Discussion

This study is the first to investigate serum NfL levels as a biomarker in RFC1 disease using ultrasensitive Simoa technology. We found significantly higher serum NfL levels in patients with biallelic RFC1 expansions compared to HCs of the same age and sex. Elevated NfL levels were observed in various clinical phenotypes, including isolated sensory neuropathy, which is more common in early disease stages. NfL levels demonstrated excellent discriminatory power, supporting its potential as a reliable biomarker in various neurodegenerative ataxias,12,13,[17][18][19] as well as in hereditary neuropathies.8,10 NfL is an axonal cytoskeletal protein released after axonal damage. Abnormal NfL serum levels in our patients reflect the pathology and progression of RFC1 disease.[21][22] A recent brain magnetic resonance imaging study also showed basal ganglia and brainstem volumetric reduction and involvement of the cerebral white matter in cases with advanced disease,23 suggesting widespread cerebral neurodegeneration.
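The receiver operating characteristic analysis reported in the Results above (an AUC of about 0.93 and a 15.86 pg/mL cutoff with 92% sensitivity and 81% specificity) can be sketched with a standard routine. The Python example below is a hypothetical illustration on simulated values using scikit-learn; selecting the operating point by Youden's index is an assumption for demonstration, since the paper does not state which criterion was used.

```python
# Hypothetical ROC sketch for serum NfL as a classifier of RFC1 disease
# versus healthy controls. Simulated data; cutoff selection by Youden's
# index is an illustrative assumption, not taken from the paper.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(1)
nfl_controls = rng.lognormal(mean=2.4, sigma=0.35, size=48)  # healthy controls
nfl_patients = rng.lognormal(mean=3.3, sigma=0.35, size=61)  # RFC1 disease

y_true = np.concatenate([np.zeros(48), np.ones(61)])  # 1 = patient
scores = np.concatenate([nfl_controls, nfl_patients])

auc = roc_auc_score(y_true, scores)
fpr, tpr, thresholds = roc_curve(y_true, scores)

# Operating point maximizing Youden's J = sensitivity + specificity - 1
best = np.argmax(tpr - fpr)
print(f"AUC = {auc:.3f}")
print(f"Cutoff = {thresholds[best]:.2f} pg/mL, "
      f"sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")
```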
We have also demonstrated that patients with cerebellar damage have significantly higher levels of serum NfL than those without cerebellar involvement. This correlation persisted after correcting for age and sex. This may be explained by the high density of neurons in the cerebellum, which would lead to a significantly increased release of NfL into the bloodstream when this structure is affected.

Conversely, serum NfL did not appear to correlate with disease duration or the need for walking aids. An explanation of this phenomenon is that NfL may increase rapidly in the initial stages of the disease and then reach a plateau above a certain degree of severity, as observed in other genetic ataxias.12,13,18 Other possible explanations entail the difficulty in accurately defining the onset of the disease, because neuropathy symptoms may remain unnoticed for a long time, or the chronologically variable involvement of the cerebellum in early or late disease stages relative to onset, due to factors yet to be explored, including the repeat size and additional genetic modifiers. Also, no significant difference in serum NfL levels was found based on vestibular involvement. This could be due to the limited quantitative relevance of Scarpa's ganglion, which contains around 20,000 neurons, compared to the cerebellum's 50 billion neurons.

This study has some limitations. First, the small sample size limits the statistical power of the study. Second, the relatively wide range in the time interval between serum collection and neurological examination (within 24 months at the most) may have introduced variability. However, previous studies in patients with spinocerebellar ataxias and Charcot-Marie-Tooth disease showed no significant difference in serum NfL concentration after 1 or 2 years,8,24 suggesting stability of NfL levels over the short term in patients with slowly progressive neurodegenerative diseases. Finally, the cross-sectional design of the study restricts causality assumptions and does not allow us to assess changes in serum NfL levels over time.

In conclusion, we have demonstrated that serum NfL holds promise as a reliable biomarker in RFC1 disease, as NfL levels are elevated even in the early stages of the disease and exhibit a correlation with cerebellar involvement. Given the significant interindividual variability, NfL may prove to be a valuable tool for stratifying patients, and longitudinal studies are warranted to assess its ability to monitor disease progression.
FIG. 1. (A) Increased serum NfL concentration in patients with RFC1 disease compared to healthy controls (HC). (B) Receiver operating characteristic curve of serum NfL concentration for detecting patients with RFC1 CANVAS and disease spectrum. (C) Correlation between serum NfL concentration and age in both patients carrying RFC1 expansions and HCs. (D) Increased serum NfL concentration in patients with cerebellar involvement compared to patients without cerebellar dysfunction. Both subgroups show higher concentrations compared to HCs. Line is at mean. Error bars: standard deviation. A P-value less than 0.05 is flagged with one star (*), a P-value less than 0.01 with two stars (**), and a P-value less than 0.001 with three stars (***). NfL: neurofilament light. CANVAS: cerebellar ataxia, neuropathy, vestibular areflexia syndrome.
2023-12-07T06:17:14.740Z
2023-12-06T00:00:00.000
{ "year": 2023, "sha1": "962902ac8d7687c6ea5b3d9074b630725a753343", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/mds.29680", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "ce2b565d1a7c828c597fa22819ef1d4787cb29b8", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
55646169
pes2o/s2orc
v3-fos-license
Bishiklik Petroglyphs in Neyshabur County, Northeastern Iran

Neyshabur County is located in the North East of Iran. This county borders the Binaloud mountain range and Qochan County to the north, Esferayen and Sabzevar counties to the west, Kashmar and Torbate Heydarieh counties to the south, and Mashhad County to the east. Neyshabur County has witnessed human settlements in different periods because of its fertile plains and abundant hydrological resources. The Bishiklik petroglyphs are located within 30 km of the Neyshabur-Kashmar road (6 km southeast of Kalateh Hassan Abad village), with direct access via a dirt road passing through the village. A few motifs have been engraved on the smooth surface of a large boulder by percussion and carving techniques. The depth and color of the motifs, even those in the same scene, differ; the depth of some motifs has been eroded over time by natural factors and is no longer visible, as they have become level with the boulder surface. The engraved motifs include stylized animal motifs such as canines, a human-on-horseback motif, and the mountain goat with large, exaggerated backward horns, which forms the majority of the motifs in the petroglyphs of Bishiklik. Unfortunately, despite the large number of rock engravings (rock art) throughout Iran, this rock art has rarely been subject to research and precise analysis. This challenge becomes even more pronounced in the North East of Iran (Khorasan), the poorest region in terms of archaeological research. These issues, as well as the poor state of the Bishiklik petroglyphs, made us evaluate and introduce this valuable relic.

Introduction

Rock art, or petroglyphs carved on stone and boulders, is an ancient means by which humans expressed beliefs and thoughts. Rock art dates back over 30 thousand years; however, the study of prehistoric motifs in Iran is relatively recent in comparison with other archaeological studies. Rock art has managed to display the first known signs of artistic sensitivity and aesthetics of the distant ancestors of humans in many parts of the world with a significant representation (Rafifar, 2002: 46). There are many similarities in motif themes: the petroglyphs frequently involve hunting scenes, battles and single images of a mountain goat. In terms of implementation technique, the engravings have been carved and threshed, created by tapping on the rock face with a rigid object to a shallow depth. Stone engraving themes are affected by the geographical, cultural and environmental conditions of each region. Rock engravings known today span a large spectrum of periods, from the Paleolithic to the contemporary era. The main problem with these engravings is the lack of absolute chronology; their history and specific period can only be specified through comparative chronology (Rezayi & Judi, 2010: 3). The study of petroglyphs in Iran was first undertaken by Italian researchers. In 1958, when a group of Italian geologists were busy exploring and extracting minerals in the Baluchistan region, they discovered a number of rock paintings in the Gazu district (Dessau 1960). This discovery can be regarded as the first research on petroglyphs in Iran (Mohammadi Qasrian, A. 2007: 19). The preliminary report of McBourny after examining the rock engravings of Douche and Mirmalas in 1969 can be considered the first preliminary review of petroglyphs in Iran (Bourny, 1969: 14-16).
The most important and profuse collection of rock engravings in Iran today is Timereh, whose motifs were published in a comprehensive book in 1998 (Farhadi, 1998: 65-66). Although these works of art have been identified in many parts of Iran, there are still many uncertainties about their function and meaning, which prevents accurate determination of their exact cultural-historical position. The heritage of Iranian civilization for the world arises from its rich, peace-loving culture. Present-day petroglyphs (rock art) form a significant portion of this valuable heritage, yet they have seldom been subject to precise research despite their multitude throughout Iran; this is a particular challenge in the North East of Iran (Khorasan), the poorest region in the field of archaeological research. These issues on the one hand, and the poor state of the Bishiklik petroglyphs in Neyshabur on the other, made us evaluate and introduce this valuable work. Although this article may not be capable of fully expressing the capacities and characteristics of this work, we do our best to register and record the endangered data of these valuable petroglyphs.

Geographical Situation and Relief of the County

Neyshabur County is located in the North East of Iran. The surface area of this county is 9308 square kilometers, and it is bounded to the north by the Binaloud mountain range and Qochan County, to the west by Esfarayen and Sabzevar counties, to the south by Kashmar and Torbate Heydarieh counties, and to the east by Mashhad County (Figure 1). Neyshabur Plain is situated on the slopes of the Binaloud mountains, the highest mountains of Khorasan (Taheri, 2005: 7; Velayati 1988: 96). The foothill plain of Neyshabur comprises foothills and southern lowlands in terms of geology, and includes suitable hydrological factors, along with other conditions, providing for human settlements in different historical periods (Basafa, 2011: 58). Neyshabur Plain has a moderate slope towards the southwest. The importance of this plain in past centuries and now is due to the presence of four rivers that originate from Binaloud and all drain into the Kalshoor River (Taheri, 2005: 7). Ecological potentials such as matter, energy, space, time and diversity (Watts 2007: 60) have made this plain one of the most fertile areas of Khorasan. According to geologic conditions and considering the natural and cultural processes, Neyshabour Plain is not directly affected by deserts (Riyazi 1992: 21). Thus, this plain has natural and biological resources, is not faced with space limitations and has had the necessary factors for the development of human societies in long-term cultural processes (Garazhyan 2008: 3). Of the total surface area of Neyshabur County, about 5500 square kilometers is formed by plain and the rest by highlands. The northern heights have provided a good substrate for human settlements through chronological processes, as well as proper climatic conditions and water resources. This mountain range is called Binaloud, has a northwest-southeast trend and is the continuation of the Alborz mountain range in Khorasan. The highest point of the mountain is the Binaloud summit, at 3200 to 3400 meters above sea level, causing it to be known as the roof of Khorasan. The climate of Neyshabur differs with respect to the height of its southern and northern regions.
The climate is temperate and cold in the northern and southern mountains, temperate in the central plains, and the area is considered a dry region in terms of rainfall (Taheri, 2005: 8). It seems necessary to address some of the climatic characteristics of the Reysi region where the rock engravings are located, as the engravings are directly influenced by these parameters. The winds, one of the important elements of this climate, have affected the geology of the region, which is situated in a monsoon corridor. On the other hand, this region is an example of a desert ecosystem in which torrential downpours cause erosion and leaching of the soil. The percentage of minerals in the soil of this region is noteworthy, which prevents the formation of rich vegetation.

The Position of the Petroglyphs

The Bishiklik rock engravings, at coordinates N 35 53 30, E 58 3 9 20, are located within 30 km of the Neyshabur-Kashmar road (6 km southeast of the village of Kalateh Hassan Abad), with access via a dirt road passing through the village (Figure 2). Mr. Ahmad Reza Salem and Mr. Momen Nejad first presented a brief report on this rock engraving on the Iranian petroglyphs website. The location of the rock engravings is known to the locals as Kamar Beshkeli, and the Bishiklik petroglyphs are known to them as the talisman stone. A 3 m deep pit has been dug under this boulder, probably in search of treasure. In the Kamar Bishikli region, boulders separated from the mountain are found in the foothills, and different animal motifs (usually the mountain goat) have been engraved on them (Figure 3). The largest boulder on which the engravings are found is nearly 4.20 m high and 3.20 m wide. The motifs have been engraved on the smooth surfaces of small and large boulders, which show cracks and lamination due to the nature of the stones and erosion over time.

Diversity, Theme of Rocks and Quality of the Motifs

On the smooth surface of the large boulder (12 motifs) and two smaller boulders (2 motifs), motifs have been engraved singly and in sets using threshing and carving techniques; they are probably related to the prehistoric era. All the motifs have been carved by the negative carving method, by percussion with igneous rocks, animal tibia bones and metal tools. The depth and color of the motifs, their erosion pattern and their retention are the factors that can be considered to determine the dating and history of their creation. The depth and color of the designs, even those in the same scene, are different. The depth of some motifs has been so eroded over time through the influence of natural factors that it is almost level with the surface of the rock. Other motifs have been covered with patina and are of the same color as the rock, so that motifs with long backward horns larger than real size are not clearly visible. The petroglyph motifs include stylized animal motifs, mainly the mountain goat. Approximately 80 percent of the motifs in Bishiklik are related to this animal. The mountain goat motif is the major motif of the petroglyphs of Iran. Nearly 80 percent of the rock engravings of Iran show the mountain goat motif with long symbolic horns implying the message of "water, rain, abundance, guarding the moon, guardian and savior of policy". This shared verbal and oral message on pottery has remained from the Neolithic and especially the Chalcolithic period.

Detailed Description of Motifs

In general, the motifs include designs of different animals (mostly goats) and an image of a man on horseback (Table 1).
As noted above, the mountain goat is the most frequent motif among the animal motifs. We describe the motifs in detail below. The motifs are coded with English letters and are specified in Figure 4.

Goat motif: Goat motifs show a variety of designs in terms of morphology. The mountain goat is seen in a very high percentage compared to other animal motifs (all in the form of carving and threshing). The images are in profile and the motifs are overall stylized; body details are not displayed, and the goats have long exaggerated horns disproportionate to the body (Figure 5).

Wild animal motif (?): There is a stylized motif of an animal from the canine family (probably a wolf) among the motifs (Figure 6). As these motifs have been created using the threshing technique, and because of the damage to them, it is difficult to identify the animal (Figure 7).

Human motifs: The only motif in this category shows the profile of a person riding on horseback in a stylized fashion; body details of the rider have not been indicated. This motif has been carved by threshing and has been largely destroyed by natural factors (Figure 8).

Horse motif: As noted above, only one horse motif is visible among the animal motifs, showing a person on horseback in motion (Figure 8).

Discussion

Chronology, one of the most important issues concerning rock engravings, has remained elusive and unanswered, not only for the newly found motifs but also for those in other parts of the country. As mentioned by Rafifar concerning rock art, in general the petroglyphs in most cases have no direct relationship with the land and its ancient layers (except for those observed in caves, when it is clear that the ancient layers in the cave were contemporary with the engraved motifs). Therefore, rock engravings in an open space on natural boulders cannot be easily dated (Rafifar, 2002: 70). Archaeological survey of the landscape where these motifs are located is a basic approach to the chronology of the petroglyphs. In this regard, the settlement radius of the areas subject to archaeological studies, as well as similar works surrounding the stone engravings, can be useful. The nature of the rock on which the motifs have been carved, the depth of the engraved motif on the rock surface, the geographical and climatic factors, the location of the motifs (rocky shelters, caves or open areas), adequate knowledge of the type and texture of the sediments that cover the rock surface and motifs over time, and the way these sediment layers have formed on the surface of the stone and motifs can all be useful in chronological studies (Mohammadi Qasryan, 2006: 63-64). Another current method of chronology is typology and evaluation of the images. Observation of specific datable signs in the motifs (lines, motifs of tools and harnesses, animals such as the horse, etc.), their attribution to specific periods of time and knowledge of the dates they were used are among the methods of determining the chronology of petroglyphs. Motifs similar to those in Bishiklik can be seen in the mountains surrounding Torqabeh, Shandiz and the Toos Plain in Khorasan Razavi Province (Bakhtyari Shahri, 2009), Jarbat and Nargaslu in North Khorasan (Vahdati 2010), Lakh Mazar in Birjand (Labaf Khaniki, 1997), and Sarbisheh Byzham, Makhunak and Nehbandan in South Khorasan. In other parts of Iran, rock engravings bearing motifs similar to our samples are relatively abundant.
The mountain goat image is visible on rocks in many regions of Iran, including the samples identified in Kurdistan (southeast of the city of Mahabad, Shahin Shahr, etc.) (Pedram 1994: 79), the rock engravings of Arnan in Yazd (Shahrzadi, 1997: 142) and the rock art of LeQan in East Azerbayejan (Rafifar, 2004: 111).

Conclusion

It should be noted that no indication has been found so far of recent creation of these works. Local anthropological studies indicate that the local people and their ancestors were not involved in the creation of such motifs. In terms of motif studies, a number of styles related to different periods can be distinguished for the petroglyphs in question. Some motifs are completely organic and nature-oriented, and can be assumed to be the earliest motifs of this collection in terms of typology and the extent of sediment cover. A remarkable feature of the set of rock motifs in Bishiklik is that they mostly represent the mountain goat with long arcuate horns. The image of this animal can be observed from the early Neolithic period in different formats (pottery designs, effigies, etc.) in ancient Persia. Motifs of this animal are visible on rocks in different areas and on cave walls, and on pottery containers from Sialk, Susa, Ismael Abad, Tale Bakun, etc. In this respect, the rock motifs of Bishiklik, which often show the mountain goat, are similar to the pottery designs of the Chalcolithic period.
2019-04-25T13:12:04.538Z
2016-04-27T00:00:00.000
{ "year": 2016, "sha1": "c1b5d5bb5a335de58b9f4a5fa21aa57b63718ddf", "oa_license": "CCBY", "oa_url": "http://article.sciencepublishinggroup.com/pdf/10.11648.j.ija.20160402.11.pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "59dcde79be3a509fe63752ea3453530c4cf6de3a", "s2fieldsofstudy": [ "Geology" ], "extfieldsofstudy": [ "Geography" ] }
201064136
pes2o/s2orc
v3-fos-license
Resisting big data exploitations in public healthcare: Free riding or distributive justice?

We draw on findings from qualitative interviews with health data researchers, GPs and citizens who opted out from NHS England's care.data programme to explore controversies and negotiations around data sharing in the NHS. Drawing on theoretical perspectives from science and technology studies, we show that the new socio-technical, ethical and economic arrangements were resisted not only on the basis of individual autonomy and protection from exploitation, but also as a collective effort to protect NHS services and patient data. We argue that the resulting opt-outs were a call for more personal control over data uses. This was not because these citizens placed their personal interests above those of society. It was because they resisted proposed arrangements by networks of stakeholders, not seen as legitimate, to control flows and benefits of NHS patient data. Approaching informed consent this way helps us to explore resistance as a collective action for influencing the direction of such big data programmes towards the preservation of public access to healthcare as well as the distribution of ethical decision-making between independent, trustworthy institutions and individual citizens. What did resistance to care.data elucidate on how data from electronic health records could be exploited as a public good? What could be the role of opt-outs from big data programmes in the context of national health systems?

Introduction

Data-based research from electronic health records (EHRs) has been growing steadily in the UK for more than two decades (Vezyridis and Timmons 2016a). After substantial investments in clinical information systems across the NHS, academic and governmental research databases (e.g. CPRD, THIN, QResearch, ResearchOne, CALIBER) are exploiting this data for epidemiological, pharmaceutical and health services research. The UK government announced (in 2013) its own programme called 'care.data'. Led by NHS England and the Health and Social Care Information Centre (HSCIC, now NHS Digital), its aim was to collect, de-identify and link, in one central database, clinical and administrative datasets across the NHS and social services (Taylor 2014) for various uses, including research. However, due to a public outcry, which resulted in approximately 1.5m opt-outs, the programme never fully materialised and was eventually terminated in 2016 after the National Data Guardian's (NDG) review (Lea 2016). Care.data's failure has been attributed to poor communication, security risks and ambiguous information governance and data dissemination practices (HSCIC 2014, Vezyridis and Timmons 2017). Similar national programmes, for example in Denmark (Wadmann and Hoeyer 2018), are also under attack for their opaque dissemination practices in support of governmental and commercial contexts of exploitation. With the exception of some patient populations, the public seems to be largely supportive of health data sharing for research (Aitken et al. 2016), but at the same time confused and sceptical when it comes to claims of public benefit, transparency, anonymisation and consent, especially for commercial purposes (Castell and Evans 2016). The public is largely unaware of how its data is being used (Hill et al. 2013) by corporations and state agencies, often without clear public benefits (Aitken et al. 2016).
Most of the literature has focused on public or healthcare professional attitudes to health data sharing, but not on analysing these two groups together, or involving the researchers who are using these datasets. More studies focusing specifically on patients-citizens who have opted out are needed (Rosenbloom et al. 2013), as well as studies on the public's acceptance of 'authorisation' (institutional decision-making for data releases without informed consent) as a data governance mechanism (Aitken et al. 2016). This research was designed to address these gaps, particularly for national data sharing programmes in healthcare, by focusing on three groups involved in care.data: health data researchers, General Practitioners (GPs) and citizens who have opted out from, or have campaigned against, the programme. We do this by applying an Actor-Network Theory (ANT) (Latour 1988) approach to ask: what did resistance to care.data elucidate on how data from electronic health records could be exploited as a public good? What could be the role of opt-outs from big data programmes in the context of national health systems?

ANT is a set of methodological tools to explore the social as a heterogeneous entanglement of ever-stabilising and proliferating associations between human and nonhuman entities (Latour 1988). ANT has been used to study ethical controversies in biomedical innovation and research (Williams-Jones and Graham 2003, Hayden 2007, Gardner and Webster 2017, Heeney 2017). Rather than studying moral frameworks from idealist viewpoints, it explores the practices of doing ethics within the material-semiotic networks, without making value judgements and by being attentive to the ontologies and normativities involved in mundane practices (Mol 2013). Previously we examined the scientific impetus and rationale for exploiting NHS patient datasets (Vezyridis and Timmons 2016a), the state's techno-legal facilitation of decontextualised NHS patient data movements (Vezyridis and Timmons 2017) and the design affordances of the opt-out form (Vezyridis and Timmons 2016b). Here we focus our analysis on 'the dark side of the translation process and the disruption of the actor-network' (Galis and Lee 2014: 155) to understand how concerned citizens understand and practice ethics, politics and dissidence via the opt-out. We do not do this in isolation but in relation to the other protagonists (Latour 1988) of this hybrid controversy (GPs and health data scientists), who are directly involved in the collection and analysis of NHS patient data and are also affected by the opt-outs. By avoiding linear and deterministic macro-sociological examinations of adoption and acceptance of technoscientific innovations, which often ignore complex power relationships, the emphasis here is on the (in)visible sociomateriality of this assemblage in the context of deliberative politics (de la Bellacasa 2011, Welsh and Wynne 2013). There are varied reasons for patients-citizens to opt out from big data programmes and we do not attempt to present an exhaustive list. Rather, we are concerned with collective forms of resistance and with how the opt-outs are 'enacted' (Mol 2002) here and now as one effect of controversial entanglements of ethical, social, political, economic and scientific practices; away from simplistic debates over altruism vs. egoism and closer to questions of (re)distributions of power and benefits.
Power and ethics for data movements

Science and Technology Studies (STS) has produced an extensive set of analytical tools to demonstrate a situated, relational and performative understanding of power dynamics and their ethics (Galis and Lee 2014) in the making of technological projects and scientific knowledge. For example, Callon's (1984) notion of 'translation' has been widely used to analyse sociotechnical processes of bringing together and forming stable material-semiotic networks of associated human and non-human actors responsible for the launch and maintenance of a technological innovation. STS emphasises that the fate of technoscientific projects is often unpredictable, and always contingent on power struggles weakening or strengthening network ties. These power struggles align interests and values and thus negotiate and delimit, against many other possibilities (Williams-Jones and Graham 2003), 'the identity of actors, the possibility of interaction and the margins of manoeuvre' (Callon 1984: 203). In his review of approaches to power, Law (1991) argues that stable sociotechnical networks are the relational outcome of power balances ('power storage') where the actors involved have either accepted the new power dynamics or are able to exercise their own 'power/discretion'. In this line of ANT research, Latour (2004b) considers translation as the (political) process of resolving the 'matters of concern' amongst enrolled actors so as to render the resulting technoscientific assemblage a hard-to-refute 'matter of fact', while for de la Bellacasa (2011) it is about actors being able to 'care' for the things with which they are not only materially and ethico-politically but also affectively concerned (Latour 2004b). Thus, a programme like care.data is theorised (in this paper) as a political performance full of tensions around possibilities of data exploitations and 'different ways of doing the good' (Mol 2002: 176). The paper critically questions which actors attempt (and how) to become the 'obligatory passage points' (OPPs) (Callon 1984) for the 'decontextualization, dissociation and detachment' (Callon 1998: 19) of NHS patient datasets to increase circulations (Leonelli 2016) within and beyond the network of public healthcare. The paper also highlights how ontologically and normatively multiple (Mol 2002) datasets and data subjects come into being, as they are framed (Callon 1998) and scripted (Akrich 1992) by scientific, governmental or corporate 'centres of calculation' (Latour 1988). As such, care.data provided the opportunity to examine not only the infrastructures and calculations attempted between actors (Callon 1998) for big data exploitations in public healthcare, but also how (public) values are prioritised, stabilised or obscured (Gitelman 2013) as they are hybridised in programmes that blur the borders 'between science and politics, culture and technology, morals and economy' (Venturini 2010: 265). For this, STS scholars' understanding of publics and their values regarding power struggles between actors, as they set boundaries and facilitate intersections (Dussauge et al. 2015, Montgomery 2017), is of importance to this paper. Rather than being self-evident or universal, values are theorised here as things made in practice (Leonelli 2016, Dussauge et al. 2015), which are meaningful only within the networks in which they are enacted (West and Davis 2011).
While for Marres (2005) publics are 'ungraspable' and abstract entities, they are nevertheless formed by actors who gather together in 'democratic collectives' (Latour 2004a) to exert pressure and negotiate their participation in boundary-making practices (Jasanoff and Kim 2009). In light of this, it could be argued that a public that has opted out from care.data did not necessarily pre-exist, but was formed to respond (as Welsh and Wynne (2013) would argue) to any colonisations of meanings, matters of concern and values by particular (institutional) 'spokespersons' who sought to represent this public (Callon 1984). We thus highlight the inscriptions, power relationships and normativities that the opt-out mediated between the actors associated with this controversial big data programme.

Methods

The paper draws on the findings of a qualitative study carried out in 2016. Data were mainly collected through semi-structured interviews with three groups: primary EHR data researchers (n=11), GPs (n=7) and citizens (n=9) who had opted out from or campaigned against care.data. Participants (10 female and 17 male) were relatively older and well-educated. Researchers interviewed were working at the academic research databases mentioned previously, in roles including statistics, epidemiology, data architecture, administration, and research analysis. For the GPs, areas of expertise included clinical commissioning, data protection, ethics and academic health data-based research, while for citizens and campaigners their expertise included information governance, business analysis, (non-)academic research, social work and public healthcare. Findings are also informed by observations at the organisations studied and from attending team meetings, conferences, public consultations and courses on conducting research with data from GP records. Participant recruitment was completed via: (for citizens) social media (#caredata) and newsletters to a patient advocacy group (Healthwatch); (for researchers) targeted emails; (for GPs) targeted emails and bulletins. All the digitally audio-recorded and anonymised interviews were conducted by the first author in 2016 face-to-face (n=16), by phone (n=10) and via Skype (n=1), and lasted between 20 and 113 min (average duration 55 min). Following a constructivist grounded theory approach (Bryant and Charmaz 2007), data was analysed inductively, using QSR NVivo 11, to identify differences, relationships and patterns among participants' narratives, and to construct the conceptual understanding of their important meanings faithfully, inclusively and contextually. The study obtained ethical approval from Nottingham University Business School REC.

Resisting exclusion from public good management

The study found support for the principle of re-using large conglomerations of personal health data for wider societal benefit. However, the potential value of these datasets, translated by care.data as 'matters of fact' (Latour 2004b) for the improvement of health (and wealth), included a prescriptive policy normativity of exclusion of patients-citizens and GPs from co-shaping the exploitation of this public good (Jasanoff and Kim 2009, Welsh and Wynne 2013). The attempt by the programme's leaders to black-box the aggregation and assetisation of NHS patient datasets via an 'all or nothing' approach to informed consent (see also Vezyridis and Timmons 2017), and their limited engagement with public deliberation, destabilised the hybrid (Latour 2004b).
The 'illegal and completely cavalier way' (Citizen 11) in which care.data was to deal with their personal health data, together with the mandatory extractions for GPs, raised individual and societal concerns (Latour 2004b) around the construction of values and entanglements of NHS patient datasets within and beyond public healthcare.

..at some point I might opt back in, it's not that I'm against the principle, I just don't think [the programme's leaders] have got it right yet. (Citizen 1)

.. most people are altruistic, put up a good suggestion and you'll get cooperation, but if you come in all heavy-handed and say, we are going to, because we can, you're going to get resistance… what [programme's leaders] said was, we're going to use it for economic regeneration and we're not going to ask you first. (GP 1)

With its lack of tiered consent for patients-citizens, and without any meaningful choice about potential uses and users of NHS patient data (see also Sterckx et al. 2016), the opt-out attempted to inscribe in this network an unacceptable amplification of power asymmetries. By transferring rights of (obscured) exploitations to certain institutions, this 'interessement' device (Latour 1988) of (inadequate) personal control failed to enrol them, while making visible the lack of 'power/discretion' for the local network to participate actively in decision-making. Data transmissions were irrevocably destabilised (Law 1991), as some GPs and members of the public were seeking reassurances that these datasets would facilitate what they considered socially responsible research for public benefit, rather than these datasets being appropriated, and access to them restricted, for personal benefit (cf. Sterckx et al. 2016).

I think everybody wants the ability to say, I feel good about donating it because I know what's happening to it … But if that organisation then says, 'you don't have a say in how we spend your money', then you're not going to give them the money. (GP 2)

I would like to understand better who's going to benefit from it. So, if it's in the public interest …, I would always say yes … I wouldn't necessarily be happy that any information I gave would be able to be used by any researcher; for example, if a company was then going to buy my data and restrict the use of that information … because they were then going to be able to make a big profit from it … I would want to understand what would that mean for that drug company if they profit by using the information in Britain, but they can't get that same information in developing countries … are you then encouraging drug companies to only research … first world countries' problems? (Citizen 2)

In effect, the opt-out was translated as an ethical, political and affective rendering, and used as an active commitment to mediating responsible data exploitations (de la Bellacasa 2011) by redressing power imbalances (Latour 1988). The programme could then be reconfigured to address these citizens' matters of concern and care, rather than just protecting themselves at the expense of the society at large (cf. Sterckx et al. 2016). The attempt to place data exploitations in a transactional narrative, in which data extractions to the state were exchanged for NHS services, bifurcated research and care. This prioritised data collections and exploitations at the expense of the local relationships that produce the data (Montgomery 2017).
The use of the NHS was understood as an absolute entitlement of citizenship, independent of any other obligation, and there was a material and affective attachment to the NHS (de la Bellacasa 2011). The attempted enactment of a new reality (and normativity) (Mol 2002) in the use of NHS services, as an indistinguishable entanglement of healthcare provision, participation in data-based research and wealth creation, also shifted responsibility for maintenance from institutions to supportive individuals and publics (see also Marris 2015). This was perceived as 'a really radical change to the social contract that we have in this country' (Citizen 3).

… it is another sneaky manoeuvre that makes me lose confidence in the ethics of this programme … We have the right to privacy and confidentiality which means we have the right to exercise it without being shamed for it. And I think a lot of the people who have opted out would be inclined to opt in if their concerns about confidentiality and ethics were addressed. That's not freeloading, that's just being sensible. (Citizen 4)

… but there are all sorts of more objectionable ways that people get free rides at the moment. We don't deny transplants to people that don't carry a donor card, and I would argue that that's a much worse kind of free ride. (Researcher 1)

Enforcing transparency, accountability and trustworthiness

The care.data controversy also opened up the black-box of data extractions and releases. Matters of concern around (information) governance, agreements with obscure actors, assetisations, accountability and oversight were exposed and challenged, revealing complex and dynamic relationships that refused reduction. Socio-material practices of data exploitations, political and economic asymmetries between data producers and users, and prioritisations and stabilisations of ethical and economic values were all rendered visible and scrutinised for accuracy and conflicts of interest. Publicity campaigns, particularly via 'junk mail' methods, were deemed unsatisfactory (cf. Hays and Daker-White 2015). Citizens perceived as distorted NHS England's claims that data security was completely adequate and that no unintended harms would be caused by the proposed centralisation of all personal health data into one database. Confusing vocabularies and terminologies around data(bases) in the public debate further excluded concerned citizens from problematising and defining (Galis and Lee 2014) what is public and what is private, or good and bad uses. For policy-makers, care.data was only an issue of public acceptability and a question of which communication strategy about benefits could help the public overcome its 'privacy paranoia' and 'free riding', so as to manage the controversy away from more inclusive NHS patient data exploitations for public benefit (cf. Marris 2015).

What everybody's frightened of is that if people ever were asked to opt in, a lot of them wouldn't. (Researcher 1)

[Programme's leaders] have used those terms interchangeably as if there's no difference like opt out and various terms like, pseudonymised, anonymous, these terms have, in my view, purposefully been misused and used interchangeably to try to prevent debate and to try to paint anyone who raises criticisms as a conspiracy theorist. Because they're using terms that are meaningless. (Citizen 3)

The report by the Institute of Actuaries (Banthorpe et al.
2013) on accurate health insurance pricing, based on the analysis of hospital data provided by HSCIC, and the review of all NHS patient data releases that followed (HSCIC 2014) had been instrumental in raising public anxiety about data disseminations. These publications constituted an 'ethical moment' (Heeney 2017) that brought to the fore not only questions about the normative framework and institutional practices of data releases but also decisions to opt out. They made visible not only the incongruity between the programme's stated aims and the actual practices of the institution responsible for these datasets, but also unknown relationships between various 'centres of calculation' (Latour 1988).

..when it came to care.data [Patient Participation Group members] were angry because they felt a betrayal of trust, … that their records were being taken and they were being told that it was for scientific purposes when actually the government were selling it on. They were fearful that insurance companies could actually access their individual record, without permission. (GP 3)

Although the researchers interviewed considered information governance procedures in the NHS strict, to the extent that they restricted 'worthwhile' research, they nonetheless believed that it was important for them to be strict so that data did not get into the 'wrong hands or isn't used for the wrong purposes' (Researcher 2). They thought that it was better to be over-cautious, especially when the omnishambles of care.data had already had a negative impact on their work. Others understood information governance not as a barrier but as a kind of "ethical data hygiene": necessary preparatory steps that researchers should take to make sure that their use of data is socially safe and ethically sound, 'like washing your hands before having dinner' (Researcher 6).

… 5 years ago if you wanted to access any of these data sources was much … easier, the process now is lengthy … maybe because of care.data, but it is something that is supposed to be like this so … you cannot say that it made it bad, it changed it … (Researcher 5)

..some organisations pulled out of [the database], took their consent back and it stopped us really being able to go out there and promote it … But it was just so stupid the way they did it, I mean they really did treat people as though they just had the right to do all these things.

Protecting public healthcare from predatory market forces

The care.data controversy, and the resulting resistance to the proposed entanglement of science, technology, politics, economics and ethics for NHS patient data assetisation, moved beyond the ethics of individual privacy and confidentiality. It called for an examination of how the proposed arrangements for data exploitations would continue to support other actor-networks entangled with this one, such as access to (public) healthcare and also responsible scientific research. Here, public alienation was fuelled not by profit-making per se (see also Aitken et al. 2016) but by a lack of transparency and engagement in deliberations about the 'ethos of investment' and innovation (Muniesa 2017, Marris 2015) from these datasets. They felt uneasy with the idea that these public datasets could be used by private actors to reap the economic benefits directly, without giving back to the country that had made them available in the first place. Interviewees thought that the concept of care.data was good, notwithstanding issues of communication and meaningful choice.
Citizens and some GPs accounted for their criticism in terms of an 'execution [that was] much too commercially focused and it [seemed] to be much more for the benefit of the private sector than the benefit of the health for people in the UK' (Citizen 4).

… and the metaphor of course is the immortal life of Henrietta Lacks. So the idea that someone could take something made out of British healthcare records and become fabulously wealthy is something that grates with the average GP and patient. (GP 4)

This is not to say that private involvement in the NHS was entirely unacceptable to participants. It was that making the selling of data 'an industry [was] the wrong approach' (Citizen 6). They refused to accept the attempted stabilisation of this network through a silencing of their objections, as well as private actors obtaining more power and knowledge of the NHS than could possibly be justified (Callon 1984). They feared that the NHS is threatened by the introduction of a private insurance model that would create additional problems of access to healthcare for many. Care.data was, therefore, seen not as a big data scheme that would ultimately benefit people, but as one that would also facilitate the demise of the public character of the NHS via the provision to private companies of 'information at population wide level ... to be able to have enough business intelligence .. to understand how the system works' (Citizen 2). Most citizens and some frontline GPs were worried that this private involvement was more likely to cause, rather than solve, problems that may be detrimental to the sustainability of the NHS (see also Montgomery 2017).

Because I don't think there's any trust. I think you can say, giving it to these big pharmaceuticals, giving it to [private healthcare provider], increase the economy, but people say, well I don't want [private healthcare provider] to have this information, I don't want them to be any more better off and in a position to cause detriment to the NHS. (GP 2)

What are big pharma using these data for? Well, they're using them … for marketing, which is … objectionable. So, if they've got a new, expensive antidepressant that's not being prescribed to certain people, they could find that out from the GP data, then adjust their marketing to try and cost the NHS more money. (Researcher 1)

Such covert profit-making practices based on these data are antithetical to the founding principles of the NHS. Care.data was seen as a new database which would have brought together different social, technical and regulatory actors to form new heterogeneous networks, expanding the scope of questionable secondary uses of data and exerting new powers through specific orderings, as one GP suggested. For most participants, if any kind of exploitation was to take place, it should not be at the expense of the individual, whether that was advertising private healthcare, stopping welfare benefits or excluding patients from care.

I wonder if new databases make new rules and new rules means the ability to do things that other databases might not do, like export data to other institutions, countries, private companies, etc. which is something that existing databases might not do. So the conspiracy theorist in me wonders whether this is something that's economically driven. (GP 4)

I don't believe that it would benefit people with chronic long-term health conditions in any way.
I think the insurance companies will cherry pick healthy people to provide coverage to and people like me will be left with no healthcare or bottom of the barrel scraps of healthcare, like in America. (Citizen 4)

Citizens expected that their responsibility to share their data would be matched by the responsibility of data users to share the NHS's foundational values (e.g. universal care). These private actors were not understood as residing outside society; they were seen as members of the same assemblage, expected to continue performing the meanings and values associated with the NHS. If not, then the opt out is mobilised as an OPP for safeguarding the NHS and protecting its assets from unacceptable political and market forces (Latour 1988).

Reclaiming responsibility from institutional decision-makers

Care.data became a question of what kind of normativities should accompany the ontologies of patients-citizens, datasets, regulations and institutional decision-makers. The controversy was the outcome of two different 'ontonorms' in tension (Mol 2013). On the one hand, these members of society enacted the patient-citizen as a societally active subject in an accountable and mutually beneficial data sharing relationship with institutional decision-makers and specific data actors, based on agreed purposes of use as well as the distribution of benefits. On the other hand, the patient-citizen was scripted (Akrich 1992), via the opt-out, as an individual passive object of data extractions by representative (unaccountable) institutional decision-makers. In anticipation of maximum return on investment and potential wider societal benefits, as well as risks of abuse, data exploitations had to be left fluid and open to unknown purposes. However, care.data, as a sociotechnical actor-network in the wider governmental network, was not criticised in isolation, but in relation to other, past and present, translations (Callon 1984) that lacked wider public and clinical support: national technology programme failures (e.g. NPfIT, SCR) (cf. Hays and Daker-White 2015, Greenhalgh et al. 2010) or controversial government data sharing schemes (e.g. Home Office, Google DeepMind) (see also Sexton et al. 2017). This entanglement not only destabilised this network but, more importantly, called for the re-problematisation of NHS patient data exploitations as an issue of institutional trustworthiness and sociotechnical competence, necessitating a public to rise to the occasion (Marres 2005), take responsibility for its data and reclaim a more mediating, and OPP, role (Callon 1984) in decision-making about data releases. For example, some participants, especially those working in the NHS or in the information governance sector, expressed their disappointment at the fact that 'privacy by design' (Citizen 7) was missing from NHS patient data systems. Tracking responsibility, or exercising control, was becoming increasingly difficult in public services with too many databases, where the burden is always placed on citizens 'to have to know them all, know where the fair processing notice is, know how to opt out' (Citizen 1).

It wasn't necessarily care.data exclusively ... the rest of government is no better than the NHS was. We've seen that the rest of government is just as screwed up with data as HSCIC was two years ago. The problem is that people cared about the NHS more and the NHS was first. It was not worse. (Citizen 5)

I don't trust anybody with it, because I'm a Data Protection Officer, and I know what mistakes are made.
The fact that I can no longer trust the NHS, really disappoints me, … and I would probably trust someone like [grocery retailer] to look after my data … because to them, it's money, they can't afford data breaches, because of their reputational damage, and the costs that they might incur. I think public sector people have been too lackadaisical, and have not realised the value of data. (Citizen 1)

Most citizens were not confident that they could trust institutions such as HSCIC or the Information Commissioner's Office (ICO) to act as their 'spokespersons' (Callon 1984). For example, the current NDG was highly regarded by participants due to her experience in the field of information governance in healthcare as well as her 'code of ethics' (Citizen 4), but, contrary to the recent NDG review (Lea 2016), most citizens felt that some institutions' proclaimed independence and trustworthiness had been compromised too many times in the past by government. Therefore, they were not 'complacent in giving up individual rights to an institution' (Citizen 2).

… HSCIC, they have no credibility from my perspective because they're not independent … I think I would place my trust in [Caldicott review body] because they have raised these issues and Dame Fiona Caldicott has had a critical independent stance on care.data … But ideally, I would like to see individual patients being able to take control of their care record. (Citizen 3)

I don't think the [ICO], for example, appears to instil a great deal of fear into corporations. I think they know that they are an over-stretched organisation … I just think they're not the best. They may have some of the sort of legal powers behind them, but I don't think they make much use of them, and I think what powers they have are fairly ineffectual. (GP 5)

Unlike other programmes of data collection for scientific analyses, the stability and durability of this network required constant local work and maintenance from all actors involved for the continuous extraction of patient data. The expectation was of an ongoing relationship between patients, the NHS and institutional decision-makers. In effect, consent did not exist a priori but was understood as an ongoing relational achievement of stable socio-material relationships (Mol 2002; Leonelli 2016) across and beyond the NHS. Citizens were aware of the technical and financial challenges in setting up and maintaining these relationships. They also understood that not everyone might be sufficiently 'educated' or 'articulate' to make informed decisions. However, they considered a 'one size that fits all' (Citizen 1) model of control and consent unsatisfactory. They would like to see a model where the individual has the opportunity to make a choice, while for those who do not wish to take up control, other arrangements, such as 'the GP to have control of that data' (Citizen 3), can be put in place. Within this more collaborative governance framework, institutional experts could provide them with their informed opinions about potential uses and users of their data, but it should be 'the people with the final say over where their information goes' (Citizen 9).

Conclusion

Care.data was a big data programme that attempted to disrupt and re-configure, unsuccessfully, the current epistemological, political, economic and social practices of big data-based research in biomedicine (Leonelli 2016).
By mobilising normative assumptions about citizenship and economics (Woods 2016), it brought new expectations and promises (Brown 2007) of innovative research for capitalising NHS patient data. However, just as with other national (EHR) centralised implementations, it took 'on a civic character' (O'Doherty et al. 2011: 368) and followed an unpredictable path through a complex and dynamic storm of social, technical, ethical and legal challenges (Greenhalgh et al. 2010), translated as decisions about which material-semiotic reality should be made durable and visible, and which not (Strathern 2000). We used an ANT approach to illuminate care.data's failure to consider the fact that any values of such programmes for the public good are not inherent and, therefore, accepted as given by the sociomaterial networks they attempt to re-configure. They are always enacted in practice by specific stabilised networks against a backdrop of many other possibilities (Mol 2002). In developing one of the biggest healthcare databases in the world, embedded norms and values need to be translated, as differences can be expected among ambivalent actors (Singleton and Michael 1993) who are provoked to react (Callon 1984). From an STS perspective, care.data can be described as a typical example of an imposed 'sociotechnical imaginary' (Jasanoff and Kim 2009) that, for sceptical voices, was more of an institutional reassurance of self-acquired ethical legitimacy. It failed to engage with the full sociomateriality of NHS patient datasets and failed to consider (in public deliberation and policymaking) all matters of concern, entanglements of values (Dussauge et al. 2015), and affective attachments from, and invisible work by, those who care for the production of the data (de la Bellacasa 2011). Therefore, it could not develop collectively a wider common vision and practice for data exploitations by debating the extent to which (and by whom) the marketisation of datasets and the public good will coincide (Gardner and Webster 2017). Data research charities and institutions, private companies and government departments failed to eliminate the stark contrast between a 'regime of hope' (Moreira and Palladino 2005, Brown 2007) for speculative investments in scientific discoveries in the future and a 'regime of truth' where, for some of our participants, the risks of data exploitations, knowledge monetisation and the privatisation of NHS services were already evident. By focusing too much on raising awareness of the scientific value of these datasets, which none of our participants rejected in principle, they failed to reach consensus about other important 'matters of concern' (Latour 2004b). A convincing discourse and practice, for example, in support of the public character of NHS services, or of how these datasets would be protected from profitable 'closed-data and closed-algorithm business models in health' (Wilbanks and Topol 2016: 347), was missing. The programme aimed at securing the bare minimum of trust while maximising potential returns on investment. It thus quickly dismissed privacy and respect for individual autonomy as individualistic rights opposing wider prosperity, rather than seeing them as principles of social trust and public engagement (Taylor 2014). The rigidness of the programme's design and the 'blurring of the distinction between ownership of technical innovation and ownership of information' (Munns and Basu 2013: 130-131) propagated the 'myth' of a paranoid public of free riders.
At the same time, shifts in alliances between the NHS, patients-citizens and unknown future users (Zuboff 2015) of these datasets were taking place beyond the context of public healthcare. In that respect, the care.data controversy was also 'about power, both power over data and power over the outputs data can produce' (Taylor 2016: 11). It was about choice with regards to the kinds of logics and subsequent entanglements of exploitation in which concerned citizens, GPs and even health data researchers were expected to enrol (Callon 1984) without any 'spaces of contestation' (Barry 2002: 270). The programme instilled in parts of the public a sense that it was a politically and economically driven project, rather than a clinical-scientific one that would benefit patients-citizens (Greenhalgh et al. 2010). The opt-out was the materialisation of critique, protection and distrust towards institutional representatives and designers of such programmes (Greenhalgh et al. 2010) that extended 'beyond individual modes of opting out to collective forms of conscientious objection' (Benjamin 2016: 968). It was a material and affective commitment by dissenters to mediate between prescriptive and exclusionary science and policy normativities (Jasanoff and Kim 2009, Welsh and Wynne 2013) and to affirm that these datasets are utilised for wider public benefit, with minimal social and economic harm for patients-citizens. As such, it was an ethical, social, political and economic collective activity against a "conscripted marketisation" of NHS patient data. Personal responsibility and control were mobilised as a way for the public to maintain its OPP role (Callon 1984) for secondary data uses as well as for adequate representation of the public's interests and values (Gottweis et al. 2011) when it comes to the distribution of public benefits (Benjamin 2016). In effect, it moved the discussion from the ethics of public altruism to questions around loyalty to the NHS and the trustworthiness and transparency of the people and practices behind such programmes to represent public interests and values (Zuboff 2015). As Winickoff (2016: 54) argues, 'benefit sharing attempts to stitch a distributive norm at the seam of the market and gift economies'. It calls for both distributive justice and power (O'Doherty et al. 2011), away from individualistic understandings of autonomy and protection. In this way, we can begin exploring ways of including in governance structures (of national big data programmes) 'the collectivity as sovereign ethical subject' (Hayden 2007: 744). We can then develop, for example, a 'partnership governance' (Winickoff 2016) that goes beyond occasional public consultations, to provide the public with opportunities to co-configure public meanings, risks and values of these datasets (Jasanoff and Kim 2009), co-formulate governance policies (cf. Dove et al. 2012) and exert a share in decision-making on how best to distribute the benefits they have co-produced (Winickoff 2016). These would be based on mutually agreed boundaries of data exploitations and standards of responsive accountability (Welsh and Wynne 2013). Based on our findings (and the fate of care.data) we suggest that choice should be supported to act as the necessary 'mediator' (Latour 1988) that could bring this material-semiotic network together.
As de la Bellacasa (2011) would assert, establishing and maintaining the technoscientific assemblage of a national healthcare database should not be about deprecating concerns, distorting agendas (Galis and Lee 2014) or excluding from translations those who disagree with it, based on binary (moral and epistemological) understandings of caring (or not) to 'save lives'. If treated as a collective action in the context of deliberative politics (Welsh and Wynne 2013), choice could act as, for example, a barometer of 'social acceptability' (Floridi and Taddeo 2016) for big data programmes in healthcare: not only the ethical right of the public but also a distributed form of enforced public awareness, accountability, oversight and social responsibility. In a political-economic environment where a clear determination of 'public interest' is lacking, growing commercial interests are increasingly structured around proprietorial control of such data assets (Taylor 2016) and the knowledge produced for the financialisation of biomedicine (Birch 2017). Data collections are increasing in size and scope across and beyond healthcare, while public confidence in privacy protection is falling (Phillips et al. 2017). Simultaneously, shared decision-making between clinicians and patients is promoted as a move from a paternalistic to a more patient-centred model of communication (Spatz et al. 2016). Removing a meaningful and collectively agreed opt-out confines data users' relationship with the public solely to the communication of benefits and assurances of sufficient data protection practices (cf. Lea 2016). This may result in another form of (data) paternalism and a 'governance by elites' (Woods 2016) model of NHS patient data use, where data users and policy-makers set the usage agenda and the conditions of public research participation, increasing the 'agency gap' (Winickoff 2016). Consequently, public distrust of and resistance to national big data programmes in healthcare grow.
$\mathbb{Z}_2\times \mathbb{Z}_2$-graded supersymmetry: $2$-d sigma models

We propose a natural $\mathbb{Z}_2 \times \mathbb{Z}_2$-graded generalisation of $d=2$, $\mathcal{N}=(1,1)$ supersymmetry and construct a $\mathbb{Z}_2^2$-space realisation thereof. Due to the grading, the supercharges close with respect to, in the classical language, a commutator rather than an anticommutator. This is then used to build classical (linear and non-linear) sigma models that exhibit this novel supersymmetry via mimicking standard superspace methods. The fields in our models are bosons, right-handed and left-handed Majorana-Weyl spinors, and exotic bosons. The bosons commute with all the fields, the spinors belong to different sectors that cross commute rather than anticommute, while the exotic bosons anticommute with the spinors. As a particular example of one of the models, we present a 'double-graded' version of supersymmetric sine-Gordon theory.

Introduction and Preliminaries

1.1. Introduction. Inspired by the recent developments in both $\mathbb{Z}_2^n$-geometry (see [7,10,11,12,14,15]) and the appearance of $\mathbb{Z}_2^n$-gradings in theoretical physics (see [1,2,3,4,9,8]), we propose a natural generalisation of the $d=2$, $\mathcal{N}=(1,1)$ supersymmetry algebra that is $\mathbb{Z}_2 \times \mathbb{Z}_2$-graded, or more colloquially, double-graded. Taking this new $\mathbb{Z}_2^2$-Lie algebra ($\mathbb{Z}_2^2 := \mathbb{Z}_2 \times \mathbb{Z}_2$) as our starting point, we develop 'superspace' methods to allow us to define double-graded versions of two-dimensional supersymmetric sigma models. Sigma models have a long history, starting with Gell-Mann & Lévy (see [20]), and have provided many interesting links between field theory and differential geometry. Good reviews of supersymmetric sigma models can be found in [17,19]. We also remark that two-dimensional supersymmetry algebras play a fundamental rôle in superstring theory. Importantly, under quite general assumptions, two-dimensional field theories are renormalisable, including those with highly non-linear interactions. We will not touch upon quantisation in this paper. Part of the motivation for this work was to extend the classical models that have $\mathbb{Z}_2^2$-supersymmetry as first defined by the author (in four dimensions) in [7]. This gauntlet was picked up by Aizawa, Kuznetsova & Toppan, who showed that there is a plethora of mechanical models, both classical and quantum, that do indeed exhibit this kind of supersymmetry (see [3,4]). The first quantum mechanical model, a direct generalisation of Witten's model [33], was given by the author and Duplij (see [8]). Given that the spin-statistics correspondence is not necessarily true in less than three spatial dimensions, it is plausible that this novel $\mathbb{Z}_2^2$-supersymmetry could be realised in experimental condensed matter physics. Thus, getting a handle on classical and quantum field theories in low dimensions that are $\mathbb{Z}_2^2$-supersymmetric is of potential importance in physics. The models we construct here are $1+1$-dimensional, and we must mention that many systems have effectively one spatial dimension, such as quantum wires, carbon nanotubes, and edge states in quantum Hall systems and topological insulators. Non-linear sigma models have long been applied to condensed matter physics. As the field theories we present are inherently two-dimensional, it is particularly convenient to use Dirac's light-cone coordinates (see [18]), and this is reflected in our initial definitions.
Not only does this choice of coordinates shorten some expressions as compared to writing them in inertial coordinates, but it also makes the transformation properties of various expressions under two-dimensional Lorentz boosts clear. The basic component fields in our models are bosons, right-handed Majorana-Weyl spinors, left-handed Majorana-Weyl spinors, and exotic bosons. Typically, the right-handed and left-handed spinors belong to different sectors that cross commute, while the exotic bosons anticommute with the spinors. The models will consist of dynamical bosons and fermions together with non-propagating exotic bosons, or dynamical exotic bosons and fermions together with non-propagating bosons. The former is more natural from our perspective of sigma models. We refer to models with propagating exotic bosons as "exotic models". The reader should, of course, be reminded of Green-Volkov parastatistics (see [22,31]). The rôle of $\mathbb{Z}_2^n$-gradings in parastatistics and parasupersymmetry has long been recognised (see, for example, [30,34,35]). However, as shown in [8], the $\mathbb{Z}_2^2$-Lie algebras we study are not the same as those found in parasupersymmetry.

Arrangement. For the remainder of this section we recall the notion of a $\mathbb{Z}_2^2$-Lie algebra and remind the reader of the basics of $\mathbb{Z}_2^2$-geometry as needed for the main part of the paper. In Section 2 we define the $d=2$, $\mathcal{N}=(1,1)$ $\mathbb{Z}_2^2$-supertranslation algebra (Definition 2.1) and then present a representation of this algebra on a $\mathbb{Z}_2^2$-manifold (see Definition 2.2). That is, we construct the $\mathbb{Z}_2^2$-supercharges, etc. as vector fields on a $\mathbb{Z}_2^2$-graded version of two-dimensional super-Minkowski spacetime. We build some aspects of "$\mathbb{Z}_2^2$-space methods" that we then apply to construct sigma models that are $\mathbb{Z}_2^2$-supersymmetric (see Definition 3.3 and Definition 3.7) in Section 3. We end the main text with a few closing remarks in Section 4. Two appendices are included: Appendix A recalls the definition of the $\mathbb{Z}_2^2$-Berezinian, and Appendix B covers Berezin integration on a $\mathbb{Z}_2^n$-space with two degree zero coordinates and one coordinate of each of the non-zero degrees. The general theory of integration on $\mathbb{Z}_2^n$-domains, or even just $\mathbb{Z}_2^2$-domains with more than one coordinate of each non-zero degree, is very much work in progress. Appendix B is included to prove that the integration method used in this paper is mathematically sound.

Here we recall the notion of a $\mathbb{Z}_2^2$-Lie algebra (see [28,29]). A $\mathbb{Z}_2^2$-graded vector space is a vector space (over $\mathbb{R}$ or $\mathbb{C}$) that is the direct sum of homogeneous vector spaces $g = g_{00} \oplus g_{11} \oplus g_{01} \oplus g_{10}$. Note that we have fixed an ordering for the elements of $\mathbb{Z}_2^2$ and that other orderings do appear in the literature. We will denote the $\mathbb{Z}_2^2$-degree of an element $a \in g$ as $\deg(a) \in \mathbb{Z}_2^2$. The standard scalar product on $\mathbb{Z}_2^2$ we denote by $\langle -,- \rangle$. That is, if $\deg(a) = (\gamma_1, \gamma_2)$ and $\deg(b) = (\gamma'_1, \gamma'_2)$, then $\langle \deg(a), \deg(b) \rangle = \gamma_1 \gamma'_1 + \gamma_2 \gamma'_2$. A $\mathbb{Z}_2^2$-graded vector space has a decomposition into its even and odd subspaces, defined by the total degree, $g_{ev} := g_{00} \oplus g_{11}$, $g_{od} := g_{01} \oplus g_{10}$.

Definition 1.1. A $\mathbb{Z}_2^2$-Lie algebra is a $\mathbb{Z}_2^2$-graded vector space equipped with a bi-linear operation, $[-,-]$, such that for homogeneous elements $a$, $b$ and $c \in g$, the following are satisfied:

(1) $\deg([a,b]) = \deg(a) + \deg(b)$;
(2) $[a,b] = -(-1)^{\langle \deg(a), \deg(b) \rangle}\,[b,a]$;
(3) $[a,[b,c]] = [[a,b],c] + (-1)^{\langle \deg(a), \deg(b) \rangle}\,[b,[a,c]]$.

Extension to inhomogeneous elements is via linearity. We have written the Jacobi identity for a $\mathbb{Z}_2^2$-Lie algebra in Loday-Leibniz form, though due to the symmetry of the Lie bracket one can recast this in a more traditional form.
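To make the sign rule concrete, the commutation factor $(-1)^{\langle \alpha, \beta \rangle}$ between the four homogeneous sectors can be tabulated directly from the standard scalar product; a '+' entry means the two sectors commute and a '-' entry that they anticommute:

\[
\begin{array}{c|cccc}
(-1)^{\langle \alpha, \beta \rangle} & (0,0) & (1,1) & (0,1) & (1,0) \\ \hline
(0,0) & + & + & + & + \\
(1,1) & + & + & - & - \\
(0,1) & + & - & - & + \\
(1,0) & + & - & + & - \\
\end{array}
\]

In particular, elements of degree $(0,1)$ and $(1,0)$ square to zero yet mutually commute, while degree $(1,1)$ elements commute among themselves but anticommute with both spinor sectors; this is exactly the behaviour of the component fields described above.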
Moreover, generalising this definition to $\mathbb{Z}_2^n$-Lie algebras is straightforward.

1.3. Elements of $\mathbb{Z}_2^n$-geometry. The locally ringed space approach to $\mathbb{Z}_2^n$-manifolds was pioneered by Covolo, Grabowski and Poncin (see [14]). We restrict attention to real $\mathbb{Z}_2^n$-manifolds and will not consider the complex analogues. A locally $\mathbb{Z}_2^n$-ringed space is a pair $S := (|S|, \mathcal{O}_S)$, where $|S|$ is a second-countable Hausdorff topological space and $\mathcal{O}_S$ is a sheaf of $\mathbb{Z}_2^n$-graded, $\mathbb{Z}_2^n$-commutative associative unital $\mathbb{R}$-algebras, such that the stalks $\mathcal{O}_{S,p}$, $p \in |S|$, are local rings. Here, $\mathbb{Z}_2^n$-commutative means that any two sections $a, b \in \mathcal{O}_S(|U|)$, $|U| \subset |S|$ open, of homogeneous degree $\deg(a)$ and $\deg(b) \in \mathbb{Z}_2^n$, respectively, commute with a Koszul sign rule defined by the standard scalar product: $ab = (-1)^{\langle \deg(a), \deg(b) \rangle}\, ba$. We need to fix a convention on the order of elements in $\mathbb{Z}_2^n$; we do this by filling in zeros from the left and ones from the right, and then putting the elements with zero total degree at the front while keeping their relative order. For example, and pertinent for this paper, we order the elements of $\mathbb{Z}_2^2$ as $\mathbb{Z}_2^2 := \mathbb{Z}_2 \times \mathbb{Z}_2 = \{(0,0), (1,1), (0,1), (1,0)\}$, which, of course, agrees with how we ordered the homogeneous subspaces of a $\mathbb{Z}_2^2$-graded vector space. A tuple $\mathbf{q} = (q_1, q_2, \cdots, q_N)$, where $N = 2^n - 1$, provides all the information about the non-zero degree coordinates, which we collectively write as $\xi$. A $\mathbb{Z}_2^n$-manifold of dimension $p|\mathbf{q}$ is locally modelled on $\mathbb{R}^{p|\mathbf{q}} := (\mathbb{R}^p, C^\infty_{\mathbb{R}^p}[[\xi]])$. Here $C^\infty_{\mathbb{R}^p}$ is the structure sheaf on the Euclidean space $\mathbb{R}^p$. Local sections of $\mathbb{R}^{p|\mathbf{q}}$ are formal power series in the $\mathbb{Z}_2^n$-graded variables $\xi$ with smooth coefficients, i.e., of the form $\sum_{\alpha} \xi^{\alpha} f_{\alpha}(x)$. This local diffeomorphism allows the construction of a local coordinate system. We write the local coordinates as $x^A = (x^a, \xi^i)$. The commutation rules for these coordinates are given by the Koszul sign rule, $x^A x^B = (-1)^{\langle \deg(x^A), \deg(x^B) \rangle}\, x^B x^A$. Changes of coordinates, i.e., different choices of the local isomorphisms, can be written (using standard abuses of notation) as $x^{A'} = x^{A'}(x)$, where we understand the changes of coordinates to respect the $\mathbb{Z}_2^n$-grading. Note that generically we have a formal power series rather than just polynomials. We will refer to global sections of the structure sheaf of a $\mathbb{Z}_2^n$-manifold as functions and employ the standard notation $C^\infty(M) := \mathcal{O}_M(|M|)$.

Example 1.4 ($\mathbb{Z}_2^n$-graded Cartesian spaces). Directly from the definition, $\mathbb{R}^{p|\mathbf{q}} := (\mathbb{R}^p, C^\infty(\mathbb{R}^p)[[\xi]])$ is a $\mathbb{Z}_2^n$-manifold. Global coordinates $(x^a, \xi^i)$ can be employed, where the coordinate map is just the identity. In this paper, we will only meet $\mathbb{Z}_2^2$-manifolds that are globally isomorphic to $\mathbb{R}^{2|1,1,1}$.

We have a chart theorem ([14, Theorem 7.10]) that allows us to uniquely extend morphisms between the local coordinate domains to morphisms of locally $\mathbb{Z}_2^n$-ringed spaces. That is, we can describe morphisms using local coordinates. Naturally, the $\mathbb{Z}_2^n$-degree of a coordinate must be preserved under the pull-back by a morphism. A vector field on a $\mathbb{Z}_2^n$-manifold is a $\mathbb{Z}_2^n$-derivation of the global functions. Thus, a (homogeneous) vector field is a linear map $X : C^\infty(M) \rightarrow C^\infty(M)$ satisfying the graded Leibniz rule $X(fg) = X(f)\,g + (-1)^{\langle \deg(X), \deg(f) \rangle} f\, X(g)$ for any homogeneous $f$ and not necessarily homogeneous $g \in C^\infty(M)$. It is easy to check that the space of vector fields has a (left) $C^\infty(M)$-module structure. We denote the module of vector fields as $\mathrm{Vect}(M)$. An arbitrary vector field can be 'localised' (see [15, Lemma 2.2]) in the sense that given $|U| \subset |M|$ there always exists a unique derivation $X|_{|U|}$ such that $X(f)|_{|U|} = X|_{|U|}(f|_{|U|})$.
Because of this local property, it is clear that one has a sheaf of $\mathcal{O}_M$-modules formed by the local derivations; this defines the tangent sheaf of a $\mathbb{Z}_2^n$-manifold (see [15, Definition 5]). Moreover, this sheaf is locally free. Thus, we can always write a vector field locally as $X = \sum_A X^A(x)\, \frac{\partial}{\partial x^A}$, where the partial derivatives are defined as standard for the coordinates of degree zero and are defined algebraically for the remaining coordinates. We drop the explicit reference to the required restriction, as is standard in differential geometry. The order of taking partial derivatives matters, but only up to sign factors. Equipped with the graded commutator $[X,Y] := X \circ Y - (-1)^{\langle \deg(X), \deg(Y) \rangle}\, Y \circ X$, $\mathrm{Vect}(M)$ becomes a $\mathbb{Z}_2^n$-Lie algebra (see [16,28,29]). The grading and symmetry of the Lie bracket are clear, and one can directly check that the Jacobi identity holds. There are now many other technical results known about $\mathbb{Z}_2^n$-manifolds, most of which we will not require in the rest of this paper. The interested reader can consult [10,11,12].

2.1. The $\mathbb{Z}_2^2$-graded supertranslation algebra. Following our earlier work [7], we propose the following $\mathbb{Z}_2^2$-Lie algebra as the starting place for our constructions.

Definition 2.1. The $d=2$, $\mathcal{N}=(1,1)$ $\mathbb{Z}_2^2$-supertranslation algebra is the $\mathbb{Z}_2^2$-Lie algebra with 5 generators, assigned the degrees of a pair of right-handed and left-handed vectors (degree $(0,0)$), a pair of right-handed and left-handed Majorana-Weyl spinors (degrees $(0,1)$ and $(1,0)$) and a Lorentz scalar (degree $(1,1)$), where they are assumed to transform under 2-d Lorentz boosts (here $\beta$ is the rapidity) with the corresponding weights.

This definition has been chosen to exploit light-cone coordinates and to easily identify the action of Lorentz boosts on various expressions later in this paper. In all, we see that the generators consist of a pair of right-handed and left-handed vectors, a pair of right-handed and left-handed Majorana-Weyl spinors and a Lorentz scalar. It is a direct exercise to check that the appropriately graded Jacobi identity holds. For brevity, we will refer to the $\mathbb{Z}_2^2$-supertranslation algebra.

2.2. A $\mathbb{Z}_2^2$-graded Cartesian space realisation. One could use the Baker-Campbell-Hausdorff formula to formally 'integrate' the $\mathbb{Z}_2^2$-Lie algebra structure to obtain a $\mathbb{Z}_2^2$-Lie group, i.e., a group object in the category of $\mathbb{Z}_2^2$-manifolds. To do this carefully, just as in the standard supercase, one requires the use of the functor of points or $\Lambda$-points (see [12]). However, here we can make an educated postulate of the required $\mathbb{Z}_2^2$-space (see Example 1.4).

Definition 2.2. The $\mathbb{Z}_2^2$-graded Cartesian space $\mathbb{R}^{2|1,1,1}$, equipped with global coordinates $(x^-, x^+, z, \theta^-, \theta^+)$ of degrees $(0,0)$, $(0,0)$, $(1,1)$, $(0,1)$ and $(1,0)$, respectively, will be referred to as $d=2$, $\mathcal{N}=(1,1)$ $\mathbb{Z}_2^2$-Minkowski spacetime and shall be denoted $M^{(1,1)}_2$.

Clearly, from the definition, $|M^{(1,1)}_2| = \mathbb{R}^2$. Under Lorentz boosts these coordinates transform as $x^\mp \mapsto e^{\mp\beta}\, x^\mp$, $\theta^\mp \mapsto e^{\mp\beta/2}\, \theta^\mp$ and $z \mapsto z$; again, $\beta$ is the rapidity. Thus, the underlying reduced manifold is two-dimensional Minkowski spacetime, here explicitly given in light-cone coordinates. For brevity, we will refer to $\mathbb{Z}_2^2$-Minkowski spacetime. Vector fields furnishing a representation of the $\mathbb{Z}_2^2$-supertranslation algebra (see Definition 2.1) can be written down directly in these coordinates; the reader can easily verify that they have the right transformation properties under a Lorentz boost. The $\mathbb{Z}_2^2$-supertranslations act on $M^{(1,1)}_2$ through these vector fields, and below we check the invariance of the coordinate Berezin volume under them.
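For later computations it is useful to record the coordinate relations on $M^{(1,1)}_2$, which follow directly from the Koszul sign rule; note that $z$ commutes with itself and so is not nilpotent, which is why functions are formal power series in $z$:

\[
\deg(x^\mp) = (0,0), \quad \deg(z) = (1,1), \quad \deg(\theta^-) = (0,1), \quad \deg(\theta^+) = (1,0),
\]
\[
(\theta^-)^2 = (\theta^+)^2 = 0, \qquad \theta^-\theta^+ = \theta^+\theta^-, \qquad z\,\theta^\mp = -\,\theta^\mp z, \qquad z^2 \neq 0 .
\]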
Integration on general $\mathbb{Z}_2^n$-manifolds is still very much work in progress. However, the situation for low dimensional $\mathbb{Z}_2^2$-domains is understood (see [27]). We will present the details of the soundness of the definition of the Berezin integral in Appendix B. Here we give the definition and prove that the coordinate Berezin volume is well-behaved with respect to Lorentz boosts and $\mathbb{Z}_2^2$-supertranslations. Two-dimensional Minkowski spacetime is orientable, and we will assume, without further reference, that an orientation has been chosen.

An integrable Berezin section on $M^{(1,1)}_2$ is a compactly supported Berezin section (not necessarily homogeneous) $\sigma = D[x^-, x^+, z, \theta^-, \theta^+]\, \sigma(x, z, \theta)$, where $D[x^-, x^+, z, \theta^-, \theta^+]$ is the coordinate Berezin volume; it transforms under general coordinate changes via the $\mathbb{Z}_2^2$-Berezinian, $\mathbb{Z}_2^2\mathrm{Ber}$, the $\mathbb{Z}_2^2$-graded generalisation of the Berezinian (see [13,16,27] for further details). There is some slight abuse of notation here, as in general, we need not assume that the 'primed' coordinates on the reduced manifold are light-cone coordinates. The Berezin integral of $\sigma$ is then defined as the integral over $\mathbb{R}^2$ of the coefficient $\sigma_{+-}$ of $\theta^-\theta^+$, with $z$ set to zero. The final integral over $\mathbb{R}^2$ is well-defined as we have taken all the components of an integrable Berezin density to be compactly supported. In particular, $\sigma_{+-}$ is compactly supported. As we will not consider general coordinate transformations in this paper, we will not need to be more explicit here with the $\mathbb{Z}_2^2$-Berezinian, and we simply point the interested reader to the literature and Appendix A. However, we will show that the coordinate Berezin volume is invariant under (infinitesimal) $\mathbb{Z}_2^2$-supertranslations. To do this we will use the generalisation of Liouville's formula. Thus, we write $J_S = \mathbb{1}_{5\times 5} + A_- + A_+$, noting that the matrices $A_-$ and $A_+$ are infinitesimal. Thus, we can use the relation between the $\mathbb{Z}_2^2$-Berezinian and the $\mathbb{Z}_2^2$-trace (see [16, Section 6]) to deduce that $\mathbb{Z}_2^2\mathrm{Ber}(J_S) = 1 + \mathbb{Z}_2^2\mathrm{tr}(A_-) + \mathbb{Z}_2^2\mathrm{tr}(A_+) = 1$, as the $\mathbb{Z}_2^2$-graded trace is essentially the sum of $\pm$ the diagonal entries (see [16, Section 2]). Thus, the coordinate Berezin volume is invariant under $\mathbb{Z}_2^2$-supertranslations. As for Lorentz boosts (2.1), the Jacobi matrix $J_L$ is diagonal. As the $\mathbb{Z}_2^2$-Berezinian of a diagonal matrix is just the product of the diagonal entries, we see that $\mathbb{Z}_2^2\mathrm{Ber}(J_L) = 1$ (this follows from the definition, see [16]). Thus, the coordinate Berezin volume is invariant under Lorentz boosts.

2.4. The Covariant Derivatives. In standard supersymmetry, the origin of the SUSY covariant derivatives is the fact that the partial derivatives with respect to the odd coordinates of superspace are not invariant under supertranslations. The same is true in the current situation, and it is easy enough to see that the required covariant derivatives $D_-$ and $D_+$ exist and are themselves invariant under $\mathbb{Z}_2^2$-supertranslations.

We now consider maps $\Phi : M^{(1,1)}_2 \rightarrow M$, where $M$ is a smooth manifold, or more generally a $\mathbb{Z}_2^2$-manifold, or possibly some other "space" for which we can make sense of maps. Note that we take all maps and not just those that preserve the $\mathbb{Z}_2^2$-degree. To make this precise, we need the internal homs; however, we will work formally, as is customary in physics. The interested reader can consult Lledo [24] for details of the standard supercase. That is, we will allow ourselves the luxury of allowing external parameters that carry non-trivial $\mathbb{Z}_2^2$-degree. Let us fix some finite-dimensional smooth manifold $M$, and consider an open $U \subset M$ "small enough" so that we can employ local coordinates $y^a$. Then we write (with the standard abuses of notation) $\Phi^* y^a = \Phi^a(x^-, x^+, z, \theta^-, \theta^+)$. We will avoid global issues in this paper and just remark that any map $\Phi$ has such a local representative and that a family of such local maps can be glued together to form a global map. We will take the $\mathbb{Z}_2^2$-superfield to be a scalar with respect to the 2-d Lorentz transformations. Moreover, because we have taken the target to be a manifold, the $\mathbb{Z}_2^2$-superfields we consider here are of $\mathbb{Z}_2^2$-degree zero. To first order in $z$ one obtains a finite list of component fields; in general, we have a formal power series in $z$, that is, we have an infinite number of component fields.
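For orientation, one component expansion consistent with these degree assignments, truncated at first order in $z$, is the following; the names $G^a$ and $\chi^a_\pm$ match the z-constraint discussion below, while the degree $(0,0)$ component $Y^a$ at order $z\theta^+\theta^-$ is our own label, an assumption rather than notation fixed by the text:

\[
\Phi^a = X^a + \theta^-\psi^a_- + \theta^+\psi^a_+ + \theta^+\theta^-\, F^a_{+-}
 + z\, G^a + z\,\theta^-\chi^a_+ + z\,\theta^+\chi^a_- + z\,\theta^+\theta^-\, Y^a + O(z^2).
\]

Degree bookkeeping fixes $\deg(\psi^a_-) = (0,1)$, $\deg(\psi^a_+) = (1,0)$, $\deg(F^a_{+-}) = \deg(G^a) = (1,1)$, $\deg(\chi^a_+) = (1,0)$ and $\deg(\chi^a_-) = (0,1)$, so that every term is of total degree $(0,0)$.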
However, for our later purposes, it is sufficient to truncate to expressions at most linear in $z$. The first four types of fields in a multiplet are then the boson $X^a$, the spinors $\psi^a_\mp$ and the exotic boson $F^a_{+-}$. In order for the $\mathbb{Z}_2^2$-superfield to be a Lorentz scalar we require that, under Lorentz boosts, the component fields transform with the appropriate weights. Again we have bosons, exotic bosons, and mutually commuting left-handed and right-handed fermions.

Remark 2.4. It is also possible to consider $\mathbb{Z}_2^2$-superfields that transform non-trivially under Lorentz boosts. For example, spinor-valued maps can be made sense of. Moreover, $\mathbb{Z}_2^2$-superfields that carry a non-zero $\mathbb{Z}_2^2$-degree are also possible and will be considered later.

We define the action of $\mathbb{Z}_2^2$-supertranslations on a $\mathbb{Z}_2^2$-superfield via the Lie derivative. Specifically, the $\mathbb{Z}_2^2$-supersymmetry transformations are defined as $\delta \Phi^a := (\epsilon^- Q_- + \epsilon^+ Q_+)\,\Phi^a$. Then, to lowest order in the field, the component form of $\mathbb{Z}_2^2$-supersymmetry is obtained, where we have used the shorthand $\partial_- = \frac{\partial}{\partial x^-}$ and $\partial_+ = \frac{\partial}{\partial x^+}$ (see (2.7)).

Lemma 2.5. Let $\Phi^a$ be a degree zero $\mathbb{Z}_2^2$-superfield (see (2.6)); then the supersymmetry variations close in accordance with the $\mathbb{Z}_2^2$-supertranslation algebra.

Proof. This follows from direct computation using (2.6) and (2.5).

3. $\mathbb{Z}_2 \times \mathbb{Z}_2$-graded sigma models

3.1. Sigma models with non-propagating exotic bosons. We are now in a position to generalise supersymmetric sigma models to our double-graded setting. The target space we will take as $n$-dimensional Minkowski spacetime $\mathrm{Mink}^n$. A $\mathbb{Z}_2^2$-superfield is then a map $\Phi : M^{(1,1)}_2 \rightarrow \mathrm{Mink}^n$, and using coordinates $y^a$ on the target we write $\Phi^* y^a := \Phi^a(x^-, x^+, z, \theta^-, \theta^+)$. For any model to be well-defined, the Lagrangian Berezin section $L[\Phi] := D[x^-, x^+, z, \theta^-, \theta^+]\, L(\Phi)$ needs to be an integrable Berezin section (see Subsection 2.3). This means that the $\mathbb{Z}_2^2$-superfields in question must be compactly supported, that is, each of their components is compactly supported. Furthermore, the Lagrangian Berezin section must not contain a term of the form $z\, L(x^-, x^+)$. In fact, this condition is independent of the coordinates chosen (see Appendix B). In light of Proposition 2.3 and the form of the $\mathbb{Z}_2^2$-supertranslations, this condition is also $\mathbb{Z}_2^2$-supersymmetric. Depending on the exact form of the Lagrangian Berezin section, being integrable will place constraints on the $\mathbb{Z}_2^2$-superfields involved. The most general action involving a scalar $\mathbb{Z}_2^2$-superfield is
\[
S[\Phi] = \int D[x^-, x^+, z, \theta^-, \theta^+]\; K(\Phi, D_-\Phi, D_+\Phi),
\]
where $K(\Phi, D_-\Phi, D_+\Phi)$ has to be a Lorentz scalar and of degree $(1,1)$. We specialise to specific actions. The simplest Lagrangian Berezin section that is second-order in the covariant derivatives, with a degree $(1,1)$ Lagrangian, is built from $D_-\Phi^a\, D_+\Phi^b$. Being integrable means that the $\mathbb{Z}_2^2$-superfield (see (2.6)) cannot contain the terms $z\theta^-\chi^a_+$ and $z\theta^+\chi^a_-$. As we want an irreducible multiplet, and later we wish to add interactions, we will insist that the $\mathbb{Z}_2^2$-superfield does not contain the term $zG^a$ either.

Definition 3.1. Let $\Phi : M^{(1,1)}_2 \rightarrow M$ be a (scalar) $\mathbb{Z}_2^2$-superfield. Then $\Phi$ is said to be z-constrained if and only if its representative $\Phi^a(x^-, x^+, z, \theta^-, \theta^+)$ (see (2.6)) does not contain the terms $z\theta^-\chi^a_+(x^-, x^+)$, $z\theta^+\chi^a_-(x^-, x^+)$, and $zG^a(x^-, x^+)$.

Remark 3.2. A small modification of the proof of Proposition B.4 shows that being z-constrained is independent of the choice of coordinates on the source. However, we will not concern ourselves here with general coordinate transformations on the source.

Definition 3.3. The $\mathbb{Z}_2^2$-graded linear sigma model is given by the action
\[
S[\Phi] = \int D[x^-, x^+, z, \theta^-, \theta^+]\; D_-\Phi^a\, D_+\Phi^b\, \eta_{ba},
\]
where $\Phi$ is a compactly supported, z-constrained $\mathbb{Z}_2^2$-superfield with target $\mathrm{Mink}^n$ and $\eta$ the target Minkowski metric.

Remark 3.4. For simplicity, we have not included a potential term and so exclude Landau-Ginsburg type models from our discussion for the moment. We now proceed to present the component form of this action.
We only need to keep track of the $\theta^-\theta^+$ terms due to how the Berezin integral is defined. Thus, a quick calculation extracts the component form of this action, which is a massless Wess-Zumino type model (see [32]) in 2-d with an auxiliary field (in light-cone coordinates). This action is (quasi-)invariant under the $\mathbb{Z}_2^2$-supersymmetries (2.7). The equations of motion for the exotic boson are just $F^a_{+-} = 0$.

We now extend the kind of sigma model we are studying by allowing the target manifold to be a finite-dimensional (pseudo-)Riemannian manifold $(M, g)$. This leads to the following definition.

Definition 3.7. The $\mathbb{Z}_2^2$-graded non-linear sigma model is given by the action
\[
S[\Phi] = \int D[x^-, x^+, z, \theta^-, \theta^+]\; D_-\Phi^a\, D_+\Phi^b\, g_{ba}(\Phi),
\]
where $\Phi$ is a compactly supported, z-constrained $\mathbb{Z}_2^2$-superfield with target $(M, g)$.

Theorem 3.8. The $\mathbb{Z}_2^2$-graded non-linear sigma model action is well-defined and is invariant under $\mathbb{Z}_2^2$-supertranslations.

Proof. We break the proof up into the two statements. Well-defined: Taylor expanding the metric $g_{ba}(\Phi)$, we observe that it is itself z-constrained, as we have taken $\Phi$ to be z-constrained. Thus, the Lagrangian $L = D_-\Phi^a\, D_+\Phi^b\, g_{ba}(\Phi)$ is z-constrained (see Proposition 3.5). Moreover, because products of compactly supported $\mathbb{Z}_2^2$-superfields and their derivatives are again compactly supported, this Lagrangian is compactly supported. Invariance under $\mathbb{Z}_2^2$-supertranslations: This is evident as the canonical volume is invariant (Proposition 2.3) and the Lagrangian is built from invariant objects: $\Phi^a$, $D_-\Phi^b$ and $D_+\Phi^c$.

Now we address the possibility of including a superpotential. The immediate problem is that a smooth function $U(\Phi)$ of a degree zero $\mathbb{Z}_2^2$-superfield is itself of degree zero. Thus, it cannot be used as a potential in the linear or non-linear $\mathbb{Z}_2^2$-graded sigma models. A similar situation occurs in $\mathcal{N} = 1$ supersymmetric mechanics, where one cannot directly include a potential term, i.e., a term like $U(x)\psi$ is odd and cannot be included in the action. Manton's solution (see [26]) was to include an odd constant in the potential, or in other words, to consider a Grassmann odd potential of the form $W(x) = \alpha\, U(x)$, where $\alpha$ is the odd constant, i.e., an odd element of a chosen Grassmann algebra. This kind of complication was also noticed in $\mathbb{Z}_2^2$-mechanics by Aizawa et al. [3]. We propose a similar solution, by taking a degree $(1,1)$ $\mathbb{Z}_2^2$-superpotential $W(\Phi)$. Mathematically, we can understand this as a composition of internal homs. More informally, we are allowing graded constants in both the definition of $\Phi$ and $W(-)$. The interaction term is then
\[
S_{int}[\Phi] = \int D[x^-, x^+, z, \theta^-, \theta^+]\; W(\Phi),
\]
which, assuming $\Phi$ is z-constrained, is well-defined and is invariant under both Lorentz transformations and $\mathbb{Z}_2^2$-supertranslations. The total action, taking the target to be Minkowski spacetime for simplicity, can then be written in component form. The equation of motion for the exotic boson is algebraic, expressing $F^a_{+-}$ in terms of the gradient of the superpotential. This can then be used to eliminate the exotic boson, and the action can be rewritten accordingly. This is very similar to the standard case of supersymmetry. However, we replace the exotic boson with an exotic potential, the physical meaning of which is somewhat obscured, at least in this quasi-classical setting. Quantisation will, most likely, resolve this issue. As an example, one could pick a potential such that $W(\Phi) = \alpha\, U(\Phi)$, where $\alpha$ is a degree $(1,1)$ constant. To avoid complications with the equations of motion for the bosonic sector, it makes sense to assume that $\alpha^2 - 1 = 0$, so $\alpha$ belongs to a one-dimensional Clifford algebra. Notice that the parameter $\alpha$ appears in the (on-shell) component $\mathbb{Z}_2^2$-supersymmetry transformations.
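Schematically, and up to normalisation (this is our own sketch, not a display from the source), the role of the constraint $\alpha^2 = 1$ can be seen as follows: eliminating the exotic boson through its algebraic equation of motion feeds the square of the superpotential's gradient back into the bosonic sector,

\[
F^a_{+-} \propto \alpha\, \frac{\partial U}{\partial X^a}
\quad\Longrightarrow\quad
V(X) \propto \alpha^2 \left(\frac{\partial U}{\partial X}\right)^{\!2} = \left(\frac{\partial U}{\partial X}\right)^{\!2},
\]

so the bosonic potential is an ordinary real function despite the graded constant in $W(\Phi) = \alpha\, U(\Phi)$.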
We remark that Clifford algebra valued parameters have already appeared in the context of supersymmetry (see [23]). Moreover, Akulov & Duplij, in the context of supersymmetric mechanics, proposed that quasi-classical solutions to the equations of motion should also depend on Grassmann algebra valued constants (see [5]). As a specific example, take the target manifold to be $\mathbb{R}$ and consider a sine-Gordon-type potential. The resulting component action could be referred to as the $\mathbb{Z}_2^2$-graded sine-Gordon action. Note that the parameter $\alpha$ is required in the Yukawa-like coupling just on degree grounds. The equations of motion, in the full classical limit, i.e., $\psi_- = \psi_+ = 0$, reduce to the classical sine-Gordon equation. Given the importance of sine-Gordon and supersymmetric sine-Gordon in the theory of integrable systems and soliton theory, we firmly believe this $\mathbb{Z}_2^2$-graded version deserves further study.

3.2. A model with propagating exotic bosons. We now examine a simple model in which the exotic boson is propagating. To do this we consider a scalar $\mathbb{Z}_2^2$-superfield that carries $\mathbb{Z}_2^2$-degree $(1,1)$. That is, we consider the target space to be $\mathbb{R}^{0|1,0,0}$. Naturally, we insist that this $\mathbb{Z}_2^2$-superfield $\Psi \in \mathrm{Hom}_c(M^{(1,1)}_2, \mathbb{R}^{0|1,0,0})$ be compactly supported and z-constrained as before. The degrees of the component fields are now shifted by $(1,1)$ relative to the degree zero case; in particular, the lowest component is the exotic boson $G$. We can directly include a potential by considering a formal power series in the $\mathbb{Z}_2^2$-superfield. We require that the potential be of degree $(1,1)$ and so, assuming no constants of non-zero degree, the power series must be of the form
\[
V(\Psi) = \sum_{k \geq 0} a_{k+1}\, \Psi^{2k+1},
\]
where $a_{k+1} \in \mathbb{R}$; only odd powers of the degree $(1,1)$ superfield are themselves of degree $(1,1)$. The following theorem is evident given our earlier discussions.

Theorem 3.10. The $\mathbb{Z}_2^2$-graded exotic scalar field theory action is well-defined, and invariant under two-dimensional Lorentz boosts and $\mathbb{Z}_2^2$-supertranslations.

A direct calculation gives the component form of the action (ignoring the potential for simplicity) and the component form of the $\mathbb{Z}_2^2$-supersymmetry transformations.

Concluding Remarks

In this paper we proposed a double-graded version of the $\mathcal{N} = (1,1)$ supertranslation algebra in two dimensions, built a $\mathbb{Z}_2^2$-space realisation thereof, and used this to construct $\mathbb{Z}_2^2$-supersymmetric sigma models with either auxiliary or propagating exotic bosons. The focus has been on quasi-classical aspects of the theory, showing both the novel features and potential problems. There is much that remains to be done, including quantisation, the construction of sigma models with general $\mathbb{Z}_2^2$-manifold target spaces (including $\mathbb{Z}_2^2$-Lie groups), models with higher dimensional sources, and the development of $\mathbb{Z}_2^n$-graded models ($n > 2$). The immediate issue here is that the theory of integration on $\mathbb{Z}_2^n$-manifolds is not yet properly developed. Thus, at the time of writing, it is impossible to mimic superspace methods in any generality. Many aspects of the theory of $\mathbb{Z}_2^n$-manifolds are currently being developed. Nonetheless, in this paper, we have shown that relatively simple, yet novel, double-graded quasi-classical models in two dimensions exist. An interesting feature is that the grading puts restrictions on the actions, quite independently of the symmetry. In particular, the Lagrangian Berezin sections must be of degree $(1,1)$, so that the component action is of degree $(0,0)$. This adds another layer of complication when directly trying to mimic superspace constructions. For example, in order to include interactions in some of the models, we require graded constants in the potentials.
The physical relevancy of $\mathbb{Z}_2^2$-graded theories, and in particular $\mathbb{Z}_2^2$-supersymmetry, is not at all clear. The exact rôle of exotic degree $(1,1)$ bosons in physics is an open question. Potentially less confusing are the two sectors of mutually commuting spinors. When one has two independent spinors it is usually assumed that they mutually anticommute. However, this is an assumption that can be dropped, and taking the spinors to mutually commute is consistent (see Aste & Chung [6]). What would be a clear phenomenological signal that nature utilises $\mathbb{Z}_2^2$-gradings and this novel kind of supersymmetry? This is the key question, and one that may be answered as further models are developed.

In fact, using Proposition B.5 again, we can drop terms in the Jacobi matrix that are $O(z^2)$ and those that contain $\eta$ and $\theta$; thus, we need only examine the resulting simplified matrix. From Definition A.1 it is clear that the determinant of the top-left block of this matrix has no term linear in $z$; one can directly calculate this determinant. The determinant of the bottom-right block is $\phi^\theta_{010}\,\phi^\eta_{100} - z^2\, \phi^\theta_{101}\,\phi^\eta_{011}$. We need the inverse of this, which is a formal power series in $z$. It is well-known that $(1 - z^2)^{-1} = 1 + z^2 + z^4 + \cdots$, and so it is clear that the inverse of the determinant of the bottom-right block does not contain a term linear in $z$. Thus, $\mathbb{Z}_2^2\mathrm{Ber}(J)$ does not contain a term of the form $z f(t,s)$.

Theorem B.7. The Berezin integral of an integrable Berezin section of $\mathbb{R}^{2|1,1,1}$ (once an orientation has been fixed) is well-defined.

Due to the definition of the Berezin integral, we can ignore terms of type $z$ in the integrand; in effect, we can set $z = 0$ in the integrand. So, we can restrict attention to functions of the form
\[
\sigma' = \big(\theta\eta\, \phi^z_{110}(t,s)\big)^{\kappa}\, \big(\theta\, \phi^\theta_{010}(t,s)\big)^{\alpha}\, \big(\eta\, \phi^\eta_{100}(t,s)\big)^{\beta}\; \sigma'_{\beta\alpha\kappa}\big(\phi^t(t,s), \phi^s(t,s)\big).
\]
Following Proposition B.5, we can drop terms of type $z$ when calculating the $\mathbb{Z}_2^2$-Berezinian, and so can consider a simplified Jacobian matrix. Then, remembering we have assumed integrability, the Berezin integral reduces to
\[
\int_{\mathbb{R}^2} \mathrm{d}t\, \mathrm{d}s\; \sigma_{110}(t,s).
\]
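As a concrete illustration of Theorem B.7, for a compactly supported section in the coordinates $(t, s, z, \theta, \eta)$ of $\mathbb{R}^{2|1,1,1}$, only the $\theta\eta$ coefficient at $z = 0$ contributes:

\[
\sigma = D[t, s, z, \theta, \eta]\,\Big( f(t,s) + \theta\eta\, \sigma_{110}(t,s) + z\,\theta\eta\, h(t,s) + \cdots \Big)
\quad\Longrightarrow\quad
\int \sigma = \int_{\mathbb{R}^2} \mathrm{d}t\, \mathrm{d}s\; \sigma_{110}(t,s);
\]

terms of type $z$ drop out of the integral, while integrability guarantees that no term of the form $z\, g(t,s)$, which would spoil coordinate-independence, is present in the first place.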
Numerical and Experimental Analysis of Material Removal and Surface Defect Mechanism in Scratch Tests of High Volume Fraction SiCp/Al Composites

This paper provides a comprehensive and further insight into the sensitivity of material removal and the surface defect formation mechanism to scratch depth during single-grit scratch tests of 50 vol% SiCp/Al composites. A three-dimensional (3D) finite element model with a more realistic 3D micro-structure, particle-matrix interfacial behaviors, particle-particle contact behaviors, particle-matrix contact behaviors and a Johnson-Holmquist-Beissel (JHB) model of SiC was developed. The scratch simulation, conducted at a scratch velocity of 10 mm/min and a loading rate of 40 N/min, revealed that the scratch depth plays a crucial role in material removal and the surface forming process. Brittle fracturing of SiC particles and surface defects become more deteriorative under a large scratch depth ranging from 0.0385 to 0.0764 μm. The above phenomenon can be attributed to the influence of scratch depth on SiC particle transport; the increase in the amount of SiC particle transport resulting from an increase of scratch depth raises the occurrence of particle-particle collisions, which provide hard support and shock for the scratched particles; therefore, brittle fracturing gradually becomes the major removal mode of SiC particles as the scratch depth increases. On the deteriorated surface, various defects are observed, i.e., lateral cracks, interfacial debonding, cavities filled with residual broken particles, etc. The von Mises stress distribution shows that SiC particles bear the vast majority of the load, and thus present greater stress than the surrounding Al matrix: for example, a stress ratio of 3 to 30 under a scratch depth of 0.011 mm. Namely, SiC particles impede stress diffusion within the Al matrix. Finally, SEM images of the scratched surface obtained from the single-grit scratch experiments verify the numerical analysis results.

Introduction

Particle reinforced metal matrix composites (PRMMCs) own excellent ratios of strength-to-weight, relatively low thermal expansion coefficients, and corrosion and wear resistance. Therefore, they have drawn worldwide attention in the transportation, aerospace and defense fields. But as difficult-to-machine materials, owing to the inhomogeneity between the hard SiC and the soft metal matrix, PRMMCs present various surface and subsurface defects after machining operations, especially high volume fraction PRMMCs [1]. In order to achieve good surface and subsurface quality, grinding has been introduced as a relatively effective processing method for those PRMMCs, but extensive grinding-induced damage still exists in high volume fraction SiCp/Al. Systematic improvement of grinding quality, such as by parameter optimization, needs a comprehensive understanding of the material removal mechanism of high volume fraction SiCp/Al composites [2]. The study aims to track the dynamic material removal and defective-surface formation processes of high volume fraction SiCp/Al composites during single-grit scratch tests via a more realistic 3D finite element (FE) model, which captures the randomness of reinforced-particle sizes, shapes and positions in 3D space; additionally, weak particle-matrix interfacial effects, particle-particle contact and particle-matrix contact behaviors are taken into account. The simulation results are finally validated by single-grit scratch experiments.
The simulation and experiment of the single-grit scratch provide theoretical guidance and key parameter optimization for the grinding of SiCp/Al in real applications. Additionally, the simulation technology of scratch tests in this study will also promote behavioral studies of coatings applied to cutting tools, which are being widely conducted by more and more researchers, such as Rodriguez-Barrero [16], Fernandez-Abia [17] and Fernandez-Valdivielso [18].

Specimen

In order to satisfy the urgent request of the opto-mechanical structure for grinding quality of SiCp/Al composites (40-50 vol%), material removal and the surface defect mechanism were analyzed via single-grit scratch tests of 50 vol% SiCp/Al composites, which were prepared by a powder metallurgy technique in which the average diameter of the reinforced particles was 20 µm. Firstly, mixing of Al5083 powders and SiC particles was sufficiently conducted by ball milling at 150 rpm for 10 h with a weight ratio of 27:32, and the mixture was put into a mold for cold isostatic compaction. It was then heated in a vacuum furnace HIP-200 at 753 K for 2 h under 120 MPa, followed by cooling with the furnace. Finally, extrusion of the composite was conducted at 773 K with an extrusion ratio of 20:1; then, the composite was held at 833 K for 3 h and artificially aged at 423 K for 18 h.

Finite Element Modelling

3D FE scratch models of 50 vol% SiCp/Al composites were established via Abaqus/Explicit 2017; the scratch process is as follows: the indenter, with cone angle 120° and tip radius 20 µm, exerts an increasing load on the specimen.
In the meantime, the specimen moves with uniform motion, and a gradually deepening scratched groove then appears on the specimen's surface. The process is divided into 3 sequential sub-processes to reduce computation time, as shown in Figure 2. The selected scratch parameters are listed in Table 2.

SiC particles have polyhedral structures, and particle shapes, sizes and positions in the Al matrix are all random. Based on the above structural characteristics of the reinforced particles, a program package was developed to achieve geometric modeling of various polyhedral particles with random sizes and shapes, and random distributions of particle positions within a prescribed scope; namely, the volume fraction of SiC particles is about 50%, and particle sizes satisfy a normal distribution of mean 20 µm and variance 5 µm. An established 3D geometry model of the SiCp/Al composite (100 × 100 × 100 µm³) for the first two sub-processes is shown in Figure 3, and the size of the SiCp/Al composite for the third sub-process is 200 × 200 × 200 µm³. A free 10-node modified thermally coupled second-order tetrahedron meshing technique and a seed size of 3 µm were adopted to generate the mesh for the SiC particles and the Al matrix, as shown in Figure 4. Table 3 presents the basic mechanical properties of the SiC particles, the 5083Al matrix and the diamond indenter.
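As an illustration of the particle-population statistics described above (not the authors' actual program package), a minimal Python sketch that samples particle diameters from a normal distribution with mean 20 µm and standard deviation 5 µm, and accumulates equivalent-sphere volumes until the 50% target volume fraction is reached, might look as follows; the 100 µm RVE edge length comes from the model above, while the spherical-volume approximation and the rejection of non-positive draws are our simplifying assumptions.

    import numpy as np

    # Assumptions (not from the paper's code): spherical equivalent volumes,
    # diameters ~ N(mean=20 um, std=5 um) clipped to positive values.
    rng = np.random.default_rng(seed=0)

    RVE_EDGE_UM = 100.0                    # RVE edge length (um), per Figure 3
    RVE_VOLUME = RVE_EDGE_UM ** 3          # um^3
    TARGET_VF = 0.50                       # 50 vol% SiC

    diameters = []
    packed_volume = 0.0
    while packed_volume / RVE_VOLUME < TARGET_VF:
        d = rng.normal(20.0, 5.0)          # mean 20 um, std 5 um
        if d <= 0.0:                       # reject non-physical draws
            continue
        diameters.append(d)
        packed_volume += (np.pi / 6.0) * d ** 3   # sphere volume

    print(f"particles: {len(diameters)}, "
          f"volume fraction: {packed_volume / RVE_VOLUME:.3f}, "
          f"mean diameter: {np.mean(diameters):.1f} um")

In the actual model, polyhedral rather than spherical particles are generated and placed at random non-overlapping positions, but the sampling logic is the same.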
In all studies of SiCp/Al machining so far, a linearly elastic constitutive model with damage initiation has been adopted for the SiC particles. However, this study describes the SiC particles with the Johnson-Holmquist-Beissel (JHB) model, which is more suitable for SiC responses under large strain and high strain rate [20]. The strength of the undamaged SiC (damage variable D = 0) is given by Equation (4):

$$\sigma = \left[\sigma_{i} + (\sigma_{max} - \sigma_{i})\left(1 - e^{-\alpha_{i}(P - P_{i})}\right)\right]\left(1 + C\ln\dot{\varepsilon}^{*}\right), \quad \alpha_{i} = \frac{\sigma_{i}}{(\sigma_{max} - \sigma_{i})(P_{i} + T)} \tag{4}$$

where σ_i, σ_max, P_i, T and C are material parameters; ε̇* = ε̇_pl/ε̇_0 is the dimensionless equivalent strain rate (ε̇_pl is the equivalent plastic strain rate and ε̇_0 is the reference strain rate, respectively); and P is the pressure function expressed in Equation (6). The strength of the fractured SiC (D = 1) is given by Equation (5):

$$\sigma = \left[\sigma_{f} + (\sigma_{max}^{f} - \sigma_{f})\left(1 - e^{-\alpha_{f}(P - P_{f})}\right)\right]\left(1 + C\ln\dot{\varepsilon}^{*}\right), \quad \alpha_{f} = \frac{\sigma_{f}}{(\sigma_{max}^{f} - \sigma_{f})\,P_{f}} \tag{5}$$

where σ_f, σ_max^f and P_f are the corresponding material parameters of the fully fractured material. The pressure function P is described as Equation (6):

$$P = \begin{cases} K_{1}\mu + K_{2}\mu^{2} + K_{3}\mu^{3} + \Delta P, & \mu \geq 0 \\ K_{1}\mu, & \mu < 0 \end{cases} \tag{6}$$

where µ = ρ/ρ_0 − 1; K_1, K_2 and K_3 are constants; and ρ_0 and ρ denote the reference and current density, respectively. The bulking pressure increment ΔP after failure is obtained from the energy balance

$$\Delta P_{t+\Delta t} = -K_{1}\mu_{f} + \sqrt{\left(K_{1}\mu_{f} + \Delta P_{t}\right)^{2} + 2\beta K_{1}\Delta U}$$

where µ_f equals µ at the time of failure, β is the ratio of the elastic energy loss, and ΔU denotes the dilation increment. The damage parameter ω is expressed as Equation (7):

$$\omega = \sum \frac{\Delta\varepsilon_{pl}}{\varepsilon_{f}^{pl}(P)} \tag{7}$$

where Δε_pl denotes the equivalent plastic strain increment and ε_f^pl(P) is given by

$$\varepsilon_{f}^{pl}(P) = D_{1}\left(P^{*} + T^{*}\right)^{D_{2}} \tag{8}$$

where D_1 and D_2 are material constants, P* = P/σ_max and T* is the correspondingly normalized tensile strength. In the JHB model, the SiC particles fail immediately when ω = 1; otherwise (ω < 1) there is no damage. The JHB model material parameters used for SiC are listed in Table 6 [20]. The diamond indenter was treated as an analytical rigid body because of its high elastic modulus; the material parameters applied in the FE computational analysis are listed in Table 3.
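The JHB pieces that matter most for the damage bookkeeping can be sketched compactly. The sketch below follows the forms reconstructed above; all numeric constants in the demo are placeholders rather than the Table 6 values, and the T* normalization mirroring P* = P/σ_max is our assumption.

```python
import math

def jhb_pressure(mu, K1, K2, K3, delta_P=0.0):
    """Equation (6): cubic equation of state in compression (mu >= 0, plus
    the bulking increment delta_P after failure), linear in tension.
    mu = rho/rho0 - 1."""
    if mu >= 0.0:
        return K1 * mu + K2 * mu**2 + K3 * mu**3 + delta_P
    return K1 * mu

def jhb_failure_strain(P, T, sigma_max, D1, D2):
    """Equation (8): pressure-dependent plastic strain to failure."""
    P_star, T_star = P / sigma_max, T / sigma_max
    return D1 * max(P_star + T_star, 1e-12) ** D2

def step_damage(omega, d_eps_pl, P, T, sigma_max, D1, D2):
    """Equation (7): accumulate damage; the particle fails immediately
    once omega reaches 1 (no gradual softening in the JHB model)."""
    omega += d_eps_pl / jhb_failure_strain(P, T, sigma_max, D1, D2)
    return min(omega, 1.0), omega >= 1.0

# Demo with placeholder constants (MPa); not the paper's Table 6 values.
P = jhb_pressure(mu=0.02, K1=220e3, K2=361e3, K3=0.0)
omega, failed = step_damage(0.0, 0.005, P, T=750.0,
                            sigma_max=12.7e3, D1=0.16, D2=1.0)
print(f"P = {P:.0f} MPa, omega = {omega:.3f}, failed = {failed}")
```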
Particle-Matrix Interfacial Modeling

The interfacial behaviors between the SiC particles and the Al matrix play a crucial role in the material removal and surface defect formation processes. Because of its tiny dimensions, the interfacial thickness is ignored. Two kinds of interfacial behaviors are considered in this study: (i) a cohesive interface, which may be damaged and fail; and (ii) a friction interface, which takes over after failure of the cohesive interface. The cohesive interfacial behaviors are achieved via the cohesive behavior model and the cohesive damage model, and the friction interfacial behaviors are realized via the tangential and normal behavior models in the general contact type of Abaqus/Explicit 2017. The cohesive behavior model specifies that only slave nodes initially in contact experience cohesive behaviors, and it uses the traction-separation model defined by Equation (9):

$$\mathbf{t} = \begin{Bmatrix} t_{n} \\ t_{s} \\ t_{t} \end{Bmatrix} = \begin{bmatrix} K_{n} & 0 & 0 \\ 0 & K_{s} & 0 \\ 0 & 0 & K_{t} \end{bmatrix} \begin{Bmatrix} \delta_{n} \\ \delta_{s} \\ \delta_{t} \end{Bmatrix} \tag{9}$$

where t denotes the nominal traction stress vector, consisting of the normal component t_n and the shear components t_s and t_t; the corresponding separations are δ_n, δ_s and δ_t. According to Lotfian [21], the undamaged stiffnesses K_n, K_s and K_t are all set to 1 × 10^12 MPa·mm^-1. The cohesive damage model includes damage initiation, specified by the cohesive interfacial strength t^0, and damage evolution, specified by the fracture energy Γ. As for the particle-matrix cohesive interfacial strength t^0, Guo [24] designed a novel experiment and measured t^0 = 133 ± 26 MPa, while Nan [25] and Su [26] reasonably deduced that the cohesive interfacial strength follows approximately t^0 ~ 1/d^(1/2), where d is the average particle size; for the average size of 20 µm this gives t^0 = 138 MPa, with the fracture energy set as Γ = 91.9 J/m². The two conclusions above are evidently very close; therefore, the cohesive interfacial strength is set as t^0 = t_n^0 = t_s^0 = t_t^0 = 133 MPa and Γ = 0.0919 mJ/mm² in this work. For the friction interfacial behaviors, Coulomb's friction law with a friction coefficient µ = 0.3 is used to model sliding conditions in the tangential behavior model, and hard contact is selected as the pressure-overclosure relation in the normal behavior model.
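A minimal scalar version of the cohesive response, using the stiffness, strength and fracture energy quoted above, gives some feel for the numbers; the linear (energy-based) softening and the mode-independent treatment are our assumptions, since Abaqus evaluates the actual law internally.

```python
# Scalar sketch of a bilinear traction-separation law with the interface
# parameters quoted above (assumed identical in the normal and both shear
# directions, with linear energy-based softening).
K = 1.0e12        # undamaged stiffness, MPa/mm
T0 = 133.0        # cohesive strength, MPa
GAMMA = 0.0919    # fracture energy, mJ/mm^2 (= MPa*mm = 91.9 J/m^2)

DELTA_0 = T0 / K             # separation at damage initiation, mm
DELTA_F = 2.0 * GAMMA / T0   # separation at complete failure, mm

def traction(delta):
    """Traction (MPa) carried by the interface at separation delta (mm)."""
    if delta <= DELTA_0:                 # undamaged, linear elastic
        return K * delta
    if delta >= DELTA_F:                 # fully debonded
        return 0.0
    # Linear softening: damage D grows from 0 at DELTA_0 to 1 at DELTA_F.
    D = (DELTA_F * (delta - DELTA_0)) / (delta * (DELTA_F - DELTA_0))
    return (1.0 - D) * K * delta

print(f"initiation separation = {DELTA_0:.2e} mm")
print(f"failure separation    = {DELTA_F:.2e} mm (~1.4 um)")
print(f"traction at midpoint  = {traction(0.5 * (DELTA_0 + DELTA_F)):.1f} MPa")
```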
Particle-Particle, Indenter-Particle and Indenter-Matrix Contact Modeling

Regular contact interfaces were adopted for the particle-particle, indenter-particle and indenter-matrix interactions; Coulomb's friction law was used to model sliding conditions for the tangential behavior and hard contact for the normal behavior. The friction coefficient µ = 0.4 was used for particle-particle contact, µ = 0.1 for indenter-particle contact, and µ = 0.15 for indenter-matrix contact.

Loads and Boundary Conditions

In Figure 5, the indenter, constrained to the reference point RF, can only move along the x-direction at v = −10 mm/min and along the y-direction under a normal force

$$F(t) = F_{0} + \frac{\Delta F}{\Delta t}\,t$$

where F_0 and ΔF/Δt are the initial normal force and its increment per unit time, respectively. As shown in Figure 5, the bottom and front faces of the SiCp/Al 3D model are constrained in 6 degrees of freedom (DOF), U_x = U_y = U_z = UR_x = UR_y = UR_z = 0, while the z-direction of its two lateral faces is fixed, V_z = 0.
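Since the exact Table 2 values are not reproduced here, the helper below simply illustrates the prescribed kinematics: a linearly ramped normal force together with a constant scratch velocity. F0 and the loading rate in the demo are assumed values, chosen only to lie within the tester's ranges.

```python
def scratch_state(t_min, F0=0.0, dF_dt=20.0, v=10.0):
    """State of the scratch test at time t_min (minutes).

    The normal force ramps linearly, F(t) = F0 + (dF/dt) * t, while the
    specimen translates at constant velocity v (mm/min). F0 and dF_dt are
    illustrative placeholders; the actual values are those of Table 2.
    """
    force = F0 + dF_dt * t_min       # N
    position = v * t_min             # mm along the scratch direction
    return force, position

# With these assumed numbers the 20 N peak load is reached after 1 minute,
# i.e. over a 10 mm scratch length.
for t in (0.0, 0.25, 0.5, 1.0):
    F, x = scratch_state(t)
    print(f"t = {t:4.2f} min  F = {F:5.1f} N  x = {x:4.1f} mm")
```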
Single-Grit Scratch Experiments

The experiments were conducted on the MFT-4000 Scratch Tester for Material Surface Properties, developed by Lanzhou Huahui Instrument Technology Co., Ltd. (Lanzhou, China), using a conical diamond indenter with a cone angle of 120° and a tip radius of 20 µm (see Figure 6a). Table 7 shows the main technical indicators of the MFT-4000 Scratch Tester. An automatic loading device exerts an increasing load on the specimen via the indenter; in the meantime, the specimen moves with a uniform motion, and a gradually deepening scratched groove appears on the specimen's surface, as illustrated in Figure 6b. The selected scratch parameters are listed in Table 2. The scratched surface morphology was observed via a ZEISS Ultra Plus Field Emission Scanning Electron Microscope (SEM, ZEISS, Jena, Thuringia, Germany).

Table 7. Main technical indicators of the MFT-4000 Scratch Tester:
  Loading mode: automatic loading
  Loading range: 0.25 N to 200 N, continuous automatic loading, precision 0.25 N
  Scratch length: 2 mm to 40 mm
  Scratch velocity: 10 mm/min
  Loading rate: 1 N/min to 100 N/min
  Measuring range: 0.5 µm to 30 µm
  Friction measuring range: 10 N to 100 N, precision 0.25 N

Material Removal Process

Multiple factors affect the material removal process in scratch tests of the high volume fraction SiCp/Al composite, such as indenter-particle contact, particle-matrix interfacial behavior and particle motion within the matrix, all of which are directly influenced by the scratch depth.

The Initial Scratch Process

During the initial scratch process, under a normal load of 0 to 5 N and a scratch depth of 0 to 0.011 mm, the model is sectioned along the scratch direction for material removal analysis, as shown in Figure 7. When the scratch depth is very small and the indenter scratches only the SiC particle, the SiC particle and Al matrix endure elastic deformation; they revert to their original state after the indenter scratches into the Al matrix from the SiC particle. The material removal is negligible (see Figure 7a,b). As the scratch depth increases, the Al matrix is removed in ductile mode when the indenter scratches the Al matrix (see Figure 7c). When the indenter scratches into the SiC particles from the Al matrix, a SiC particle first rotates on a small scale and moves toward the lower left because of the indenter's negative rake, and then heads toward other SiC particles. As the indenter advances, the Al matrix between the approaching SiC particles endures squeezing and damage failure, and the SiC particle removal occurs mainly in the ductile mode (see Figure 7d,e), but partly in the brittle fracture mode when a particle crashes into another particle (see Figure 8d,e). After the particle is completely removed, the indenter scratches some other particles (see Figures 7f and 9f); the indenter is in contact with some particles almost all the time due to the high volume fraction of particles in the Al matrix.
The von Mises stress distributions in the longitudinal section along the scratch direction and in the cross section perpendicular to the scratch direction during the initial scratch process are shown in Figures 8 and 9, respectively. Because of their high hardness, the SiC particles bear the vast majority of the load; thus, they present greater stress than the surrounding Al matrix. As shown in Figure 8, in the longitudinal section, the von Mises stress diffuses from the workpiece region pressed by the indenter tip to the lower left because of the indenter's negative rake. The particles in the lower left impede stress diffusion; therefore, the SiC particles equivalently enhance the anti-deformation ability of the Al matrix. Figure 8 reveals four situations: (1) The indenter scratches a SiC particle; the particle experiences highly concentrated stress (see Figure 8a,e,f). (2) The indenter scratches only the Al matrix; the stress in the scratched area of the Al matrix is of a larger amplitude, and the accumulated stress is spread to the particles in the lower left (see Figure 8c). (3) The indenter scratches into the Al matrix from the SiC particle; the stress of the Al matrix increases with a large amplitude as that of the SiC particle decreases (see Figure 8b). (4) The indenter scratches into the SiC particle from the Al matrix; a high localized stress zone is found at the indenter-particle contact area when the indenter first engages with the particle (see Figure 8d). As the indenter advances, the maximum localized stress zone, with larger amplitude, moves to the left side of the particle, which leads to damage initiation; the stress is transferred to another particle in the lower left via the Al matrix, and the particle scratched by the indenter is then removed gradually, mainly in the ductile mode. The Al matrix between the two approaching particles is continuously squeezed, deforms seriously and is finally removed with the advancement of the indenter (see Figure 8e). In the cross section shown in Figure 9, the stress is transferred from the workpiece region pressed by the indenter tip to the bottom, left and right sides, respectively. As the indenter advances, the stress on the particle contacting the indenter tip increases gradually, and the particle is pushed down toward another particle (see Figure 9b,c); then, the Al matrix between the two approaching particles is squeezed until it is removed, and the particle mainly experiences ductile removal (see Figure 9c,d) and slight brittle fracturing (see Figure 9e). A very small portion of particle-matrix interfacial debonding (see Figure 9d) occurs because of the particle rotation phenomenon, which is also observable in Figure 7d,e. During the process, the indenter pushes some particles aside; these particles are removed in ductile mode because of the small scratch depth (see Figure 9e,f).

During the initial scratch process, under a scratch depth of 0 to 0.011 mm, the SiC particles and Al matrix are primarily removed in ductile mode; brittle fracturing of SiC particles rarely occurs because of the small scratch depth and the flexible support provided by the Al matrix, which is beneficial to the ductile removal of SiC particles. Moreover, large amounts of particle-matrix interfacial failure and particle-particle collision do not appear, owing to the minor migration of SiC particles. The final state of the SiC particles is shown in Figure 10.
The Middle Scratch Process

During the middle stage of the scratch process, under a normal load of 5 to 12 N and a scratch depth of 0.011 to 0.0385 mm, the model is sectioned along the scratch direction for material removal analysis, as shown in Figure 11. As the scratch depth increases from 0.011 to 0.0385 mm, not only does SiC-Al interfacial debonding become more evident and widespread, but so do matrix failure around the interface and particle-particle collision (see Figure 11a,b,d,e). It is clear that brittle fracturing of particles is more severe when particle-particle collisions occur (see Figure 11c), because a particle-particle collision has a dramatic impact on each particle. As the indenter advances, the particle is pushed to experience great migration and (lateral) deflection, which result in serious deformation and failure of the Al matrix and in interfacial debonding (see Figure 11c,e). During the process, some broken particles are pushed into the Al matrix (see Figure 11d,f), while some small SiC fragments under the indenter tip are possibly pushed ahead on the scratched surface (see Figure 11f). Moreover, microcracks widely exist on the particles and the particle-matrix interface (see Figure 11d).

The von Mises stress distributions in the longitudinal section along the scratch direction and in the cross section perpendicular to the scratch direction during the middle scratch process are shown in Figures 12 and 13, respectively. The characteristics of the von Mises stress distributions on the two sections are similar to those during the initial scratch process; namely, the SiC particles bear the vast majority of the load and the greatest stress, and impede stress transmission in the Al matrix.
The von Mises stress in the longitudinal section diffuses from the region pressed by the indenter tip to the lower left (see Figure 12), and the von Mises stress in the cross section is transferred from the region pressed by the indenter tip to the bottom, left and right sides (see Figure 13). As shown in Figures 12 and 13, the overall von Mises stress level of the scratched zone in the various stages of the middle scratch process is relatively higher than that of the initial scratch process, which illustrates the deteriorating material removal process, including interfacial debonding, brittle fracturing of SiC particles, serious deformation and failure of the Al matrix and particle-particle collision. Figure 13 also reveals the lateral material removal process: the particles underneath the indenter tip are pushed down toward another particle, while the particles on the left or right side of the tip are pushed aside to impact other particles. During the process, failure of the Al matrix between the approaching particles and particle-matrix interfacial debonding occur, followed by particle-particle collision, which causes the brittle fracturing of particles (see Figure 13b,c).
As the indenter advances, the above phenomena become more evident, and some fragmented particles are pushed ahead by the indenter (see Figure 13d,e). With some fragmented particles pushed ahead, cavities filled with residual particle fragments emerge (see Figure 13f). During the middle scratch process, at a depth of 0.011 to 0.0385 mm, ductile removal and brittle fracturing of SiC particles occur in roughly equal measure; meanwhile, several detrimental phenomena, i.e., particle-matrix interfacial debonding, serious deformation of the Al matrix and particle-particle collision, become more common, and cavities filled with residual particle fragments appear. Because of the greater migration of SiC particles, the clustering of particles becomes more evident; the final state of the SiC particles is shown in Figure 14.
The Final Scratch Process

During the final stage of the scratch process, under a normal load of 12 to 20 N and a scratch depth of 0.0385 to 0.0764 mm, the model is sectioned along the scratch direction for material removal analysis, as shown in Figure 15. Based on the observations in Figure 15, the deteriorating material removal process becomes more evident with increasing scratch depth, including particle-particle collision, brittle fracturing of particles, interfacial debonding, serious deformation and failure of the Al matrix, cavities filled with residual particles, and so on. Additionally, large-scale cracks in the SiC-Al interface become widespread (see Figure 15e,f). The above phenomena are the result of the larger scratch depth, under which particles are forced to move and rotate on a large scale; particle-particle collision and particle-matrix interfacial debonding then consequently occur and become universal during the process (see Figure 15b-e). On account of the high pressure applied by the indenter and the impact effect produced by particle-particle collision, brittle fracturing is the primary mode of SiC material removal (see Figure 15b,d-f). As the indenter advances, the broken particles are pushed into the Al matrix (see Figure 15a,b,d), and completely debonded particles are forced to move ahead (see Figure 15e,f). Due to the severe and irreversible plastic deformation of the Al matrix around the particle-particle collision area, failure and cracking of the Al matrix also frequently appear (see Figure 15c,e,f). Because of the non-uniform deformation between the SiC particles and the Al matrix when a particle is forced to move or rotate on a large scale, both of which result from a large scratch depth, the interfacial debonding evolves into cracks in the interface (see Figure 15f), which further result in large-scale lateral cracks on the scratched surface; this will be discussed in the next section. During the process, cavities filled with residual particles are also observed, as shown in Figure 15d.

The von Mises stress distributions in the longitudinal section along the scratch direction and in the cross section perpendicular to the scratch direction during the final scratch process are shown in Figures 16 and 17, respectively.
The characteristics of the von Mises stress distributions in the longitudinal section and the cross section during the final scratch process are similar to those during the initial and middle scratch processes, but the difference is that the von Mises stress in the Al matrix is higher and closer to the von Mises stress in the SiC particles during the final stage, because the impeding effect of the SiC particles on stress transmission in the Al matrix weakens with increasing scratch depth. The lateral material removal process during the final stage can be investigated in detail via observation of Figure 17. The remarkable increase of the particle transport distance within the Al matrix, resulting from the increasing scratch depth, leads to the widespread occurrence of particle-particle collision (see Figure 17b-f). Brittle fracturing of SiC particles and interfacial debonding then become common phenomena (see Figure 17c,d). As the indenter advances, broken and completely debonded particles are pushed ahead (see Figure 17d-f), along with failure of and cracks in the Al matrix (see Figure 17e,f) and large-scale cracks on the particle-matrix interfaces, which will be discussed in the next section.

During the final scratch process, under a scratch depth of 0.0385 to 0.0764 mm, brittle fracturing is the primary mode of SiC material removal; meanwhile, the detrimental phenomena, such as particle-particle collision, particle-matrix interfacial debonding, failure of and cracks in the Al matrix, long-distance transport of particles and large-scale cracks on the particle-matrix interface, become evident. In particular, the large-scale cracks on the particle-matrix interface are the major cause of the lateral cracks on the scratched surface, which will be discussed in the next section. Because of the large-scale transport of SiC particles, clusters of particles become more evident during the final scratch process. The final state of the SiC particles is shown in Figure 18.

The Scratched Groove Topography

The scratched groove topography is a result of the material removal process.
Figure 19 shows the formation process of the groove topography during the initial scratch stage, under a scratch depth of 0 to 0.011 mm; the SiC particles are primarily removed in ductile mode. It is clear that the scratched groove has a good surface quality with very few defects. As the scratch depth increases from 0.011 to 0.0385 mm during the middle scratch stage, particle-particle collision becomes evident, which causes brittle fracturing to become the major removal mode of SiC particles, while the Al matrix between the approaching particles deforms heavily; these phenomena can be seen in Figure 20. During the process, particles are broken into large pieces and small fragments, which are pushed into the Al matrix. Because of the non-uniform deformation between the SiC particles and the Al matrix, interfacial debonding appears on the surface, which evolves into cracks on the interfaces. In addition, some broken particle pieces are occasionally pushed out by the indenter to form cavities filled with residual particles. Consequently, the scratched surface is considerably coarse, featuring cracks on the particle-matrix interface and cavities filled with residual particles, as shown in Figure 20f.
The final formation process of the scratched groove, under a scratch depth of 0.0385 to 0.0764 mm, is shown in Figure 21. Particle-particle collision, brittle fracturing of SiC particles and deteriorating deformation of the Al matrix become more serious and more common. It is worth noting that the cracks on the particle-matrix interface evolve into large-scale lateral cracks, which are one of the major surface defects (see Figure 21d,e). The evolution process is as follows: the broken particles are pushed ahead by the indenter, and cracks (debonding) on the particle-matrix interface then occur due to the non-uniform deformation between the SiC particles and the matrix. The cracks grow with the continuous transport of particles while the surrounding matrix material is torn apart, and lastly, the cracks connect to form the lateral crack on the scratched surface. The scratched surface quality deteriorates further as the scratch depth increases. In addition, the indenter tends to press the particles into the scratched surface, which induces severe deformation of the Al matrix surrounding the particles. Based on the observations of the final scratched surface (see Figure 21e), most of the defects on the scratched surface occur in or around the SiC particles, such as lateral cracks, cavities filled with residual particles, particle fragments remaining in the Al matrix and particle-matrix interfacial debonding.
So the removal mode of the SiC particles plays an important role in the scratched surface formation, and it is tied to the scratch depth; namely, the removal mode of SiC particles is almost entirely ductile removal under a small scratch depth, but brittle fracturing under a large scratch depth.

Experimental Verification

The scratched surface microstructures of the single-grit scratch experiments performed on the MFT-4000 Scratch Tester were observed by the ZEISS Ultra Plus Field Emission Scanning Electron Microscope (SEM), as shown in Figure 22. Figure 22a presents the whole scratched groove. Figure 22b shows that the initial scratched surface, under a scratch depth of 0 to 0.011 mm, is considerably smooth and exhibits very few defects; the SiC particles are primarily removed in ductile mode. As shown in Figure 22c, the final scratched surface quality under the scratch depth of 0.011 to 0.0385 mm becomes very poor and coarse, and various surface defects are observed on it, i.e., lateral cracks, small SiC fragments pushed ahead and then pressed into the Al matrix, cavities filled with residual broken particles, fragmented particles remaining in the matrix and interfacial debonding. It is worth noting that lateral cracks are one of the primary defects. In order to analyze the formation mechanism of the lateral cracks, a certain zone of the final scratched surface is magnified in Figure 22d, where the SiC particles are marked with red dots. The lateral cracks initiate at several particle-matrix interfacial debonding sites (micro-cracks) and then grow through the matrix as the indenter advances; these interfacial micro-cracks ultimately link together to evolve into large-scale lateral cracks. In the final scratch stage, most of the particles are broken into small and/or large pieces, which induce various surface defects after some pieces are removed from the workpiece, leaving many cavities filled with residual broken particles, or fragmented particles remaining in the matrix. In addition, removed small SiC fragments are occasionally pushed ahead under the indenter tip and then pressed into the matrix. These phenomena can be observed in Figure 22e.
Obviously, in Figure 22, particle-matrix interfacial debonding on the final scratched surface is almost universal, but one kind of defect, namely, cavities without any residual SiC fragments due to the complete pushing out of a SiC particle, barely occurs on the single-grit scratched surface. Nevertheless, that is one of the major defects in turning and milling of a SiCp/Al surface [9,12]. The phenomenon is attributed to the difference between the grit, with a negative rake, and the turning (milling) tool, with a positive rake. (Figure 22 legend: I, the SiC particle marked with a red dot; II, small SiC fragments pushed ahead and pressed into the matrix; III, a cavity filled with residual particles; IV, fragmented particles remaining in the matrix.)

Conclusions

Based on a more accurate single-grit scratch model with a more realistic 3D micro-structure, particle-matrix interfacial behaviors, particle-particle contact behaviors, particle-matrix contact behaviors and the Johnson-Holmquist-Beissel (JHB) model of SiC, the material removal process and the surface defect formation mechanism of the 50 vol% SiCp/Al composite at various scratch depths were investigated. SEM images of the scratched groove obtained from a corresponding experiment provide good verification. The numerical and experimental studies allow us to draw the following conclusions: (1) The scratch depth plays a crucial role in the material removal process. SiC particles are primarily removed in ductile mode under a small scratch depth, ranging from 0 to 0.011 mm, and brittle fracturing of SiC particles then becomes more evident with an increase of the scratch depth, eventually exhibiting itself as the primary removal mode under a large scratch depth, ranging from 0.0385 to 0.0764 mm. The above phenomenon is attributed to the transport of SiC particles within the Al matrix. The small-scale transport of SiC particles induced by a small scratch depth barely results in particle-particle collision; in this case, the SiC particles are mainly sustained by the Al matrix, which provides a flexible support that is beneficial to the ductile removal of SiC particles. The increase of SiC particle transport with scratch depth raises the occurrence of particle-particle collision, which provides a hard support and a shock for the scratched particles; therefore, brittle fracturing gradually becomes the major removal mode of SiC particles as the scratch depth increases. The Al matrix is removed in ductile mode during the whole scratch process. (2) The removal mode of the SiC particles plays a significant role in the formation of the scratched surface.
If ductile removal of SiC particles is predominant, the scratched surface is considerably smooth and exhibits very few defects, whereas if brittle fracturing of SiC particles occurs more prevalently, the deteriorated and coarse surface becomes more significant, and various surface defects are observed on it, i.e., particle-matrix interfacial debonding, lateral cracks, small SiC fragments pushed ahead and then pressed into the matrix, cavities filled with residual broken particles and fragmented particles remaining in the matrix. (3) The numerical and experimental analyses both reveal that lateral cracks are one of the primary surface defects, which were barely referred to in the previous simulation literature. The formation mechanism of the lateral cracks is as follows: the lateral cracks initiate at several particle-matrix interfacial debonding sites (micro-cracks) and then grow through the matrix as the indenter advances; these interfacial micro-cracks ultimately link together to evolve into large-scale lateral cracks. The formation process simulation of the lateral cracks was performed successfully in this study. (4) One defect, cavities without any residual SiC fragments due to the complete pushing out of a SiC particle, barely occurs on the single-grit scratched surface, while it is one of the major defects in turning and milling of a SiCp/Al surface [9,12]; this is attributed to the difference between the grit, with a negative rake, and the turning (milling) tool, with a positive rake. This indicates that grinding is more beneficial than turning and milling to improving the processed surface quality of PRMMCs. (5) The von Mises stress distribution shows that the SiC particles bear the vast majority of the load; thus, they present greater stress than the surrounding Al matrix, and they impede stress diffusion within the Al matrix. (6) The SEM images of the scratched surface obtained from the single-grit scratch experiments verify the numerical analysis. Given the importance of the scratch depth for SiC particle removal and surface quality, it can be suggested that a relatively small scratch depth be applied to improve the surface quality.
2020-02-13T09:22:06.294Z
2020-02-01T00:00:00.000
{ "year": 2020, "sha1": "44cc80b926294eb859771a136355ecb5f96aba45", "oa_license": "CCBY", "oa_url": "https://res.mdpi.com/d_attachment/materials/materials-13-00796/article_deploy/materials-13-00796.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "aa5216eddeed17d0685ca14c557f9e9be55fb373", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Medicine", "Materials Science" ] }
226192087
pes2o/s2orc
v3-fos-license
Renal involvement in patients with COVID-19 Individuals who are infected with the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) and develop coronavirus disease 2019 (COVID-19) are also at risk of developing acute kidney injury (AKI) (1), although the exact incidence of AKI in the context of COVID-19 is unknown. A recently published study evaluated 41,000 COVID-19 cases in China and found that the prevalence of AKI among those patients was 1.6% (2). Another study, conducted at a teaching hospital in the Chinese city of Wuhan, assessed 701 patients with COVID-19 and reported that the overall prevalence of COVID-19-induced AKI was 3.2% (3). Another interesting nephrological finding was that, at admission, 43.9% of the patients had proteinuria and 26.7% had hematuria. The authors also found that the risk of in-hospital death was significantly higher in patients with kidney disease than in those without such disease. In a single-center retrospective study conducted in China, data from 333 hospitalized patients with COVID-19 pneumonia showed that 75.4% had abnormal urine dipstick test results or AKI (4). The authors found that overall mortality was higher among patients with renal involvement than among those with no renal involvement (11.2% vs. 1.2%). Another single-center retrospective study conducted in China showed that AKI occurred in 15 (29%) of the 52 evaluated critically ill adult patients (5). In a multicenter study, Zhou et al. evaluated 191 adult inpatients with COVID-19 and reported that 28 (15%) had AKI (6). The Italian National Institute of Health reported that within a population of 2,000 patients infected with SARS-CoV-2, the prevalence of AKI was 27.8% (7). In a large cohort of COVID-19 subjects (over 2,000 patients) in New York, 22.2% had AKI and 3.2% required kidney replacement therapy (8). The mechanisms underlying kidney injury in patients infected with coronaviruses have been studied since the SARS-CoV epidemic. In a study conducted in Hong Kong in 2003, seven patients with SARS who developed AKI and later died underwent postmortem histopathological analysis; significant acute tubular necrosis was found in all cases but no viral particles were found in the renal tissue (9). The authors concluded that the mechanism of AKI in those patients was probably multifactorial, including antibiotic-related interstitial nephritis, acute tubular necrosis due to ischemia, and sepsis due to secondary infections. It is important to note that the study did not exclude the possibility of direct renal injury by the coronavirus. The results could have been influenced by the use of polymerase chain reaction to detect viral particles in the urine of the patients and by the fact that the ultrastructural analysis was impaired by the autolysis of renal parenchyma cells. In another study conducted in Wuhan, Diao et al.
performed histological and immunohistochemical analyses to identify the SARS-CoV-2 nucleocapsid antigen, markers of cellular immunity, and complements in renal tissues collected postmortem from six patients diagnosed with COVID-19 and AKI and with unfavorable outcomes (10). In addition to the extensive acute tubular necrosis previously observed in patients with SARS-CoV infection, the authors identified the accumulation of the nucleocapsid protein antigen in the tubules, infiltration of CD68+ macrophages, and deposition of C5b-9, indicating that the virus is capable of infecting renal tubular cells, causing direct and indirect injury through the activity of macrophages and the complement system. In an autopsy study of a patient with COVID-19 and AKI, Farkash et al. identified intracellular viral arrays in the proximal tubular epithelial cells on electron microscopy, findings that are consistent with direct infection of the kidney with SARS-CoV-2 (11). Su et al. also identified coronavirus-like particles with distinctive spikes in the cytoplasm of the proximal and distal tubular epithelium as well as in the podocytes (12). The mechanism underlying AKI in COVID-19 includes a maladaptive response of the immune system that induces a cytokine storm and could be responsible for systemic inflammatory response syndrome-induced AKI (13). Patients with COVID-19 in the intensive care unit (ICU) occasionally require mechanical ventilation, vasopressor drugs, and nephrotoxic drugs, all of which can aggravate AKI. It is well known that COVID-19 can induce thrombotic events. In the postmortem histopathological analysis of kidney tissue in patients who died from COVID-19, Su et al. observed erythrocyte aggregates obstructing peritubular capillaries and segmental fibrin thrombi in the glomeruli (12). Rhabdomyolysis is a common finding in COVID-19. It can lead to elevated serum creatine phosphokinase levels (more than five times the upper limit of normal) and could be another important factor for the development of AKI (4,14). Furthermore, hemosiderin granules have been observed in the lumina of tubular cells in patients with COVID-19 (12). Angiotensin-converting enzyme 2 (ACE 2) and members of the serine protease family, which are essential for the virus to bind to host cells, are highly expressed in podocytes and tubular epithelial cells. Therefore, COVID-19 can also cause hematuria and proteinuria, further supporting the idea that SARS-CoV-2 shows tropism for the kidney (12,15). ACE 2 receptors have been found to play an important role in the development of severe acute respiratory syndrome caused by SARS-CoV-2, and thus concerns about the use of renin-angiotensin system (RAS) blockers for COVID-19 patients have been raised. Two important studies, in Milan and New York, showed no evidence of an increased risk of severe COVID-19 with the use of RAS blockers (16,17). An experimental study also demonstrated that RAS blockers did not result in an increase in ACE 2 levels in kidney and lung epithelia (18). There is currently no specific treatment for infection with SARS-CoV-2. Although various drugs are being investigated in clinical trials, the management of COVID-19 continues to be mainly supportive, and a significant number of patients require ICU admission (15). For patients with COVID-19-induced AKI, renal replacement therapy might be necessary (19). Among ICU patients with COVID-19-induced AKI in Seattle, Washington, 5% required dialysis, typically 2 weeks after the onset of symptoms (20).
In a recent meta-analysis of three studies evaluating ICU patients with COVID-19-induced AKI, the proportion of patients requiring renal replacement therapy ranged from 5.6% to 23.1%, with a pooled incidence of 13% (21). There is still insufficient scientific evidence to support the superiority of one method of dialysis over any other (13). We recommend selecting the type of dialysis according to the severity of the disease, considering the availability of resources and experts in the health care facility in question. In the months of May and June 2020, we performed 704 and 1,235 hemodialysis sessions, respectively, for ICU patients at our facility. Approximately 27% of the sessions were of the continuous venovenous type (hemodialysis, hemodiafiltration, or hemofiltration). The choice between continuous and intermittent dialysis depends on the traditional indications (hemodynamic stability, cerebral edema, or the need to remove fluid overload). We are still learning about renal abnormalities in COVID-19 patients. We hope that we will soon be able to identify the best treatment and develop an effective vaccine for this devastating disease. AUTHOR CONTRIBUTIONS Arantes MF was responsible for the editorial idea and conception, manuscript drafting and final approval of the version to be published. Rodrigues CE, Seabra VF, Reichert BV, Sales GTM, Smolentzov I and Cabrera CPS were responsible for critical review of the manuscript, intellectual content, and final approval of the version to be published. Andrade L was responsible for mentoring, critical review, and final approval of the version to be published.
2020-10-31T05:07:00.701Z
2020-10-21T00:00:00.000
{ "year": 2020, "sha1": "dcfe9f35363b17bacf6f10352cef62113a38028b", "oa_license": "CCBY", "oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7561068/pdf/cln-75-2194.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "dcfe9f35363b17bacf6f10352cef62113a38028b", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Psychology", "Medicine" ] }
119102585
pes2o/s2orc
v3-fos-license
Thermodynamic Properties of Strongly Interacting Matter in a Finite Volume using the Polyakov-Nambu-Jona-Lasinio model We present the thermodynamic properties of strongly interacting matter in a finite volume in the framework of the Polyakov loop enhanced Nambu-Jona-Lasinio model within the mean field approximation. We considered both 2 flavor and 2+1 flavor matter. Our primary observation was a qualitative change in the phase transition properties that resulted in a lowering of the temperature corresponding to the critical end point. This would make it favorable for detection in heavy-ion experiments that intend to create high density matter at comparatively low temperatures. We further demonstrate the possibility of obtaining chiral symmetry restoration even within the confined phase in finite volumes. I. INTRODUCTION Strongly interacting matter is expected to have a rich phase structure at finite temperatures and densities [1]. While our Universe at the present epoch contains a significant fraction of color singlet hadrons, color non-singlet states, especially quarks and gluons, may have been prevalent in the past − a few microseconds after the Big Bang [2]. The temperature of the Universe at that epoch is estimated to be ∼ 200 MeV. A similar state of matter is also expected to exist inside the core of super-massive stars in the present day Universe, where the density is ∼ 10 times that of normal nuclear matter. A direct study of such natural phenomena is out of reach even for modern astrophysicists. Fortunately, experimental facilities at CERN (France/Switzerland), BNL (USA) and recently at GSI (Germany) are exploring the possibilities of creating and studying the properties of such exotic states of matter in a controlled environment. The key differences that appear in such experiments as compared to the natural phenomena are the lifetime of the matter created in the exotic state and its volume. Whereas in the natural phenomena the lifetime of the exotic matter may be large compared to the interaction time-scale, it is usually very small in an experimental situation. Some effects of the enhanced lifetime on the physical aspects of the system relating to the onset of equilibrium of weak interactions were discussed by us in Ref. [3]. Here we shall discuss the effects of finite volume on the properties of strongly interacting matter. In the following we shall generically define the matter with color confined states as the hadronic phase and the exotic state with colored degrees of freedom as the quark-gluon plasma (QGP). In the experiments this exotic phase may be produced by ultra-relativistic collisions of heavy ions. The volume of the system thus created would depend on the nature of the colliding nuclei, the center of mass energy (√s) and the centrality of collision. Once created, the system expands until the constituents are so far separated that their interaction ceases and they flow out as free streaming particles. The distribution of particles thus freezes out, except for some further decays to smaller particles. There have been a large number of efforts to estimate the system size at freeze-out for different √s and different centralities. The most popular way of doing so is to measure the Hanbury-Brown-Twiss radii (see e.g. [4,5] for reviews). In Ref. [6] it has been shown that the freeze-out volume increases as √s increases; there the authors estimated the freeze-out volume and found it to vary from 2000 fm^3 to 3000 fm^3.
In a very recent paper [7] the volume of homogeneity has been calculated using UrQMD model [8] and compared with the experimentally available results. The √ s considered was in the range of 62.4 GeV to 2760 GeV for lead-lead collisions at different centralities. The system volume has been found to vary from 50 f m 3 to 250 f m 3 . Given that these are the freeze-out volumes, one can trace back to the initial equilibration time and expect an even smaller system size. In fact one cannot even consider the whole fireball, which is an isolated system to be in thermodynamic equilibrium. One has to choose a proper rapidity interval to act as the system under consideration. Therefore it becomes important to study how the various thermodynamic quantities in a strongly interacting matter depend on the volume of the system. Specifically we know that finite system sizes would lead to smoothening of any singularities appearing at a phase transition [9]. Thus important signatures of such transitions must be reanalyzed with the help of finite size scaling analysis [10]. In the context of heavy ion collisions such a possible analysis has been discussed in the literature (see e.g. [11][12][13]). On the theoretical side a study of finite volume effects was done in Ref. [14] with a bag of non-interacting quarks and gluons and it was found that the effective degrees of freedom are reduced. In Ref. [15] a two model equation of state was used to show that the separation between the hadronic and QGP phases around the critical temperature looses its sharpness resulting in a soft effective equation of state. A few first principle study of pure gluon theory on space-time lattices were performed, showing the possibility of significant finite size effects [16,17]. Similar studies are going on in various QCD inspired models. In Ref. [18,19] the quark mass gap equation has been studied with Schwinger-Dyson equation parallel to equivalent Lattice QCD (LQCD) calculations and various meson properties are found to have significant volume dependence. In the context of chiral perturbation theory the implications of finite system size have been discussed [20,21]. Then there are studies with four-fermi type interactions, like the Nambu−Jona-Lasinio (NJL) [22] models [11,23,24], linear sigma models [12,25,26] and Gross-Neveu models [27]. While in Ref. [25] the scaling behavior of chiral phase transition for finite and infinite volumes has been studied, the character of phase diagram has been studied in Ref. [11,12,26,27]. In refs. [23] and [24] the authors have studied the chiral properties as a function of the radius of a finite droplet of quark matter. The stability of such a droplet in the context of strangelet formation within the NJL model has been addressed in Ref. [28]. Size dependent effects of difermion states within 2-dimensional NJL model has been studied in Ref. [29] and that of magnetic field is discussed in Ref. [30]. Recently in a 1+1 dimensional NJL model the induction of charged pion condensation phenomenon in dense baryonic matter due to finite volume effects have been studied in [31]. In this work we shall use the Polyakov loop enhanced NJL (PNJL) model to study the thermodynamic properties of the strongly interacting matter in a finite volume. This model originated from the NJL model [32][33][34] which incorporates the global symmetries of QCD quite nicely. 
A four quark interaction term in the NJL Lagrangian is able to generate the physics of spontaneous breaking of chiral symmetry − a property of QCD which is manifested as the non-degenerate chiral partners of the low-mass hadrons. However a reasonable description of the physics of color confinement is missing. With the introduction of a background field in the NJL model, motivated by the dynamics of the Polyakov Loop [35], one obtains the PNJL model which describes a number of features of confinement physics quite satisfactorily (see e.g. [36][37][38][39][40][41][42][43][44]). Certain aspects of finite volume effects in the PNJL model has been discussed in Ref. [45] through a coarse graining of the Lagrangian, followed by a Monte Carlo simulation. This method goes on similar lines as the numerical studies of LQCD. Normally this would involve the same kind of complex determinant problem that has plagued the direct LQCD computations for non-zero baryon number densities. So it may be desirable to keep using the saddle point approximation in PNJL model to study the finite volume effects. Here we make the first case study, albeit with some simplified assumptions towards that direction. We organize our paper as follows. In the next section we briefly describe the PNJL model and the modifications for finite volume. In section III we describe phase transition at finite volume and in section IV we discuss the thermodynamic properties. The pion and sigma meson masses and the pion decay constant at finite volume have been discussed in section V. In section VI we summarize and conclude. II. THE PNJL MODEL We shall consider the PNJL model with light flavors (2 flavor) and light plus strange flavors (2+1 flavor). In the PNJL model the gluon physics comes into play through the chiral point couplings between quarks (present in the NJL part) and a background field which represents Polyakov Loop dynamics. The Polyakov line is represented as, where A 4 = iA 0 is the temporal component of Eucledian gauge field (Ā, A 4 ), β = 1 T , and P denotes path ordering. L(x) transforms as a field with charge one under global Z(3) symmetry. The Polyakov loop is then given by Φ = (T r c L)/N c , and its conjugate by,Φ = (T r c L † )/N c . The gluon dynamics can be described as an effective theory of the Polyakov loops. The Polyakov loop potential can be expressed as, where U(φ) is a Landau-Ginsburg type potential commensurate with the Z(3) global symmetry. Here we choose a form given in Ref. [37], where For the quarks we shall use the usual form of the NJL model except for the substitution of a covariant derivative containing a background temporal gauge field. Thus the 2 flavor version of PNJL model is described by the Lagrangian, For 2+1 flavor the Lagrangian may be written as, where f denotes the flavors u or d or s respectively. The matrices P L,R = (1 ± γ 5 )/2 are respectively the lefthanded and right-handed chiral projectors, and the other terms have their usual meaning, described in details in refs. [39,41,43,[46][47][48]. This NJL part of the theory is analogous to the BCS theory of superconductor, where the pairing of two electrons leads to the condensation causing a gap in the energy spectrum. Similarly in the chiral limit, NJL model exhibits dynamical breaking of SU (N f ) L × SU (N f ) R symmetry to SU (N f ) V symmetry (N f being the number of flavors). As a result the composite operatorsψ f ψ f pick up nonzero vacuum expectation values. 
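Since the explicit form of U is not reproduced above, a minimal numerical sketch of one commonly used Z(3)-symmetric polynomial parametrization of the Polyakov loop potential is given below. It is not necessarily the exact form adopted from Ref. [37]; the coefficients a0-a3, b3, b4 and the scale T0 are placeholder values quoted from the literature and would have to be replaced by the actual parameter set of the paper.

import numpy as np

# Placeholder coefficients of a polynomial Polyakov-loop potential (assumed values;
# the numbers actually used in the paper must be taken from its Ref. [37]).
A0, A1, A2, A3 = 6.75, -1.95, 2.625, -7.44
B3, B4 = 0.75, 7.5
T0 = 0.19  # GeV; deconfinement scale of the pure-gauge sector (assumption)

def b2(T):
    """Temperature-dependent coefficient b2(T) of the Z(3)-symmetric potential."""
    x = T0 / T
    return A0 + A1 * x + A2 * x**2 + A3 * x**3

def U_polyakov(phi, phibar, T):
    """Polynomial Polyakov-loop potential in GeV^4:
    U/T^4 = -b2(T)/2 * Phibar*Phi - b3/6 * (Phi^3 + Phibar^3) + b4/4 * (Phibar*Phi)^2."""
    u_over_T4 = (-0.5 * b2(T) * phibar * phi
                 - (B3 / 6.0) * (phi**3 + phibar**3)
                 + (B4 / 4.0) * (phibar * phi) ** 2)
    return u_over_T4 * T**4

# Example: scan the potential at zero chemical potential, where Phi = Phibar is real.
T = 0.20  # GeV
phis = np.linspace(0.0, 1.2, 241)
u = U_polyakov(phis, phis, T)
print("Phi minimizing U at T = %.2f GeV: %.3f" % (T, phis[np.argmin(u)]))

With the gauge sector parametrized in some such way, the remaining mean fields to be determined are the chiral condensates of the NJL sector.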
The quark condensate is given as, where trace is over color and spin states. The self-consistent gap equation for the constituent quark masses are, where σ f = ψ f ψ f denotes chiral condensate of the quark with flavor f . Here if we consider The expression for σ f at zero temperature (T = 0) and chemical potential (µ f = 0) may be written as [42], Λ being the three-momentum cut-off. This cut-off have been used to regulate the model because it contains dimensionful couplings rendering the model to be non-renormalizable. Due to the dynamical breaking of chiral symmetry, N 2 f − 1 Goldstone bosons appear. These are the pions and kaons whose masses, decay widths etc. from experimental observations are utilized to fix the NJL model parameters. The parameter values have been listed in table I. Here we consider the Φ,Φ and σ f fields in the mean field approximation (MFA) where the mean field are obtained by simultaneously solving the respective saddle point equations. Now that the PNJL model is described for infinite volumes we discuss how we implement the finite volume constraints. Ideally one should choose the proper boundary conditions − periodic for bosons and anti-periodic for fermions. This would lead to a infinite sum over discrete momentum values p i = πn i /R, where i = x, y, z and n i are all positive integers and R is the lateral size of a cubic volume. This implies a lower momentum cut-off p min = π/R = λ (say). One should also incorporate proper effects of surface and curvatures. In this first case study we shall however take up a number of simplifications listed below: (i) We shall neglect surface and curvature effects. (ii) The infinite sum will be considered as an integration over a continuous variation of momentum albeit with the lower cut-off. (iii) We shall not use any modifications to the mean-field parameters due to finite size effects. Our philosophy had been to hold the known physics at zero T , zero µ and infinite V fixed. That means we treat V as a thermodynamic variable in the same footing as T and µ. Therefore any variation due to change in either of these thermodynamic parameters were translated into the changes in the effective fields of σ f , Φ etc. and through them to the meson spectra. The values of meson masses and decay constants used to fix the model parameters were thus naturally expected to be the values strictly at T = 0 and µ = 0 and V = ∞. Thus the Polyakov loop potential as well as the mean-field part of the NJL model would remain unchanged. They shall feel the effect of changing volume only implicitly through the saddle point equations. III. PHASE TRANSITION To study the finite volume effects on the thermodynamic properties of strongly interacting matter we begin by writing down the thermodynamic potential in MFA. The expression is given by, where ω n = πT (2n + 1) are Matsubara frequencies for fermions. The inverse quark propagator is given in momentum space by using the identity T r ln (X) = ln det (X), we get, where E p f = p 2 + M 2 f is the single quasiparticle energy. In the last line Ω contains all the terms of Ω ′ except the Vandermonde term. We now search for the saddle point of the thermodynamic potential which gives the temperature and density dependence of the fields. For all the system sizes, at zero baryon density, we found that the order parameters for both chiral (σ =<ūu > + <dd >) and deconfinement (Φ) transition smoothly passes from the hadronic phase to the quark phase. 
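Before turning to the crossover analysis, the finite-volume prescription described above — continuous momentum integrals restricted to λ = π/R ≤ p ≤ Λ — can be illustrated with a small sketch of the zero-temperature, zero-chemical-potential gap equation. The coupling, current mass and cutoff below are illustrative placeholders rather than the parameter set of Table I, and the gap equation is written in one common convention whose prefactor should be checked against the conventions of the model.

import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Placeholder model parameters (illustrative only; the actual values are fixed in Table I).
G      = 5.0     # scalar coupling in GeV^-2 (assumption)
m0     = 0.0055  # current quark mass in GeV (assumption)
LAMBDA = 0.65    # three-momentum UV cutoff in GeV (assumption)
NC     = 3       # number of colors

def condensate_per_flavor(M, lam):
    """sigma_f = <qbar q> at T = mu = 0 with IR cutoff lam = pi/R and UV cutoff Lambda."""
    integrand = lambda p: p**2 * M / np.sqrt(p**2 + M**2)
    val, _ = quad(integrand, lam, LAMBDA)
    return -2.0 * NC * val / (2.0 * np.pi**2)

def gap_residual(M, lam):
    """Residual of one common form of the two-flavor gap equation,
    M = m0 - 2 G (sigma_u + sigma_d); the exact prefactor is convention dependent."""
    sigma_sum = 2.0 * condensate_per_flavor(M, lam)
    return M - (m0 - 2.0 * G * sigma_sum)

for R in (np.inf, 5.0, 2.0):                              # system size in fm
    lam = 0.0 if np.isinf(R) else np.pi / (R * 5.068)     # pi/R in GeV (1 GeV ~ 5.068 fm^-1)
    M_sol = brentq(gap_residual, 1e-4, 1.0, args=(lam,))
    print(f"R = {R} fm  ->  constituent mass M = {M_sol * 1e3:.1f} MeV")

In the full calculation the same saddle-point structure is solved at each temperature and chemical potential, and the smooth variation of σ and Φ with temperature found in this way at zero density is what was noted above.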
This indicates that the system does not have a real phase transition, rather there is a smooth crossover. The crossover temperature is identified to be the point of inflection of σ u and Φ with temperature. In Fig. 1 we have plotted dΦ/dT and dσ u /dT for 2 flavor and 2+1 flavor matter for different system sizes. The peak position of these plots give respective inflection points. Note that, the deconfinement and chiral transitions do not take place exactly at the same temperature. Here we take the average of these two temperatures as T c . The results are shown in table II, where we quote the different values of the crossover temperatures corresponding to different system sizes. From table II it can be seen that the T c has a strong dependence on system size. For 2 flavors the T c varies from 167 MeV to 186 MeV which means a change of about 10%. A similar result is observed for 2+1 flavor. One should note that the shift in the T c is mainly due to the shift in the transition temperature of the chiral phase transition. The transition temperature of the deconfining phase transition almost does not change. This result is similar to that obtained with PNJL model on the lattice [45]. This is somewhat expected as the Polyakov loop potential feels the effect of changing volume only indirectly through the fields Φ andΦ. In Fig. 2 we have plotted the temperature dependence of the constituent quark masses for both 2 flavors and 2+1 flavors. Below the crossover temperature they exhibit very strong volume dependence. Smaller the volume, smaller is the constituent mass. In the 2+1 flavor case, the masses of the light flavors drop faster than the strange quark. It thus seems that the chiral symmetry is gradually getting restored as one looks into smaller and smaller volumes. This is also the reason why the T c itself is lowered for smaller volumes given in table II. Similar feature has also been observed in NJL models [23,24]. Given that the quark condensation is similar to the superconducting condensate it is interesting to note that there are in fact certain superconductors which show similar decrease of band gap with the system size [49]. Let us now take a look into the situation at non-zero quark chemical potential µ q = f µ f /N f . For infinite volume the phase transition is of first order and one observes a gap in the order parameter at sufficiently high chemical potential. At some smaller µ q , the first order transition ends at a critical end point (CEP). At this point the system undergoes a second order transition. At even smaller µ q we have only a crossover. As the volume of the system is lowered we find the phase transition characteristics fade away. Even the crossover characteristics start to die down. This is clear from the Fig. 3 where we plot dσ u /dT and dΦ/dT for µ q = 300 MeV as a function of temperature. In Fig. 4 the phase diagram as a function of system size is shown. Note that the CEP gradually shifts towards higher µ q and lower T and finally disappears as the volume is reduced. This is an encouraging fact for the critical point search in heavy-ion collision experiments. To attain such high densities one needs to collide the ions at low √ s, which means the temperature attained is lower. So if it were an infinite system one would have been far away from the CEP. Fortunately the experiments would produce small system volumes and this may lead to the location of the respective CEP possible. 
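The crossover temperatures quoted in table II follow from the inflection-point criterion described above. A minimal sketch of that step — locating the peak of dσ/dT from tabulated order-parameter data — is given below with a synthetic σ(T); in the actual calculation σ_u(T) and Φ(T) come from the saddle-point equations at each temperature.

import numpy as np

# Synthetic order-parameter data: a smooth crossover centred near 0.175 GeV (illustrative).
T = np.linspace(0.10, 0.30, 401)                       # GeV
sigma = 0.5 * (1.0 - np.tanh((T - 0.175) / 0.012))     # normalized, illustrative shape

dsigma_dT = np.gradient(sigma, T)       # numerical derivative d sigma / dT
T_c = T[np.argmax(np.abs(dsigma_dT))]   # inflection point = extremum of the derivative
print(f"crossover temperature T_c ~ {T_c * 1e3:.1f} MeV")

Repeating this scan at non-zero µ_q maps out the crossover line, the first-order line and the CEP locations discussed above.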
Thereafter one would need to do the finite size scaling analysis to extrapolate to the CEP for infinite volumes. The location of CEP for different volumes is collected in table III. IV. THERMODYNAMICS In this section we discuss the behavior of a few thermodynamic observables namely pressure, energy density, specific heat, speed of sound etc. for different system sizes. The pressure inside a volume V may be written as, P (T, µ q ) = − ∂(Ω(T,µq)V ) ∂V , where T is the temperature and µ q is the quark chemical potential. In the top left panel of Fig. 5 we plot the temperature dependence of scaled pressure (P/T 4 ) for 2 flavor system. As can be seen there is a significant change in scaled pressure for small system sizes. For example at T c the P/T 4 for a system with R = 2 f m is almost half of that of an infinite system. As the temperature increases the difference slowly diminishes. The decrease of scaled pressure with increasing volume may be a surprise given that the constituent quark masses were shown to decrease drastically with decreasing volume, which should imply increase in pressure. This can be understood as follows. With decreasing volume, not only the constituent masses decrease, but also the lowest momentum increases due to the infrared cut-off. These two conditions somehow seem to keep the lowest available energy of the quark quasi-particles almost same for different volumes. Thus the pressure does not increase with decreasing volume. However when plotted against T /T c it seems to decrease because the T c itself is smaller for smaller volumes, and therefore the pressure at the corresponding T /T c for smaller volume is smaller than that for a larger volume. The volume dependence is also quite strong for the energy density ǫ = −T 2 ∂(Ω/T ) + Ω. In the top right panel of the Fig. 5 we have plotted the ǫ/T 4 as a function of T /T c for different system sizes. It has similar characteristics as P/T 4 but the difference seems to diminish faster with increasing temperature. As the system size becomes R = 5 f m both the scaled pressure and scaled energy density converge to the R → ∞ case for almost all temperatures. It is well known that for infinite volumes the definition of pressure simplifies to, P (T, µ q ) = −Ω(T, µ q ), which is commonly used in the literature for PNJL models at infinite volumes. However since we are considering finite volumes here it would be interesting to check how much difference will it make if we keep using this definition rather than the correct one with a volume derivative. In the bottom two panels of Fig. 5 we have made a comparison of −Ω/T 4 and P/T 4 . For R = 2 f m we see that these two quantities differ by about 10%. Again, as the size goes close to R = 5 f m this difference is almost washed out. Let us now consider the quantity ǫ − 3P . In our mean field approach this is the trace of the energy-momentum tensor given by, T µµ = ǫ − 3P . In a conformaly symmetric theory, for example a theory of free massless quarks and gluons the energy momentum tensor is supposed to be zero as it signifies the conservation of the conformal currents. Thus ǫ = 3P in that limit. In QCD however the conformal symmetry is broken due to non-zero quark masses as well as quantum anomalies as evident from the presence of a scale in the running coupling constant [50,51]. Thus the energy-momentum tensor does not remain traceless. This was also found to be true in the PNJL model that have been reported in our earlier studies and compared with LQCD results [39,41,46]. 
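A sketch of how the bulk observables above can be extracted numerically from a tabulated thermodynamic potential is given below. The function Omega(T, V) is a simple stand-in rather than the PNJL potential, the state point and step sizes are arbitrary, and the relations are evaluated at µ_q = 0; the point is the distinction between P = −∂(ΩV)/∂V and the infinite-volume shortcut P = −Ω, and the construction of ε, the conformal measure, and the specific heat and squared speed of sound used in the following.

def Omega(T, V):
    """Stand-in for the mean-field thermodynamic potential density Omega(T, V) in GeV^4.
    In the actual calculation this is the PNJL potential at the saddle point; the weak,
    ad hoc volume and mass-like terms used here are purely illustrative."""
    return -0.5 * T**4 * (1.0 - 0.05 / V) + 0.01 * T**2

def pressure(T, V, dV=1e-3):
    """P = -d(Omega * V)/dV via a central difference; for V -> infinity this
    reduces to the familiar infinite-volume relation P = -Omega."""
    f = lambda v: Omega(T, v) * v
    return -(f(V + dV) - f(V - dV)) / (2.0 * dV)

def energy_density(T, V, dT=1e-4):
    """epsilon = -T^2 d(Omega/T)/dT at mu_q = 0, again by a central difference."""
    g = lambda t: Omega(t, V) / t
    return -T**2 * (g(T + dT) - g(T - dT)) / (2.0 * dT)

T, V, dT = 0.20, 30.0, 1e-3              # GeV, fm^3 (illustrative state point)
P, eps = pressure(T, V), energy_density(T, V)
C = (eps - 3.0 * P) / eps                                                  # conformal measure
c_v = (energy_density(T + dT, V) - energy_density(T - dT, V)) / (2 * dT)   # specific heat
v_s2 = (pressure(T + dT, V) - pressure(T - dT, V)) / (2 * dT) / c_v        # dP/d(epsilon)
print(f"P/T^4 = {P / T**4:.3f}, eps/T^4 = {eps / T**4:.3f}, C = {C:.3f}, v_s^2 = {v_s2:.3f}")

As in the infinite-volume case, a non-vanishing trace ε − 3P signals the breaking of conformal symmetry.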
The PNJL model is however not QCD and the reason for the scale symmetry breaking is the introduction of an ultraviolet cut-off in the NJL part, a temperature scale T 0 in the Polyakov loop part and of course a quark mass term similar to that in QCD. The physical implication of the two different scales in the quark and Polyakov sector is to give rise to separate crossover temperatures for the two sectors. To compare quantities obtained in PNJL model against LQCD results one then averages out two crossover temperatures as done by us here in the last section. Now for finite system sizes we have introduced an infrared cutoff which should further enhance the effect of conformal symmetry breaking. In fig. 6 we show the variation of the conformal measure C = (ǫ − 3P )/ǫ with temperature for both 2 flavor and and 2+1 flavor matter for different system sizes. That the smaller system sizes lead to larger conformal symmetry breaking effects is evident, except for the anomalous behavior of the lowest size of R = 2 f m. The specific heat at constant volume C V = ∂ǫ ∂T V is shown in Fig. 7. We find that with the change in volume, C V changes prominently up to the temperature corresponding to the crossover region. For smaller volumes the specific heat is smaller indicating a higher rise in temperature for the same rise in energy density. Obviously this can be correlated with the temperature dependence of energy density discussed earlier in Fig. 5. We found that a given amount of scaled energy density is obtained at a higher scaled temperature for a smaller volume. This can be of interest in heavy-ion collision experiments. A given energy density deposited in the finite volume would create a plasma with temperature somewhat higher than that expected in a similar volume inside an infinite volume system having the same energy density. The specific heat is also a measure of energy fluctuations in the system [52]. Fluctuations tend to rise sharply near a phase transition. For a crossover they are somewhat subdued. Obviously for finite volumes a true phase transition is not possible and as one keeps on decreasing the volume all signatures even for a crossover should die down. This is exactly the behavior of C V as presented in Fig. 7. The squared speed of sound v 2 s = ∂P ∂ǫ is shown in Fig. 8. At large temperatures the v 2 s reaches its maximum value as the system becomes almost ideal. Interactions grow with decreasing temperatures resulting in the lowering of v 2 s . The conformal measure C may be considered as a measure of the strength of the interaction in the system. Thus lower the value of C, higher should be the value of v 2 s . This is evident from Fig. 8, where we find the v 2 s to decrease with decreasing temperature, just opposite to the behavior of C shown in Fig. 6. This correlation between C and v 2 s also apparent for variation in volume. With decreasing volume the speed of sound decreases. (In fact an anomalous behavior for the smallest size R = 2 f m is also apparent for v 2 s .) A smaller speed of sound for smaller volumes would mean a slower flow for finite size systems created in heavy-ion collisions. V. PROPERTIES OF NON-STRANGE MESONS For infinite volumes the meson properties in the PNJL model has been discussed for 2 flavors [53,54] as well as for 2+1 flavors [43,55]. In this section we describe the properties of non-strange mesons at finite volumes in the PNJL model. 
A detailed account of the calculational procedure for meson masses at finite temperatures and densities in the PNJL model may be found in Ref. [43]. Here we sketch the outline of the task. The collective excitations, the fluctuation of the mean field around the vacuum can be handled within the Random Phase Approximation (RPA) [56]. In this approximation, which is equivalent to summing over the ring diagrams, the retarded correlation function for a meson M is given by, Here G M is the suitable coupling constant and Π M (k 2 ) is the one-loop polarization function for the mesonic channel under consideration. Within the RPA, Π M may be written as, where S(p) is the Hartree quark propagator, Γ M is the appropriate combination of gamma matrices of different mesonic channels and the trace is taken over the Dirac and color spaces. The lower limit on the integration is now required for finite volume studies. Here we concentrate on the scalar (σ) and pseudoscalar (π) channels. These contributions can be written as, 4 Tr (S(p + q)S(q)) . The pole mass can be obtained by solving, where m M is the mass of a particular meson. The detailed expression for Π M and G M for π and σ mesons may be found in Ref. [43]. In the upper panels of Fig. 9 we have plotted the masses of pion (m π ) and sigma (m σ ) as a function of temperature for different system sizes. In any given volume we see that for low temperatures the masses of pion and sigma are different and they become degenerate above T c where chiral symmetry is expected to get restored. With decrease in volume we find the pion mass to increase. However above 1.2 T c the pion mass for infinite volume suddenly shoots up above those for the finite volumes. This may have important consequences in heavy-ion reactions where system size is small. Whereas for infinite volume the fast increasing mass of pion would drastically reduce chances of obtaining pion-like bound states, the same may not be true for finite volume. We note here that the increase of pion mass with decreasing volume has also been observed in computations with chiral perturbation theory [57] and renormalization group methods in quark-meson model [58]. While the mass of pion increases with decreasing volume at low temperatures, the mass of sigma is found to decrease quite fast. One can actually see a trend to the masses of the two chiral partners becoming closer to each other with decreasing volume. This, yet again, shows that chiral symmetry breaking effects reduce with decreasing volumes. The pion decay constant may be obtained from the matrix element 0|J a µ,5 |π b (k) = iδ ab f π k µ , where J a µ,5 = ψγ µ γ 5 τ a 2 ψ is the chiral current. At finite temperature and for a particular volume it can be written as (see [32,33] and including the low momentum cut-off λ), where E p = p 2 + M 2 u is the single particle energy of a light quark and f (E p ) is the distribution function properly modified due to Polyakov loop interaction. As shown in the lower panels of Fig. 9, the pion decay constant decreases both with the decrease in temperature and with that of system size. The decrease of f π with temperature has also been observed in other effective models [54,59], Dyson-Schwinger approaches [60] as well as in LQCD [61]. This is also an indication of the restoration of chiral symmetry as f π is directly proportional to the divergence of the chiral current. The tendency of chiral symmetry getting restored in finite volumes may also be noted by comparing Fig. 9 with Fig. 2. 
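The pole-mass step described above can be sketched as a one-dimensional root search. The polarization function used below is a toy stand-in with a quark-antiquark threshold, and the pole condition is written in the standard RPA form 1 − 2 G_M Π_M(k0 = m_M, k = 0) = 0, which is assumed here rather than quoted from the text; the coupling and mass values are placeholders.

from scipy.optimize import brentq

def Pi_ps(k0, M=0.35):
    """Toy stand-in for the one-loop pseudoscalar polarization function Pi_M(k0, k=0).
    The real function involves a regularized quark loop with Polyakov-loop-modified
    distribution functions and the infrared cutoff lambda = pi/R; this simple form
    with a threshold at 2M only serves to make the root finder runnable."""
    return 0.09 / (1.0 - (k0 / (2.0 * M)) ** 2)

def pole_condition(k0, G_M=5.0):
    """Assumed standard RPA pole condition: 1 - 2 G_M Pi_M(k0, k=0) = 0 at k0 = m_M."""
    return 1.0 - 2.0 * G_M * Pi_ps(k0)

m_meson = brentq(pole_condition, 0.01, 0.69)   # search below the 2M threshold (GeV)
print(f"illustrative meson pole mass: {m_meson * 1e3:.0f} MeV")

Carried out with the full polarization functions and the infrared cutoff λ, this procedure yields the volume dependence of m_π, m_σ and f_π summarized in Fig. 9.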
At low temperatures the constituent quark masses decrease with decreasing volume. It so happens that the light constituent quark masses become smaller than the pion mass for the smallest sizes studied here. These quarks should then become thermodynamically more favored than the pions. Though fortunately in the PNJL model, such constituent quarks will be suppressed due to the presence of the Polyakov loop, the pions would still loose their significance as the lightest particles that made them suitable candidates for becoming the Goldstone bosons. Thus what seems to happen is that the decrease of volume restores the spontaneous breaking of chiral symmetry in the same way as increase in temperature. The critical size R c for such symmetry restoration would be somewhere between 2 f m and 2.5 f m. From Fig. 2 it may be noted that this range of sizes is almost equal to the respective constituent quark masses. This observation is commensurate with the expectation from chiral perturbation theory that chiral symmetry restoration may take place once the quark masses become equal to the inverse of system size [62]. With all the strong indication of a possible chiral symmetry restoration with decreasing volume it would be interesting to see what happens to the Gell-Mann Oakes Renner (GMOR) [63] relation, which in the lowest order of chiral expansion is given by f 2 π m 2 π = m u σ u +m d σ d . Normally with increase in temperature as the spontaneously broken part of the chiral symmetry gets restored the GMOR relation should start to break down. This is exactly what we find in our calculations and shown in Fig. 10. But surprisingly we find that similar effect is not observed for the decrease in volume and the GMOR relation holds good for the all the ranges of volumes we considered almost up to temperatures as high as 0.8 T c . In fact even at higher temperatures the GMOR relation is violated the most for infinite volumes. The way one can understand this is that for a physical chiral expansion a quantity m π /M is required, where M is some suitable scale. In chiral perturbation theory M is usually the neucleon mass, in zero temperature NJL or PNJL models it is the high momentum cut-off Λ, etc. For finite temperatures one can then consider T to play the role of M. Thus, given a temperature if the corresponding m π in a given volume is less than T , chiral identities would work properly (see e.g. [64]). So here we have a situation where chiral symmetry is getting restored while partial conservation of axial current is still maintained. VI. CONCLUSION We have tried to understand the dynamics of strongly interacting matter inside finite volume in the framework of PNJL model with saddle point approximation. Several interesting results were observed that can have important implications for heavy-ion collision experiments. Our major finding was that the spontaneously broken chiral symmetry may be restored at much lower temperatures in small volume. This was shown through the computation of various thermodynamic observables as well as certain hadron properties. Changes in the equation of state and speed of sound may have important consequences in the flow properties of the exotic medium created in the experiments. A measure of the specific heat in heavy-ion experiments is the transverse momentum fluctuations. We find the specific heat to decrease with decreasing volume indicating that the momentum fluctuations may not be as large as expected at a given T /T c . 
From the variation of the phase boundary with changing volume we demonstrated a stronger possibility of finding the signatures of a critical end point in low energy experiments that intend to create high baryonic densities where the expected temperature is not too high. Finally from the hadron properties we observed the possibility of obtaining a chiral symmetric but confined phase in small volumes. We hope that a combination of heavy ion collisions and not-so-heavy ion collisions at similar center of mass energies, followed by an appropriate finite size scaling study may give us a better understanding of the QCD phase structure. As discussed earlier we made a couple of simplified assumptions in this work. The Polyakov loop potential used here does not have an explicit volume dependence. The discrete momentum states in the quark potential was replaced with a continuum, and the only explicit dependence on system size was through the lower momentum cut-off. Though we believe that these assumptions would not affect the gross features observed, we hope to address these issues in future. It would be highly desirable to have a concurrent study of finite size effects in Polyakov-Quark-Meson models [65] to further understand the systematics of model artifacts.
2013-04-16T08:48:28.000Z
2012-12-24T00:00:00.000
{ "year": 2012, "sha1": "1c4a0599352f7b5b6d049a0c2f5062bb47904472", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1212.5893", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "1c4a0599352f7b5b6d049a0c2f5062bb47904472", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
249051773
pes2o/s2orc
v3-fos-license
Proteome Analysis of Aflibercept Intervention in Experimental Central Retinal Vein Occlusion Aflibercept is a frequently used inhibitor of vascular endothelial growth factor (VEGF) in the treatment of macular edema following central retinal vein occlusion (CRVO). Retinal proteome changes following aflibercept intervention in CRVO remain largely unstudied. Studying proteomic changes of aflibercept intervention may generate a better understanding of mechanisms of action and uncover aspects related to the safety profile. In 10 Danish Landrace pigs, CRVO was induced in both eyes with an argon laser. Right eyes were treated with intravitreal aflibercept while left control eyes received isotonic saline water. Retinal samples were collected 15 days after induced CRVO. Proteomic analysis by tandem mass tag-based mass spectrometry identified a total of 21 proteins that were changed in content following aflibercept intervention. In retinas treated with aflibercept, high levels of aflibercept components were reached, including the VEGF receptor-1 and VEGF receptor-2 domains. Fold changes in the additional proteins ranged between 0.70 and 1.19. Aflibercept intervention resulted in a downregulation of pigment epithelium-derived factor (PEDF) (fold change = 0.84) and endoplasmin (fold change = 0.91). The changes were slight and could thereby not be confirmed with less precise immunohistochemistry and Western blotting. Our data suggest that aflibercept had a narrow mechanism of action in the CRVO model. This may be an important observation in cases when macular edema secondary to CRVO is resistant to aflibercept intervention. Introduction Central retinal vein occlusion (CRVO) is a visually disabling condition caused by a thrombus of the central retinal vein, which is the major outflow vessel of the eye [1,2]. Occlusion of the central retinal vein results in increased resistance to blood flow in retinal arterioles. The reduced blood flow causes closure of retinal capillaries and small arterioles, resulting in retinal hypoxia, which drives an increased production of vascular endothelial growth factor A (VEGF-A) and a complex inflammatory response mediated by interleukin (IL)-6, IL-8 and monocyte chemotactic protein-1 [3,4]. VEGF-A and the inflammatory response increase vascular permeability resulting in macular edema, which is the most common cause of vision loss in CRVO [4]. Macular edema secondary to CRVO is effectively treated with intravitreal injections of anti-VEGF agents including bevacizumab, ranibizumab and aflibercept, which reduce retinal vascular permeability and cause absorption of the macular edema [5]. Aflibercept is a frequently used inhibitor of VEGF and has a well-documented efficacy in the treatment of macular edema secondary to CRVO [6,7]. It consists of a constant Fc domain of human immunoglobulin G1 fused with the second immunoglobulin domain of VEGF receptor-1 (VEGFR-1) and the third immunoglobulin domain of VEGF receptor-2 (VEGFR-2) [8,9]. Although aflibercept treatment has become the standard of care, retinal large-scale protein changes following aflibercept intervention in CRVO remain largely unstudied [10,11]. The overall objective of proteomic studies is to identify and quantify the entire set of proteins in a given cell, tissue or biofluid to provide insights into biological processes in the disease or intervention under study [11][12][13]. 
Studying retinal proteome changes following aflibercept intervention in CRVO may bring important therapeutic insights into the mechanism of action of aflibercept. Furthermore, studying retinal proteome changes with proteomic techniques may bring insights into the safety profile of aflibercept. Elucidating the retinal proteome in CRVO following aflibercept intervention may also provide a potential for discoveries that can lead to the improvement of existing therapies [10,11]. Retinal tissue exposed to CRVO is generally only available from animal models. In the study presented, aflibercept was tested in a well-established porcine model of laserinduced CRVO [14], which is suited for expressional studies due to its non-invasive nature. Advanced proteomic techniques were used to study large-scale retinal protein changes following aflibercept intervention in the CRVO model. Evaluation of Experimental CRVO Model In porcine eyes with CRVO, flame-shaped hemorrhages and venous dilation were observed upstream of the site of occlusion within 30 min after inducing CRVO ( Figure 1A-C). Fluorescein angiography was performed three days after CRVO to confirm that CRVO was successfully induced. Fluorescein angiography of eyes with CRVO showed delayed filling of retinal branch veins, retinal capillary non-perfusion and leakage around retinal veins ( (Tables S1 and S2). We first tested the reproducibility of proteome changes in the CRVO model in seven animals (five animals were used for mass spectrometry and Western blotting, and two animals were used for immunohistochemistry). With tandem mass tag (TMT)-based mass spectrometry and Western blotting we compared CRVO (n = 5) induced right eyes with the left control eyes (n = 5), which received laser without inducing occlusion. Overall, retinal proteome changes in the CRVO model (Tables S3 and S4) were consistent with previous findings in the model [14], including an upregulation of fibronectin (fold change = 13.50, p = 0.0016) and galectin-3 (fold change = 5.31; p = 0.0016) as well as a downregulation of neurofilament light polypeptide (p = 0.036; fold change = 0.31). Samples from one animal were excluded from the dataset as the samples were not successfully labeled with the TMT kit. Western blotting confirmed the increased content of galectin-3 in CRVO (n = 5) vs. control (n = 5) ( Figure 3A). The regulation of fibronectin, galectin-3 and neurofilament light polypeptide was confirmed by immunohistochemistry comparing the CRVO (n = 2) vs. control (n = 2) ( Figure 3B-G). Additional immunohistochemistry is available in the supplementary material ( Figure S1). Figure 3. Reproducibility of the CRVO model at the molecular level. The reproducibility of the CRVO model was tested by confirming a number of key proteins that were previously found to be regulated in the model, including galectin-3, fibronectin and neurofilament light polypeptide. Laser-induced CRVO was compared to control eyes that received laser without inducing occlusion. Representative immunohistochemistry is provided. Additional immunochemistry from an additional animal is provided in the supplementary material ( Figure S1 Retinal Proteome Changes Following Aflibercept Intervention in Experimental CRVO Retinal proteome changes following aflibercept intervention were studied in eight Danish Landrace pigs. CRVO was induced in both eyes. Right eyes were treated with aflibercept (n = 8) while left control eyes (n = 8) were treated with saline water (NaCl). 
A total of 3559 proteins were successfully assigned and quantified from the retinal samples (Table S5). A total of 21 proteins were significantly changed in content following aflibercept intervention in the CRVO model (Table 1). High contents of aflibercept VEGF receptor domains and the aflibercept fusion protein were observed in retinas treated with aflibercept, including the VEGFR-1 immunoglobulin domain (fold change = 46.4; p = 2.45 × 10 −6 ), the VEGFR-2 immunoglobulin domain (fold change = 5.07; p = 1.95 × 10 −8 ) and the fusion protein Ig gamma-1 chain C region (fold change 18.90; p = 3.27 × 10 −11 ). Changes in all other proteins were small with fold changes ranging between 0.70 and 1.19 ( Figure 4) ( Table 1). Aflibercept intervention in the CRVO model resulted in an upregulation of A-kinase anchor protein 8 (AKAP8) and a downregulation of endoplasmin (fold change = 0.91; p = 0.041), and pigment epitheliumderived factor (PEDF) (fold change = 0.84; p = 0.020). However, the slight changes in endoplasmin and PEDF detected by mass spectrometry were too small to be confirmed with the less precise techniques of immunohistochemistry ( Figure 5) and Western blotting ( Figure 6). Immunohistochemistry showed by eye a similar staining pattern of endoplasmin and PEDF regardless of aflibercept intervention ( Figure 5). Western blotting showed a slight downregulation of PEDF and endoplasmin following aflibercept intervention ( Figure 6A-D). However, the differences were not statistically significant as the standard deviations of Western blotting data were higher than observed with mass spectrometry data ( Figure 6E-G). When proteins were listed according to abundance regardless of the p-value, the largest changes were observed for aflibercept domains ( Table 2). The protein with the most pronounced downregulation was alpha-crystallin A chain (CRYAA), which was close to being significantly regulated (p = 0.07) ( Table 2). . Volcano plot. Log 2 of the ratio aflibercept/NaCl is plotted on the x-axis. On the y-axis, −log p-value refers to the logarithmized p-value from the t-test used to test if a protein was significantly changed. Statistically significantly changed proteins are located above the horizontal line, which denotes a significance level of 0.05. Components of aflibercept are not included in the volcano plot. PEDF: pigment epithelium-derived factor. DNAJ7C: DnaJ homolog subfamily C member 7. AKAP8: A-kinase anchor protein 8. XAB2: Pre-mRNA-splicing factor SYF1. MRPS7: 28S ribosomal protein S7, mitochondrial. RPS18: 40S ribosomal protein S18. RER1: Protein RER1. The changes observed by mass spectrometry in endoplasmin and PEDF following aflibercept intervention were too slight to be confirmed by Western blotting. (E-G) Horizontal lines of the plots denote the means of the quantitative data. Standard deviations were larger in data obtained with Western blotting compared with quantitative data obtained through proteomic analysis. Standard deviations of endoplasmin quantification with mass spectrometry and Western blotting were 0.14 and 0.43, respectively. Standard deviations of PEDF quantification with mass spectrometry and Western blotting were 0.23 and 0.76, respectively. Evaluation of Experimental CRVO Model Angiography confirmed that the model had angiographic similarities with CRVO in humans with the occlusion emerging from the optic nerve head, generating retinal capillary non-perfusion in all quadrants of the retina. Recanalization of CRVO was not observed in any of the animals. 
Proteome changes in the CRVO model were similar to a previous study of the model [14] confirming a high reproducibility at the molecular level in the model. Aflibercept Intervention in CRVO Results from the proteomic analysis indicated that aflibercept did not regulate multiple signaling pathways in the CRVO model. This is an important observation in terms of the safety profile of aflibercept. Thus, our data suggest that aflibercept did not regulate pathways, which could have negative side effects on the retina. Very high levels of aflibercept components were reached after 15 days of treatment, indicating high retinal concentrations of the compound in the CRVO model. Observed protein changes following aflibercept intervention were very small. Two proteins were selected for further validation, endoplasmin and PEDF, but their regulation was not confirmed with immunohistochemistry and Western blotting. A downregulation of CRYAA was observed following aflibercept intervention, but the change was not statistically significant (p = 0.07). Knock-out of CRYAA has been reported to inhibit ocular neovascularization in a murine model of oxygen-induced retinopathy [15], but more studies will be needed to establish a relation between CRYAA regulation and aflibercept treatment. Our data indicate that aflibercept had a narrow mechanism of action in CRVO. In a clinical setting, this may an important observation in cases when macular edema secondary to CRVO is resistant to aflibercept intervention. In cases when VEGF is not a major driving force in macular edema secondary to CRVO, aflibercept may have a limited effect due to a narrow mechanism of action. Our study has important implications for patients receiving aflibercept treatment. Our data suggest that aflibercept does not regulate a multitude of proteins or pathways that may be unwanted or result in side effects. Proteome analysis of the retina implies a number of limitations that may affect the outcome of the proteomic analysis [12]. We have previously shown that proteome changes in the retina often occur in specific retinal layers or cell types [16]. As a consequence, observed proteome changes may be moderate when the entire retina is collected for proteomic analysis instead of isolating specific cells or cell layers. Due to the multi-layered structure of the retina, proteome studies of retinal tissue are best supported by immunohistochemistry. More than 3000 retinal proteins were successfully assigned. However, the multilayered structure of the retina adds further complexity to the proteomic analysis in terms of detection of low abundance proteins, as protein abundances stretch over multiple orders of magnitude [12]. Animal Preparation The study was approved by the Danish Animal Experiments Inspectorate, permission no. 2019-15-0201-01651. Danish Landrace pigs were housed under a 12 h light/dark cycle, and general anesthesia, topical anesthesia with eye drops and dilation of the pupils were performed as previously described [17]. Experimental CRVO The study used an experimental model of CRVO as described previously [14]. We first verified the reproducibility of the CRVO model at the molecular level in seven Danish Landrace pigs (five animals were used for mass spectrometry and Western blotting, while two animals were used for immunohistochemistry). In these animals, CRVO was induced in the right eyes, while left control eyes received laser without inducing CRVO. 
In the right eyes, CRVO was induced close to the optic nerve head with a standard argon laser (532 nm) given by indirect ophthalmoscopy using a 20D lens. The laser energy was set to 400 mW with an exposure time of 550 ms. A total of 30-40 laser applications were used per occlusion. By applying the laser directly on retinal veins close to the optic nerve head, thrombotic material was directed towards the optic nerve head and the lamina cribrosa. Experimental CRVO was considered successful when stagnation of venous blood and development of flame-shaped hemorrhages were observed by ophthalmoscopy. In the left control eyes, a laser control without occlusion was created by giving the same amount of laser applications and energy level at the edge of the optic nerve head, but without inducing occlusion. CRVO was confirmed with fluorescein angiography, and the eyes were dissected 15 days after induced CRVO and saved for mass spectrometry and immunohistochemistry. Fifteen days after induced CRVO, the eyes were enucleated. The eyes were dissected on ice under a microscope. The anterior segment was removed. The vitreous body was aspired into a 5 mL syringe. In eyes intended for proteomic analysis, the neurosensory retina was peeled from the RPE/choroid complex with tweezers and stored at −80 • C. In eyes intended for immunohistochemistry, complexes consisting of neurosensory retina, RPE/choroid complex and sclera were excised for immunohistochemistry. The animals were euthanized immediately after enucleation. To test aflibercept intervention in CRVO, another 10 Danish Landrace pigs were used. In these animals CRVO was induced in both eyes as described above. An intravitreal injection of 0.05 mL aflibercept 40 mg/mL (Bayer, Leverkusen, Germany) was given in the right eyes, while left eyes received an injection of 0.05 mL sodium chloride 9 mg/mL (NaCl) (B. Braun, Frederiksberg, Denmark). Following the injections, chloramphenicol ointment 1% (Takeda Pharma A/S, Taastrup, Denmark) was applied in both eyes. Fluorescein angiography was performed three days after CRVO to confirm that CRVO was induced successfully. The eyes were dissected 15 days after CRVO as described above. Sample Preparation for Mass Spectrometry The reproducibility of proteome changes in the CRVO model was verified by comparing CRVO (n = 5) vs. laser control (n = 5) with tandem mass tag (TMT)-based mass spectrometry in a separate analytical run. Eyes from eight animals were used to compare the protein profile of CRVO + aflibercept (n = 8) vs. CRVO + NaCl (n = 8) with proteomic analysis by tandem mass tag (TMT)-based mass spectrometry. Isobaric labeling was performed with a 10 plex TMT kit from Thermo Scientific (Waltham, MA, USA). Sample preparation for TMT-based mass spectrometry was performed as previously described [14,16] with some modifications. For the experiment consisting of 16 samples, a standard was prepared by mixing equal amounts from each sample. Two groups of 10 samples, 8 experimental samples together with 2 standards, were labelled with the 10 plex kit. The standards were used for normalization of data. TMT labeling and high pH reversed phase peptide fractionation were performed as described in a previous article [18]. Then, 1 µg of fractions 2-8 was analyzed. 
Quantification with Tandem Mass Tag-Based Mass Spectrometry One microgram of each fraction was loaded for each run onto a Dionex UltiMateTM 3000 RSLC nanosystem coupled to an Orbitrap Fusion mass spectrometer (Thermo Scientific, Waltham, MA, USA) equipped with an EasySpray TM ion. Liquid chromatography and mass spectrometry with TMT synchronous precursor selection MS 3 mode was performed as follows. The labeled samples were loaded onto the trapping column (5 mm × 300 µm, C18 PepMap100, 5 µm, 100 Å, Thermo Scientific, Waltham, MA, USA) with the flow setting of 30 µL per min. The nanoflow was 300 nL per min for the separation of peptides on the analytical column (500 mm × 75 µm PepMap RSLC, C18, 2 µm, 100 Å, Thermo Scientific, Waltham, MA, USA). The applied buffers were buffer A (99.9% water and 0.1% formic acid) and buffer B (99.9% acetonitrile and 0.1% formic acid). The gradient was performed over 213 min with a gradient of buffer B ranging from 2% to 80%. The mass spectrometer was operated in the TMT SPS MS 3 mode with full Orbitrap scans in the mass range of 350-1500 m/z obtained at a resolution of 120,000 with an AGC target of 2 × 10 −5 and a maximum injection time of 50 ms. The mass spectrometer was set to trigger MS 2 acquisitions in each cycle using the linear ion trap with a CID collision energy at 35% and an AGC target of 2 × 10 4 with a maximal injection time of 75 ms. Precursor ions, in the mass range of 400-1200 m/z, were isolated in the quadrupole set with an isolation window of 1.2 m/z. Up to five reporter ions were detected in MS 3 with synchronous precursor selection performed in the Orbitrap in the mass range of 100-500 m/z with the HCD collision energy set to 65%, obtained at a resolution of 50,000 and an AGC target of 3 × 10 4 and a maximum injection time of 110 ms. A dynamic exclusion of 6 s was applied. With MaxQuant software version 1.6.6.0, accessed on 18 September 2019 and on 28 January 2021 (Max Planck Institute of Biochemistry, Martinsried, Germany; https:// maxquant.net/maxquant/), raw data files were searched against the Uniprot Sus scrofa and Homo sapiens databases using match between runs and with settings described in a previous work [16]. The data output from MaxQuant is available in the supplementary material (Tables S1 and S2). Filtration of Proteins and Statistics The reproducibility of the CRVO model was assessed through statistical analysis in Perseus version 1.6.6.0 (Max Planck Institute of Biochemistry, Martinsried, Germany; https: //maxquant.net/perseus/ accessed on 18 September 2019) as previously described [14], with the only exceptions that the number of randomizations was set to 250, the S 0 parameter was set to 0.1 and a false discovery rate of 0.05 was applied. To compare CRVO + aflibercept vs. CRVO + NaCl in Perseus software version 1.6.14.0 (Max Planck Institute of Biochemistry, Martinsried, Germany; https://maxquant.net/ perseus/ accessed on 18 September 2019), poorly assigned proteins were removed in Perseus as described in a previous article [14]. Proteins were required to be successfully assigned and quantified in 100% of the samples in each group. Quantitative values were log 2 transformed, and technical replicates were averaged. At least two unique peptides were required for successful identification. A Student's t-test was performed in Perseus to compare CRVO + aflibercept vs. CRVO + NaCl. Proteins were considered significantly changed in content if p < 0.05. 
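As a concrete illustration of the filtering and testing steps just described (log2 transformation, quantification required in 100% of samples per group, at least two unique peptides, and a per-protein Student's t-test with p < 0.05), a simplified pandas/SciPy sketch is given below, together with the volcano-plot representation used for the results. The column names and plotting details are hypothetical, and the sketch omits Perseus-specific features such as the S0 parameter and permutation-based FDR control.

import numpy as np
import pandas as pd
from scipy import stats
import matplotlib.pyplot as plt

# Hypothetical protein-level table: one normalized reporter-intensity column per sample
# ("afl_1".."afl_8" for CRVO + aflibercept, "nacl_1".."nacl_8" for CRVO + NaCl), plus the
# number of unique peptides reported by the search engine and a "protein" identifier column.
afl_cols  = [f"afl_{i}"  for i in range(1, 9)]
nacl_cols = [f"nacl_{i}" for i in range(1, 9)]

def differential_analysis(table: pd.DataFrame) -> pd.DataFrame:
    # Require >= 2 unique peptides and a quantitative value in every sample of both groups.
    table = table[table["unique_peptides"] >= 2]
    table = table.dropna(subset=afl_cols + nacl_cols)

    # Log2-transform intensities, then run a two-sample Student's t-test per protein.
    log_afl, log_nacl = np.log2(table[afl_cols]), np.log2(table[nacl_cols])
    _, p = stats.ttest_ind(log_afl, log_nacl, axis=1)

    out = pd.DataFrame({
        "protein": table["protein"].values,
        "log2_ratio": (log_afl.mean(axis=1) - log_nacl.mean(axis=1)).values,
        "p_value": p,
    })
    out["significant"] = out["p_value"] < 0.05
    return out.sort_values("p_value")

def volcano_plot(result: pd.DataFrame, path="volcano.png"):
    # x: log2(aflibercept / NaCl); y: -log10 p-value; dashed line: p = 0.05 threshold.
    fig, ax = plt.subplots(figsize=(5, 4))
    ax.scatter(result["log2_ratio"], -np.log10(result["p_value"]), s=15)
    ax.axhline(-np.log10(0.05), linestyle="--", linewidth=1)
    ax.set_xlabel("log2 (aflibercept / NaCl)")
    ax.set_ylabel("-log10 p-value")
    fig.tight_layout()
    fig.savefig(path, dpi=300)

The outputs of such an analysis are what populate Table 1 and the volcano-plot figure described in the Results.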
Immunohistochemistry Eyes from two animals were used to verify the reproducibility of the CRVO model at the molecular level comparing CRVO (n = 2) vs. laser control (n = 2). Eyes from two animals were used to compare CRVO + aflibercept (n = 2) vs. CRVO + NaCl (n = 2). Complexes consisting of retina, choroid and sclera were fixated in formalin for 24 h. The formalin solution was removed. The tissue was then stored in a PBS solution at 4 • C until further use. Then, 4 µm thick sections were cut from NBF-fixed paraffin-embedded tissue blocks. Sections were mounted on FLEX IHC Slides (Dako; Glostrup, Denmark), dried at 60 • C, dewaxed and rehydrated through a graded ethanol series, and they were subsequently washed in 0.05 M Tris-buffered saline (TBS; Fagron Nordic A/S; Copenhagen, Denmark). Endogenous biotin reactivity was blocked with 1.5% hydrogen peroxide. Optimal epitope retrieval was performed using microwave heating for 11 min at full power (900 W), followed by 15 Sections were then incubated for 60 min with antibodies diluted in TNT Antibody Diluent (Dako, Glostrup, Denmark A/S). Visualization of the antigen-antibody complex was carried out with the PowerVision+ (Leica, Copenhagen, Denmark A/S) detections system according to the manufacturer's manual. DAB was used as a chromogen (K3468, Dako, Glostrup, Denmark). Immunostaining was followed by brief nuclear counterstaining in Mayer's hematoxylin (Fagron Nordic A/S, Copenhagen, Denmark). Finally, slides were washed, dehydrated and coverslipped using a Tissue-Tek Film coverslipper (Sakura Finetek; Alphen aan den Rijn, The Netherlands). Histology slides were scanned at 40× magnification using a NanoZoomer-XR (Hamamatsu Photonics; Hamamatsu City, Japan), and image acquisition was obtained using NDP.view2 software (NanoZoomer Digital Pathology; Hamamatsu Photonics; Hamamatsu City, Japan). Western Blotting The reproducibility of the CRVO model was verified by comparing galectin-3 levels in CRVO (n = 5) vs. controls (n = 5) using a primary polyclonal rabbit anti-galectin-3 antibody 1:100 (MBS3211803, MyBiosource, San Diego, CA, USA). Western blotting was used to quantify galectin-3 using beta-tubulin as a housekeeping protein loading control as previously described [14]. A Student's t-test was performed on logarithmized densitometric data. Proteins were blotted from the gels to the membranes using the NuPage system with the XCell II™ Blot Module (Thermo Fisher, Waltham, MA, USA) and transfer buffer with 25 mM Tris-base, 195 mM Glycine, 10% SDS and 96% ethanol. After activation and protein transfer, the membranes were blocked for 1 h at room temperature in ROTIBloc A151.1 (Carl Roth, Karlsruhe, Germany) and then incubated with primary antibodies diluted in ROTIBlock overnight. The primary antibodies used were a rat 1 µg/mL monoclonal anti-endoplasmin antibody (MBS439463, MyBioSource, San Diego, CA, USA) and a rabbit polyclonal 2 µg/mL anti-pigment epithelium-derived factor (PEDF) antibody (MBS2027143, MyBioSource, San Diego, CA, USA). A monoclonal anti-GAPDH antibody (Sc-32233, Santa Cruz, Dallas, TX, USA) was also used. On the following day, the membranes were washed in TBST (20 mM Tris, 150 mM NaCl and 0.1% Tween 20 adjusted to pH 7.4) and incubated for 1 h with corresponding secondary antibodies diluted in TBST including 1:20.000 Goat-anti-Rabbit-HRP P0448 (Dako, Glostrup, Denmark), 1:10,000 Goat anti-Rat-HRP Invitrogen 31470 (Thermo Fisher, Waltham, MA, USA) and 1:20,000 Rabbit Anti Mouse-HRP DAKO P0260 (Dako, Glostrup, Denmark). 
Membranes were then washed in TBST and developed using a SuperSignal chemiluminescent substrate kit (Thermo Fisher, Waltham, MA, USA) according to the manufacturer's instructions, and chemiluminescence was captured on a Bio-Rad ChemiDoc imaging system (Bio-Rad, Copenhagen, Denmark). ImageJ 1.8.0_172 was used to quantify the intensity of the bands. Statistical analysis was performed with a Student's t-test on densitometric data. Power calculations related to Western blotting were performed with STATA 16.0 (StataCorp, College Station, TX, USA) using the power calculation for a t-test comparing two independent means.

Conclusions

Proteomic analysis identified a total of 21 proteins that changed in content following aflibercept intervention in experimental CRVO. High retinal levels of aflibercept components were observed 15 days after aflibercept intervention, indicating that high retinal aflibercept concentrations are reached by this time point in CRVO. Apart from the high levels of aflibercept components, the protein changes observed with aflibercept treatment were very small. The regulation of selected proteins was too slight to be confirmed with immunohistochemistry and Western blotting. Our data suggest that aflibercept had a narrow mechanism of action in the CRVO model. In a clinical setting, this may be an important observation in cases where macular edema secondary to CRVO is resistant to aflibercept intervention. From a safety perspective, it is an important finding that aflibercept treatment did not result in major regulation of multiple signaling pathways.

Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/molecules27113360/s1, Figure S1: Additional immunohistochemistry confirming the reproducibility of the CRVO model, Table S1: Data output from MaxQuant for validation of the CRVO model, Table S2: Data output from MaxQuant for the test of the aflibercept intervention, Table S3: All successfully assigned proteins from the validation of the CRVO model, Table S4: All significantly regulated proteins in the CRVO model, Table S5: All successfully assigned proteins in the proteome study of the aflibercept intervention.
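The densitometric statistics described in the Western blotting section above can be illustrated with a short Python sketch. This is not the authors' STATA/ImageJ workflow; the band intensities below are invented placeholder values, and only the procedure (log transform, Student's t-test, power calculation for two independent means) mirrors the text.

```python
# Hedged sketch of the densitometry statistics described above. The band
# intensities are made-up placeholder values for illustration only.
import numpy as np
from scipy import stats
from statsmodels.stats.power import TTestIndPower

crvo = np.array([1.8, 2.1, 1.6, 2.4, 1.9])      # hypothetical band intensities (a.u.)
control = np.array([1.1, 0.9, 1.3, 1.0, 1.2])

log_crvo, log_control = np.log(crvo), np.log(control)
t, p = stats.ttest_ind(log_crvo, log_control)
print(f"t = {t:.2f}, p = {p:.4f}")

# Power for detecting the observed standardized difference with n = 5 per group.
pooled_sd = np.sqrt((log_crvo.var(ddof=1) + log_control.var(ddof=1)) / 2)
effect_size = (log_crvo.mean() - log_control.mean()) / pooled_sd
power = TTestIndPower().solve_power(effect_size=effect_size, nobs1=len(crvo),
                                    alpha=0.05, ratio=1.0)
print(f"estimated power = {power:.2f}")
```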
Microbe-dependent heterosis in maize Hybrids account for nearly all commercially planted varieties of maize and many other crop plants, because crosses between inbred lines of these species produce F1 offspring that greatly outperform their parents. The mechanisms underlying this phenomenon, called heterosis or hybrid vigor, are not well understood despite over a century of intensive research (1). The leading hypotheses—which focus on quantitative genetic mechanisms (dominance, overdominance, and epistasis) and molecular mechanisms (gene dosage and transcriptional regulation)—have been able to explain some but not all of the observed patterns of heterosis (2, 3). However, possible ecological drivers of heterosis have largely been ignored. Here we show that heterosis of root biomass and germination in maize is strongly dependent on the belowground microbial environment. We found that, in some cases, inbred lines perform as well by these criteria as their F1 offspring under sterile conditions, but that heterosis can be restored by inoculation with a simple community of seven bacterial strains. We observed the same pattern for seedlings inoculated with autoclaved vs. live soil slurries in a growth chamber, and for plants grown in fumigated vs. untreated soil in the field. Together, our results demonstrate a novel, ecological mechanism for heterosis whereby soil microbes generally impair the germination and early growth of inbred but not hybrid maize. MAIN In nature, all plants form close associations with diverse microbial symbionts that comprise a subset of the microbial species with which they share a habitat (4,5). As part of their host's environment, the host-associated microbial community (microbiome) can cause plasticity of important plant traits such as reproductive phenology, disease resistance, and general vigor (6)(7)(8)(9). However, genetic variation within plant species affects not only microbiome assembly, but also the phenotypic response to microbes. Disentangling these relationships is a critical step toward understanding how plant-microbiome interactions evolved and how they can be harnessed for use in sustainable agriculture (10). Here, we describe our observation that perturbation of the soil microbial community disrupts heterosis , the strong and pervasive phenotypic superiority of hybrid maize genotypes relative to their inbred parent lines. To our knowledge this is the first report of microbial involvement in plant heterosis, a phenomenon of immense economic value and research interest. In a previous field experiment, we observed that maize hybrids generally assemble rhizosphere microbiomes that are distinct from those of inbred lines. In addition, many microbiome features in F 1 hybrids are not intermediate to those of their parent lines, suggesting that heterosis of plant traits is associated with heterosis of microbiome composition itself (11). To determine whether the same patterns manifest in a highly controlled environment where all microbial members are known, we developed gnotobiotic growth bags for growing individual maize plants in sterile conditions (see Methods). We planted surface-sterilized kernels of two inbred lines (B73 and Mo17) and their F 1 hybrid (B73xMo17) in individual gnotobiotic growth bags containing autoclaved calcined clay hydrated with sterile 0.5x MS salt solution. 
The clay in each growth bag was inoculated with either a highly simplified synthetic community of seven bacterial strains known to colonize maize roots (12,13) (~10 7 CFU/mL) or a sterile buffer control. This system effectively eliminated contact between plants and external microbes ( Supplementary Fig. 1b); however, it is possible that some kernels may have contained viable endophytes that could not be removed by surface-sterilization ( Supplementary Fig. 1c). Genotypes and treatments were placed in randomized locations in a growth chamber under standard conditions (12-hr days, 27℃/23℃, ambient humidity). After four weeks, we harvested plants to investigate root colonization by these seven strains. Unexpectedly, we observed that the inbred and hybrid plants were indistinguishable with respect to root and shoot fresh weight when grown in uninoculated growth bags, yet showed the expected heterotic pattern when grown with the synthetic bacterial community ( Fig. 1a; Supplementary Table 1). This was due to a negative effect of the bacteria on both inbred genotypes rather than a positive effect on the hybrid. Although the synthetic community contained no known pathogens (13), it decreased the root weight of B73 and Mo17 seedlings by 48.4% [s.e.m. = 13.6%] and 60.8% [s.e.m. = 21.5%], respectively ( Fig. 1). In contrast, the synthetic community reduced root weight of hybrids by only 19.2% [s.e.m. = 13.6%]. As a result, the strength of midparent heterosis was reduced from 100% in nonsterile conditions to 14% in sterile conditions (permutation test P = 0.002); a similar pattern was observed for shoot weight ( P = 0.056; Fig. 1c-d). A separate experiment revealed that the synthetic community also lowered the germination rates for both inbred lines but not the hybrid ( Supplementary Fig. 2). Germination of B73 after 4 days was 10.7% lower [s.e.m. = 4.3%] in the presence of the synthetic community relative to the sterile control; for Mo17, the synthetic community decreased germination rates by 32% [s.e.m. = 5.8%] ( Supplementary Fig. 2). To determine whether natural, complex soil microbial communities also induce heterosis, we conducted a second growth chamber experiment with surface-sterilized kernels of the same three genotypes and a slightly modified protocol for gnotobiotic growth. We saturated the calcined clay medium in each growth bag with one of three treatments: a slurry derived from filtered farm soil, an autoclaved aliquot of the same slurry, or a sterile buffer control. Genotypes and treatments were arranged into randomized, replicated blocks in a growth chamber. We recorded the germination success or failure of each kernel and observed that the live soil slurry had a strong negative effect on germination of both inbred lines but not the hybrid (Fig. 2a). In the two sterile treatments, B73 and B73xMo17 germinated equally well. Mo17 still performed worse than B73xMo17, but the hybrid advantage was much less pronounced than it was in the live treatment. After one month we harvested all plants and measured fresh weights of roots and shoots. In growth bags that received the autoclaved slurry or sterile buffer treatments, all three genotypes produced root systems of equal biomass; in contrast, the hybrid's root biomass was 18.3% higher than the midparent average when grown with the live soil slurry, consistent with the expected pattern of heterosis (Fig. 2b-c; Supplementary Table 2). 
Very poor germination of Mo17 prevented statistical comparison of its biomass to the hybrid in the live slurry treatment. Shoot biomass displayed the expected heterotic patterns with the hybrid out-performing the parental inbred lines under all conditions ( Supplementary Fig. 3). Microbe-dependent heterosis in the field Next, we conducted a field experiment to assess whether this phenomenon, which we termed "microbiota-dependent heterosis" or MDH, occurs in real soil under farm conditions. We planted surface-sterilized kernels of the same three genotypes into adjacent rows with four soil pre-treatments to perturb soil microbial community composition: (1) steamed, (2) fumigated with the mustard oil allyl isothiocyanate (AITC), (3) steamed and fumigated with AITC, (4) fumigated with chloropicrin, and (5) untreated control ( Supplementary Fig. 4). All four treatments reduced the density of Pythium spp., a common phytopathogenic oomycete, relative to the untreated control (Supplementary Table 3); however, 2 weeks after treatment, counts of viable culturable bacteria were temporarily reduced only in the AITC + steam treatment, and only in shallow soil ( Supplementary Fig. 5). Additionally, amplicon sequencing of the V4 region of the 16S rRNA gene and the fungal ITS1 confirmed that these treatments shifted the composition of the bacterial and fungal soil microbiomes relative to the control ( Supplementary Fig. 6). The effects of the fumigation treatments persisted in the bulk soil for at least six weeks, and were also detected in the root microbiomes of juvenile plants at the end of the experiment ( Supplementary Fig. 7). We monitored seedling emergence and measured leaf number and plant height at 15 days after planting (d.a.p.) and again at 27 d.a.p. After this final in-field measurement, we uprooted all plants in the control, chloropicrin, and AITC + steam treatments and measured their root and shoot biomass. Perturbation of the soil microbial community using chloropicrin or AITC + steam weakened heterosis of root biomass ( Fig. 3; Supplementary Table 4). Additionally, all fumigation and steaming treatments decreased the strength of midparent heterosis for both height and leaf number ( Supplementary Fig. 8). In contrast, heterosis of shoot dry weight was not affected. Rates of germination success did not differ consistently among treatments; although chloropicrin accelerated the germination of Mo17, the final germination proportions were similar among treatments ( Supplementary Fig. 9). We note that AITC may influence plant development directly (14); however, the responses of each genotype to treatments involving AITC were generally congruent with responses to the non-AITC treatments. Discussion Our results suggest that interactions with soil-borne microbes are important for the expression of heterosis in maize. We observed microbe-dependent heterosis (MDH) in three independent experiments representing very different environmental contexts: in tightly controlled lab conditions with an inoculum of only seven bacterial strains (Fig. 1); in a growth chamber with a more complex microbial slurry derived from farm soil (Fig. 2); and in the field with or without soil fumigation ( Fig. 3; Supplementary Fig. 8). This repeatability suggests that the mechanism could be quite general with respect to the causal microbes, although much more work would be needed to test the full range of natural soil microbiome diversity and the full range of plant genotypes. 
In all of the cases presented above, MDH was driven not by beneficial microbes selectively boosting the performance of hybrids, but by soil-borne microbes selectively reducing the performance of inbred lines. This pattern is consistent with two possible, non-mutually-exclusive explanations. First, it may indicate that many or most soil microbes are weakly pathogenic to maize, and that hybrids are more resistant to them than are inbreds (the "Inbred Immunodeficiency hypothesis"). Second, it may reflect a costly defensive overreaction by inbreds, but not hybrids, to innocuous soil microbes (the "Inbred Immune Overreaction hypothesis"). Multiple previous studies have described how plants that are immunocompromised through either genetic or chemical means can suffer infections that are not apparent in their immunocompetent neighbors. For example, maize mutants deficient in the defense hormone jasmonic acid were unable to grow to maturity in non-sterile soil in the field or greenhouse (15). Similarly, Arabidopsis mutant lines lacking three defense hormone signaling systems displayed reduced survival in wild soil (16). Application of glyphosate to bean plants temporarily arrested their growth in sterile soils; in non-sterile soils, however, the plants died quickly due to root infection by Pythium and Fusarium species (17). Because glyphosate inhibits the biosynthesis of phenylalanine and chorismate (precursors of several important components of the defense response, including lignin, salicylic acid and phytoalexins), the study authors suggested that glyphosate predisposes the treated plants to infection by opportunistic pathogens to which they would otherwise be resistant (18). If weak pathogens drive MDH, then this implies that superior disease resistance in hybrids is a key mechanism of heterosis. Somewhat surprisingly, the effect of heterosis on plant disease resistance has not been well characterized. In maize, heterosis has been observed for resistance to anthracnose leaf blight and southern leaf blight but not to anthracnose stalk rot (19,20). Heterosis for late blight resistance has also been noted in potato (21). In contrast, the Inbred Immune Overreaction hypothesis does not require soil microbes to be pathogenic, but instead links MDH to the well-documented tradeoff between growth and genetic disease resistance (22). For instance, innocuous soil microbes could trigger a costly defensive response in inbreds but not in hybrid maize. The most detailed work on heterosis of disease resistance supports this hypothesis: in the model species Arabidopsis, hybrids displaying heterosis for growth and yield also displayed a decreased level of basal defense gene expression and decreased concentrations of the defense signaling hormone salicylic acid (23)(24)(25)(26). However, despite their lower investment in constitutive defenses, the hybrids were not compromised in resistance to the biotrophic pathogen Pseudomonas syringae, nor in the inducible response to infection (26,27). Altogether, our results shed new and unexpected light on the causes of heterosis, which have remained elusive despite over a century of investigation. They demonstrate the importance of ecological context for mapping genotype to phenotype, and generate new, testable hypotheses about the mechanisms of this widespread and critically important phenomenon. 
Many questions remain, and future work will require careful experimentation to delve into the molecular and physiological mechanisms of MDH and to assess the evidence for or against the Inbred Immunodeficiency hypothesis and the Inbred Immune Overreaction hypothesis. These new avenues of research have high potential to advance our understanding of heterosis in maize and many other crops, and to lead to new innovations for agricultural sustainability and productivity. Experiment 1 (December 2018). In a laminar flow hood, we placed kernels of each genotype into a sterile 7.5" x 15" Whirl-Pak self-standing bag (Nasco, Fort Atkinson, WI, USA) filled with 200 mL of autoclaved calcined clay ("Pro's Choice Rapid Dry"; Oil-Dri Corporation, Chicago, IL). Immediately prior to planting, seeds were surface-sterilized using a 3-minute soak in 70% ethanol (v/v) followed by a 3-minute soak in 5% bleach (v/v) and three rinses with sterile deionized water; we plated extra seeds on malt extract agar (MEA) to confirm that this protocol was effective ( Supplementary Fig. 1c). To each growth bag, we added 120 mL of either sterile 0.5x Murashige-Skoog basal salt solution (pH 6.0), or the same solution containing 10 7 cells/mL of a synthetic community (SynCom) of seven bacterial strains known to colonize maize roots (12). We planted 28 kernels of each inbred line and 14 of the hybrid, divided evenly between the SynCom and control treatments, 4.5 cm deep using sterile forceps. The growth bags were sealed with sterile AeraSeal breathable film (Excel Scientific, Inc., Victorville, CA, USA) to allow gas exchange and then placed in randomized positions in a growth chamber (Percival Scientific Inc., Perry, IA). No additional liquid was added after the growth bags were sealed. After one month of growth (12-hr days, 27℃/23℃, ambient humidity), we opened the growth bags, uprooted the plants, rinsed off adhering clay, and patted them dry before measuring fresh weight of shoots and roots. We applied two-way ANOVA to linear models of biomass with Genotype, Treatment, and their interaction as predictor variables. F -tests with Type III sums of squares were used for significance testing, and pairwise contrasts were performed using Tukey's post-hoc procedure. SynCom effects on germination . To test whether the SynCom affected germination, we conducted a 3x2x3 full factorial experiment manipulating plant genotype (B73, Mo17, and their F 1 hybrid), microbial inoculant (SynCom vs. sterile control), and nutrient content (water, Hoagland's solution, or MS). Five surface-sterilized kernels were placed onto filter paper in five petri dishes per genotype-inoculum-nutrient combination ( N = 90 petri dishes) and inoculated with 2 mL of the SynCom (diluted to 10 6 cells mL -1 in nutrient solution) or a sterile nutrient solution control. Petri dishes were incubated in the dark at 30℃ and germination rate was recorded for each dish after 4 days. We used the Kruskal-Wallis test for main effects of genotype, inoculum, and nutrient treatment and for an interaction between genotype and inoculum. Wilcoxon rank sum tests were used for pairwise contrasts; P -values were adjusted for multiple comparisons using the Benjamini-Hochberg false discovery rate (33). Experiment 2 (January 2019) . To determine whether natural soil microbial communities produced the same effect as the SynCom, we used the same gnotobiotic growth bags as in Experiment 1 to compare plant growth in (1) a live soil slurry, (2) an autoclaved soil slurry, and (3) sterile buffer. 
We collected soil in November 2018 from field G4C at the Central Crops Research Station (Clayton, NC, USA) and stored it at 4℃ until use. We mixed 200 g of this soil into 1 L of phosphate-buffered saline (PBS) with 0.0001% Triton X-100 using a sterile spatula. The suspension was allowed to settle, filtered through Miracloth (22-25 µm pore size; Calbiochem, San Diego, CA, USA), and centrifuged for 30 minutes at 3,000 x g . The resulting pellet was resuspended in 200 mL of sterile PBS and immediately divided into two aliquots of 100 mL each. One aliquot was autoclaved for 30 minutes at 121℃ to produce a "killed" slurry concentrate. Live and killed soil slurry concentrates were diluted (10 mL slurry per L of 0.5x MS) to produce the final slurry treatments. An additional control consisted of diluted sterile PBS (10 mL PBS per L of 0.5x MS). Kernels were surface-sterilized as described above, planted in 150 mL sterile calcined clay, and hydrated with 90 mL of one of these three treatments in the gnotobiotic growth bags described above ( N = 20 per treatment for B73 and Mo17; N = 15 per treatment for B73xMo17). Prior to planting, the kernels were weighed and distributed evenly to ensure that no systematic differences in seed size among the treatments. Bags were arranged into randomized, replicated blocks in a growth chamber in the Duke University Phytotron (12-hr days, 27℃/23℃, ambient humidity) and uprooted after one month of growth for measurement of shoot fresh weight and root fresh weight. Fisher's Exact Test was used to compare germination proportions between genotypes within each treatment. We applied two-way ANOVA to linear mixed-effects models of biomass with Genotype, Treatment, and their interaction as fixed predictor variables, and Block as a random-intercept term. F -tests with Type III sums of squares were used for significance testing of fixed effects, and pairwise contrasts were performed using Tukey's post-hoc procedure. Likelihood ratio tests were used for significance testing of random effects. Experiment 3 (September-November 2019) . To determine whether MDH could be observed under field conditions, we conducted an on-farm soil sterilization experiment at the Central Crops Research Station. Total bed width was 152 cm furrow to furrow, and beds were 20 cm high with a 76 cm width at the top. Five treatments were established in a complete block design: steam-only (1 hr, 5 bar); allyl isothiocyanate (AITC; 280 L/ha); AITC (280 L/ha) followed by steam (1 hr, 5 bar); non-treated control; and chloropicrin (320 L/ha Pic-Clor 60). AITC and chloropicrin were applied September 11th, 2019 through shank application in raised beds. After fumigation, raised beds were covered with black Totally Impermeable Film (TIF) plastic. Steam was applied Sept. 27th using a SIOUX SF-25 Natural Gas Steam Generator (SIOUX Inc., Beresford, SD). The steam generator has a net heat input of 1.01e 6 BTU/hr and an average steam output of 383 kg/hr. The steam generator was mounted on a flatbed trailer and connected to natural gas tanks, a 1,300 L water tank and a natural gas electrical generator ( Supplementary Fig. 4f). Steam was applied consistently for 1 hr at 5 bar, injecting steam in 12 cm depth under TIF plastic using custom-made steam-graded spike hoses ( Supplementary Fig. 4e). Temperature was monitored in different depths using HOBO U12 Outdoor/industrial data logger (Onset Computer Corporation, Bourne, MA). 
The maximum temperatures reached in the steam-only treatment were 100°C in 12 cm depth; in the AITC + steam treatment, maximum temperatures of 66°C were measured (Extended Data Fig. 4d). Kernels were hand-planted 4 cm deep into slits in the plastic (6" spacing between slits, with two seeds 3" apart on opposite ends of each slit), randomized within 4 blocks per treatment and 7 sub-blocks per block. To reduce seed-borne microbial load while in the field, we soaked kernels in 3% hydrogen peroxide for 2 minutes and rinsed in sterile diH 2 O immediately prior to planting. Plants were monitored for emergence three times (5, 8, and 12 days after planting) and height was measured twice (15 and 27 days after planting). After 27 days of growth, plants from three of the treatments (chloropicrin, AITC+steam, and control) were uprooted and oven-dried for measurement of root and shoot biomass. For the biomass data, we applied two-way ANOVA to linear mixed-effects models with Genotype, Treatment, and their interaction as fixed predictor variables, and Block and Sub-block as random-intercept terms. For the height and leaf number data, we applied three-way repeated-measures ANOVA with Genotype, Treatment, Date, and all interactions as fixed predictors, and Plant, Block, and Sub-block as random-intercept terms. F -tests with Type III sums of squares were used for significance testing of fixed effects, and pairwise contrasts were performed using Tukey's post-hoc procedure. Likelihood ratio tests were used for significance testing of random effects. Fumigation effects on soil microbial community viability (Experiment 3). To assess the effects of the fumigation treatments on the field soil, we collected soil samples weekly beginning immediately before planting and ending 4 weeks (56 time points) after planting, when plants were harvested. Four soil cores per week were collected from each treatment to a depth of 25 cm; each core was divided into subsamples taken from depths 3-5 cm and 17-20 cm and kept on ice for 4-5 h, then stored at 4°C overnight and used for bacterial counts the following day. Soil suspensions were prepared by mixing 1 g fresh soil in 9 ml of 0.95% NaCl. The suspension was then homogenized with a micro homogenizer (OMNI International, Inc., Kennesaw, GA, USA) at 12000 rpm for one 60 s cycle. After homogenization, serial dilutions were prepared up to 10 -5 . Samples were plated on R2A 1/10 and VxylG media (35) using the 6x6 drop plate method (36) for dilutions 10 -2 to 10 -5 . Plates were incubated in the dark at 25°C for 3 weeks. Colony forming units (CFUs) were counted at 3, 7, 14, and 21 days; we determined that 14 days was the best time to count colonies and thus we used only that timepoint to calculate CFU per g soil. Fumigation effects on bacterial and fungal microbiomes (Experiment 3). We used high-throughput amplicon sequencing to assess how the on-farm fumigation methods affected the bacterial and fungal communities in the soil at large. Bulk soil samples were collected from the treated blocks at three timepoints: one, four, and six weeks after the treatments were applied. After the final measurements of plant phenotype, the roots of a representative subsample of plants in the control, chloropicrin, and AITC+steam treatments were harvested for microbiome quantification. 
DNA was extracted from soil and root samples using the DNeasy PowerSoil kit (QIAGEN, Inc., Hilden, Germany) and used as a template for PCR amplification of the V4 region of the bacterial 16S rRNA gene and the fungal ITS1, following established protocols (11). The resulting 16S-v4 and ITS1 amplicons were then sequenced in parallel on the Illumina MiSeq platform (V2 chemistry, 250-bp PE reads) to census the bacterial and fungal components of the microbiome, respectively. Established bioinformatic pipelines were used to quality-filter, denoise, and assign taxonomy to the raw sequence reads (11). Sequences that were derived from plants or that could not be identified at the kingdom level were discarded; samples with insufficient data (<500 bacterial reads or <500 fungal reads) were removed from the dataset. Finally, amplicon sequence variants (ASVs) that were not detected at least 10 times (for fungi) or 25 times (for bacteria) in at least 3 samples were removed from the dataset. The final bacterial dataset included 75 samples with a median of 30109 reads per sample, comprising 1307 ASVs. Sequencing depth was lower on average for fungi; as a result, the fungal dataset included 57 samples with a median of 6466 reads per sample, comprising 122 ASVs. To reduce stochastic variation due to differences in sequencing depth, we applied the variance-stabilizing transformation (37) to the resulting ASV counts; additionally, we calculated the standardized, log-transformed sequencing depth for each sample to use as a nuisance variable. To test whether fumigation treatments altered soil microbiome composition, we used permutational MANOVA to partition variance in the bacterial and fungal communities among several sources: sequencing depth, timepoint, treatment, and the interaction between timepoint and treatment. To test whether fumigation treatments were also detectable in plant-associated communities at the end of the experiment, we conducted another permutational MANOVA to partition root microbiome variation among sequencing depth, genotype, treatment, and the interaction between genotype and treatment.

Statistical tests for changes in strength of heterosis: For all experiments, we performed permutation tests to assess whether the change in strength of heterosis between sterile and nonsterile treatments was statistically significant. First, we used the estimated marginal means from the linear models described above to calculate the midparent heterosis (MPH) for each trait in each treatment: MPH = 100% × (F1 value - midparent value) / midparent value, where the midparent value is the average of the two parental inbred lines. Second, we calculated "ΔMPH" as the difference in MPH between nonsterile and sterile treatments. Positive values of ΔMPH indicate that heterosis was stronger in nonsterile conditions than in sterile conditions. Third, we re-calculated ΔMPH for 999 datasets that had been permuted with respect to microbial Treatment, creating a distribution of ΔMPH values that would be expected if Treatment had no effect on heterosis. Finally, we compared the observed ΔMPH to this distribution using a one-tailed test of the null hypothesis that heterosis is not stronger in nonsterile conditions.

Data and code availability

All raw data and original R code that support the findings of this study are freely available in a public repository (http://doi.org/10.5281/zenodo.4107065). Raw sequence reads are available in the NCBI SRA, BioProject #PRJNA669388. 
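As a concrete illustration of the MPH and ΔMPH permutation test described above, the sketch below re-expresses the logic in Python. It is not the authors' code (the published analysis is in R and uses estimated marginal means from the fitted models); the trait values here are invented placeholders, and simple group means stand in for estimated marginal means.

```python
# Illustrative sketch of the midparent-heterosis permutation test.
# Assumptions: invented biomass values; plain group means instead of the
# model-based estimated marginal means used in the paper.
import numpy as np

rng = np.random.default_rng(0)

def mph(f1, p1, p2):
    """Midparent heterosis (%) from mean trait values."""
    midparent = (np.mean(p1) + np.mean(p2)) / 2.0
    return 100.0 * (np.mean(f1) - midparent) / midparent

# Hypothetical root-biomass values (g) per genotype and microbial treatment.
data = {
    ("B73", "live"): np.array([1.0, 1.2, 0.9, 1.1]),
    ("Mo17", "live"): np.array([0.8, 0.9, 1.0, 0.7]),
    ("F1", "live"): np.array([1.6, 1.8, 1.7, 1.9]),
    ("B73", "sterile"): np.array([1.7, 1.6, 1.8, 1.5]),
    ("Mo17", "sterile"): np.array([1.5, 1.7, 1.6, 1.4]),
    ("F1", "sterile"): np.array([1.8, 1.7, 1.9, 1.6]),
}

def delta_mph(d):
    live = mph(d[("F1", "live")], d[("B73", "live")], d[("Mo17", "live")])
    sterile = mph(d[("F1", "sterile")], d[("B73", "sterile")], d[("Mo17", "sterile")])
    return live - sterile   # positive means heterosis is stronger in nonsterile conditions

observed = delta_mph(data)

# Permutation null: shuffle treatment labels within each genotype 999 times.
null = []
for _ in range(999):
    shuffled = {}
    for genotype in ("B73", "Mo17", "F1"):
        pooled = np.concatenate([data[(genotype, "live")], data[(genotype, "sterile")]])
        pooled = rng.permutation(pooled)
        n_live = len(data[(genotype, "live")])
        shuffled[(genotype, "live")] = pooled[:n_live]
        shuffled[(genotype, "sterile")] = pooled[n_live:]
    null.append(delta_mph(shuffled))

p_one_tailed = (1 + np.sum(np.array(null) >= observed)) / (1 + len(null))
print(f"observed delta-MPH = {observed:.1f} percentage points, one-tailed P = {p_one_tailed:.3f}")
```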
Plating of roots and root imprints on potato dextrose agar (PDA) resulted in ample microbial growth for plants grown in non-autoclaved clay; in contrast, no growth was seen two weeks after plating for plants grown in gnotobiotic conditions. (c) Surface sterilization of kernels substantially reduced, but did not fully eliminate, seed-borne microbial load. Plating on malt extract agar (MEA) led to visible microbial growth from~1 out of 9 surface-sterilized seeds (middle row). No growth was observed after plating the water from the final wash of seed surfaces (bottom row), indicating that this growth was most likely a seed endophyte that survived the surface sterilization. ; blue rectangles show the 95% CIs for these EMMs. The red arrows show the 95% CIs for pairwise tests between genotypes in each treatment after correction for the family-wise error rate using Tukey's procedure; non-overlapping arrows indicate statistically significant differences (alpha=0.05). Detailed statistical results are provided in Supplementary Table 2. N= 20 per inbred genotype per treatment, N= 15 per hybrid per treatment. *** P <0.001 ; ** P <0.01 ; * P <0.05 ; P <0.1 ; ns P >0.1 (Dunnett's test of contrasts between each inbred line and the hybrid). (c) The strength of midparent heterosis (MPH) was calculated in each treatment using the EMM trait values. The observed difference in MPH between untreated control and fumigation treatments (∆MPH) was compared to the distributions of ∆MPH for 999 permutations of the data with respect to treatment, i.e., the distribution of ∆MPH if there were no effect of treatment. P ∆MPH < 0.05 supports the alternate hypothesis that heterosis is stronger in untreated control soil than in fumigated soil. Supplementary Figure 5 | The density of viable bacteria (colony forming units g -1 soil) was measured at two depths in all treated and untreated field soils. Beginning two weeks after treatments were applied, four samples per treatment were taken weekly for the duration of the experiment. log(CFU g -1 ) was measured using two different microbial growth media (R2A and Vxyl G) and was modeled as a function of Treatment, Depth, Week (continuous variable), and all interactions. Slopes of log(CFU g -1 ) are shown for each treatment. Dunnett's procedure was used to contrast the slope for each treatment to that of the untreated control; only the AITC + Steam treatment differed from the control at P = 0.05, and only in shallow soil when using Vxyl G media. N = 4 per treatment per depth per media per week. height were measured for all treatments; shoot biomass (c) was measured for three treatments. Black points show the estimated marginal mean (EMM) trait values for each genotype in each treatment (values averaged over two timepoints); blue rectangles show the 95% CIs for these EMMs. The red arrows show the 95% CIs for pairwise tests between genotypes in each treatment after correction for the family-wise error rate using Tukey's procedure; non-overlapping arrows indicate statistically significant differences (alpha=0.05). Detailed statistical results are provided in Supplementary Table 4. Effects on root and shoot biomass are presented in Figure 3. Effects on germination are presented in Supplementary Figure 9. *** P <0.001 ; ** P <0.01 ; * P <0.05 ; P <0.1 ; ns P >0.1 (Dunnett's test of contrasts between each inbred line and the hybrid) (d) The strength of midparent heterosis (MPH) was calculated for each trait in each treatment using the EMM trait values. 
The observed difference in MPH between untreated control and fumigation treatments (∆MPH) was compared to the distributions of ∆MPH for 999 permutations of the data with respect to treatment, i.e., the distribution of ∆MPH if there were no effect of treatment. P ∆MPH < 0.05 supports the alternate hypothesis that heterosis is stronger in untreated control soil than in fumigated soil. Figure 9 | In Experiment 3, we grew maize in the field from seeds planted into untreated soil, soil fumigated with chloropicrin, or soil fumigated with AITC and/or steamed. Bar heights show the proportion of seeds that successfully germinated for each genotype in each treatment (columns) at each of three timepoints (rows; d.a.p. = days after planting). N = 112 per genotype per treatment. Statistical inference is from Fisher's Exact Test. *** P <0.001 ; ** P <0.01 ; * P <0.05 ; P <0.1 ; ns P >0.1 Supplementary Supplementary Table 1 | Analysis of variance (ANOVA) for linear models of fresh weight of (a) roots and (b) shoots in a gnotobiotic growth experiment, in which sterile maize seeds were inoculated with a synthetic community of seven bacterial strains or with a sterile buffer control (Fig. 1). F -tests with Type III sums of squares were used for significance testing. and (b) shoots in in a gnotobiotic growth experiment, in which sterile maize seeds were inoculated with a live soil slurry, an autoclaved aliquot of the same slurry, or a sterile buffer control (Fig. 2). F -tests with Type III sums of squares were used for significance testing of fixed effects, and likelihood ratio tests were used for significance testing of the Block random effect. The Kenward-Roger method was used to estimate denominator degrees of freedom.
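The biomass analyses described in the Methods and in the table legends above fit Genotype, Treatment and their interaction as fixed effects with Block as a random intercept. A hedged sketch of that model structure in Python is given below. It is an approximation, not the authors' analysis: the data file and column names are hypothetical, and statsmodels reports Wald tests rather than the Type III F-tests with Kenward-Roger denominator degrees of freedom and Tukey contrasts used in the paper (which an R workflow with lmerTest and emmeans would provide).

```python
# Hedged sketch of the mixed-model structure (Genotype x Treatment fixed
# effects, Block random intercept). The CSV file and column names are
# hypothetical placeholders; inference here is Wald-based, not Type III /
# Kenward-Roger as in the paper.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("experiment2_biomass.csv")   # hypothetical columns: genotype, treatment, block, root_fw

model = smf.mixedlm(
    "root_fw ~ C(genotype) * C(treatment)",   # fixed effects and their interaction
    data=df,
    groups=df["block"],                        # random intercept per block
)
fit = model.fit(reml=True)
print(fit.summary())
```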
Intra-observer reliability in three-dimensional kinematic analysis of sacroiliac joint mobility [Purpose] Physical therapists, osteopathic practitioners, and chiropractors often perform manual tests to evaluate sacroiliac joint (SIJ) mobility. However, the available evidence demonstrates an absence of reliability in these tests and in investigations with kinematic analysis. The aim of this study was to verify the three-dimensional kinematic reliability of SIJ movement measurements. [Subjects] This cross-sectional study analyzed 24 healthy males, aged between 18 and 35 years. [Methods] Three-dimensional kinematic analysis was performed to measure posterior superior iliac spine displacement and greater trochanter (femur) displacement during hip flexion movement in an orthostatic position. The distance variations were measured from a reference point in 3 blocks. Intra-observer reliability was assessed by comparing the means of the 3 blocks using the intraclass correlation coefficient (ICC) at a 99% confidence level. [Results] The measurements indicated a strong correlation among blocks: ICC = 0.94 for the right side SIJ and ICC = 0.91 for the left side SIJ. The mean displacement between the reference points was 7.7 mm on the right side and 8.5 mm on the left side. [Conclusion] Our results indicate that three-dimensional kinematic analysis can be used for SIJ mobility analyses. New studies should be performed for subjects with SIJ dysfunction to verify the effectiveness of this method.

INTRODUCTION

The sacroiliac joint (SIJ) plays an important role in the axial skeleton's load distribution to the lower limbs, because it is the transition point between the upper and lower body 1-3). The SIJ movement pattern is complex because its anatomical configuration allows displacement in 3 planes and axes in a combined manner 4). However, the amplitude of this movement is restricted to approximately 1 to 4° of rotation and 1 to 2 mm of translation 5,6). These values may vary with age, gender, and weight or during pregnancy, and this variation has been the subject of study for the last two decades. The SIJ significantly contributes to different motor patterns of the trunk and lower limbs, and some of these patterns are highly complex, such as gait. From a clinical point of view, the SIJ is a joint with considerable propensity for arthrokinematic motion alterations, and a small decrease in the range of motion (ROM) is likely to develop before important musculoskeletal dysfunctions occur 9,10), such as back pain, hip pain, and pain radiating to the legs and inguinal region 11-14). In the clinical setting, diagnosing disorders of the SIJ by physical examination, especially with regard to mobility, is difficult due to low levels of test reliability 1,9,15). However, studies have highlighted the need for a combination of 3 or more provocative tests to confirm sacroiliac dysfunction 1,10,13,16,17). Szadek et al. 13) claimed that the Gaenslen and thigh thrust tests individually are more reliable in the detection of sacroiliac dysfunction; however, several provocative tests should be performed to obtain a more accurate diagnosis. In such cases, the degree of joint mobility is neglected, and only the presence of SIJ pain is considered. Today, blockade by intra-articular injection of anesthetic is the gold standard method for the differential diagnosis of sacroiliac dysfunction from the symptomatological point of view 1,7,14,15,17). 
However, substantial evidence is lacking regarding viable alternatives for the evaluation and quantification of SIJ mobility, especially alternatives applicable in clinical practice. Among the existing experimental models for SIJ movement analysis, the most reliable method for the evaluation of mobility is radiostereometry guided by fluoroscopy with contrast administration 1,6). However, this is an invasive method, the findings can be difficult to interpret, and it is very expensive. There is no noninvasive gold standard mobility test for the SIJ. As with the provocative tests, positional and mobility tests have been the subject of investigation, and the empirical evidence suggests poor reliability 7,13). Among the tests used for SIJ mobility assessment, the most widely used in clinical practice is the Gillet test. However, this test does not have sufficient reliability to be accepted as a good evaluation parameter 13,16,17). Based on this information, the present study aimed to determine the reliability of three-dimensional kinematics during hip flexion in an orthostatic position as a quantitative method of evaluating sacroiliac mobility.

SUBJECTS AND METHODS

This cross-sectional study analyzed 24 males between the ages of 18 and 25 years in the Laboratory of Human Movement Analysis at Augusto Motta University Center (LAMH/UNISUAM). The inclusion criteria were as follows: no history of spine or lower limb surgery, asymptomatic, no central or peripheral nervous system motor impairment, and a body mass index (BMI) between 18.10 and 24.90 kg/m2. We excluded subjects with a real lower limb length discrepancy of more than 1 cm (confirmed by scanometry), subjects presenting with pain during the 6 months before the trial, and subjects with allergic reactions to tape (used for the marker points). Subjects who did not complete the tests due to pain during the experiment were excluded from the study. The subjects were invited to participate in this study, and after accepting, they signed a consent form. The work was approved by the Ethics and Research Committee of UNISUAM (no. / 2012). A three-dimensional kinematic analysis system (Qualisys motion capture system, Qualisys AB, Gothenburg, Sweden) was used to analyze the movements of the subjects. The apparatus was composed of three infrared cameras arranged in a semicircle to record the movements of the reflective markers attached to the anatomical points of interest. The equipment was calibrated according to the manual on each day of sample collection, and the sampling frequency was 120 Hz. The subjects were asked to wear only a pair of Lycra shorts. In the first stage, we measured the anthropometric data and performed five provocative manual tests (thigh thrust, Gaenslen's, spring, Patrick, and sacral thrust tests). The goal of the first stage was to exclude subjects with SIJ dysfunction (i.e., subjects with three or more positive test results were excluded). Next, reflective markers were fixed on the following anatomical points: the posterior superior iliac spines and the greater trochanters and epicondyles of the femur. The subjects then stood in a bipedal standing position against a support bar and performed 3 hip and knee flexion movements with each lower limb, and the movement was recorded by the Qualisys system. This procedure was repeated 3 times. 
The primary outcome measure was the displacement distance of the posterior superior iliac spine in relation to the contralateral greater trochanter during active hip flexion to approximately 90° in the standing position. The statistical analysis was performed using SPSS 20.0 for Windows®. The characteristics and sociodemographic data of the subjects were tested for normality using the Kolmogorov-Smirnov test and are presented as averages (X) and standard deviations (SD). The intraclass correlation coefficient (ICC) was used to compare the values between the blocks of sacroiliac mobility tests for the right and left limb with a confidence interval of 99% (p < 0.01).

RESULTS

The sample consisted of 24 individuals, and no subjects were excluded due to SIJ dysfunction (Table 1). Thus, the goal established by the sample calculation was achieved with a confidence level of 95%, a testing power of 80%, and a margin of error of 20%. Table 2 presents the means, medians, and standard deviations of the measurements and the ICC for intra-observer reliability. The averages of the 3 blocks were compared for each hemi-body separately. The results demonstrated a strong correlation between the blocks (ICC = 0.94 for the right SIJ and ICC = 0.91 for the left SIJ). Both assessments were significant (p < 0.01).

DISCUSSION

The assessment of SIJ mobility has great relevance in the clinical setting because SIJ mobility determines the appropriate therapeutic approach 1,9,15). The difficulties in detecting biomechanical alterations in the SIJ during a physical examination are obvious, and manual tests have insufficient reliability for this purpose 10,13,15-17,19). The literature indicates that the main limitation of these tests is the examiner's inexperience because the evaluated movements are extremely subtle 8,13,14). In this study, we found strong correlation coefficients for repeated measures of sacroiliac mobility. Three-dimensional kinematic analysis was implemented as a tool to refine the measurement of motion, replacing the palpation used in manual testing 10,13,14,20). Our results agree with the literature regarding the use of laboratory equipment as a possible alternative for increasing measurement accuracy. The current devices have sufficient precision to measure displacements of only a few millimeters 20,21). Kinematic analysis is a method with established validity and reliability in the literature for the evaluation of several human movement patterns 22-24). However, it has not been commonly used to measure SIJ movements. Webster, Wittwer, and Feller 21) compared different three-dimensional motion analysis systems and found excellent coefficients of repeatability and excellent levels of intra-examiner agreement (ICCs ranged from 0.92 to 0.99). Bussey et al. 25) tested a kinematic analysis device that performs magnetic tracking of surface markers and reported results very similar to ours. However, Bussey et al. 25) evaluated another movement pattern of the lower limbs because their purpose was to detect sacroiliac mobility differences between males and females. The studies performed by Ahia et al. 26,27) aimed to determine the validity of the process of setting markers at anatomical reference points because this step is crucial for obtaining the data. 
Strong correlation values were observed in the ICCs (≥ 0.90) in studies with good methodological quality according to the inclusion criteria for systematic reviews established by the Quality Assessment of Diagnostic Accuracy Studies (QUADAS). One of the greatest advantages of using video kinematics for the evaluation of movements (which were previously tested only by palpation) is the potential to extract reliable quantitative data on the range of motion, which cannot be achieved through the execution of conventional manual tests, which provide only qualitative data 28,29). Studies conducted on cadavers with radiographic analysis systems and contrast administration report that the mobility of the SIJ varies from 1 to 4° of rotation and 1 to 2 mm of translation 5,6,8). These results differ from the findings obtained in this study, where the average displacement value was 8 mm. However, the values presented by the previous authors refer to the real mobility of the SIJ, whereas in the present work the distance variation between the posterior superior iliac spine (PSIS) and the greater trochanter of the contralateral limb was measured as an indirect estimate of SIJ mobility. To establish an accurate means of evaluation that is feasible in physical therapy practice 30-33), our experiment mimicked the motor patterns already commonly used in evaluations through manual testing. Thus, the manual test findings can be enhanced without challenging the clinical reasoning and interpretation inherent to the investigative essence of the test. Observation of the movement relationships between the chosen anatomical reference points not only provides information on the core mobility of the SIJ but also clarifies issues related to the biomechanical behavior of the entire lumbo-pelvic complex. Therefore, we analyzed the magnitude of the displacement between the reference points and did not analyze the individual structures through vector decomposition of their motion. One of the limitations of the present study is that there is no established gold standard methodology for the noninvasive assessment of SIJ mobility 18). Thus, it was impossible for us to compare our findings with data from other studies, which would establish values related to the reliability of the method. Second, we did not investigate inter-examiner reliability. Therefore, further studies using the same methodology of analysis are needed to gain a better understanding of the proposed method, especially when applied to symptomatic individuals. We conclude that three-dimensional kinematic analysis is a good tool for the estimation of SIJ mobility. According to our data, the measurements demonstrated a strong correlation among the blocks of estimated SIJ range of motion, which confirms the method's intra-observer reliability. However, new studies must also be performed, especially for subjects with SIJ dysfunction. Thus, it may be possible to consolidate the evaluation of SIJ mobility through three-dimensional kinematic analysis.
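For readers who wish to reproduce the block-to-block reliability analysis described above, the following is a minimal Python sketch of one common ICC form. It is an assumption-laden illustration: the paper does not state which ICC model was selected in SPSS, so a two-way mixed, single-measure, consistency ICC (often written ICC(3,1)) is used here, and the displacement values are invented.

```python
# Minimal sketch of an intraclass correlation for intra-observer (block-to-block)
# reliability. Assumption: ICC(3,1), two-way mixed, single measure, consistency;
# the paper does not specify the SPSS ICC configuration. Values are invented.
import numpy as np

def icc_3_1(x):
    """x: (n_subjects, k_blocks) matrix of measurements."""
    n, k = x.shape
    grand = x.mean()
    ss_total = ((x - grand) ** 2).sum()
    ss_subjects = k * ((x.mean(axis=1) - grand) ** 2).sum()
    ss_blocks = n * ((x.mean(axis=0) - grand) ** 2).sum()
    ss_error = ss_total - ss_subjects - ss_blocks
    ms_subjects = ss_subjects / (n - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))
    return (ms_subjects - ms_error) / (ms_subjects + (k - 1) * ms_error)

# Hypothetical PSIS-to-contralateral-greater-trochanter displacement ranges (mm)
# for 6 subjects measured in 3 repeated blocks.
right_sij = np.array([
    [7.2, 7.5, 7.1],
    [8.1, 8.3, 8.0],
    [6.9, 7.0, 7.2],
    [9.0, 8.8, 9.1],
    [7.8, 7.6, 7.9],
    [8.4, 8.6, 8.3],
])
print(f"ICC(3,1) = {icc_3_1(right_sij):.2f}")
```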
The role of male contest competition over mates in speciation Research on the role of sexual selection in the speciation process largely focuses on the diversifying role of mate choice. In particular, much attention has been drawn to the fact that population divergence in mate choice and in the male traits subject to choice directly can lead to assortative mating. However, male contest competition over mates also constitutes an important mechanism of sexual selection. We review recent empirical studies and argue that sexual selection through male contest competition can affect speciation in ways other than mate choice. For example, biases in aggression towards similar competitors can lead to disruptive and negative frequency-dependent selection on the traits used in contest competition in a similar way as competition for other types of limited resources. Moreover, male contest abilities often trade-off against other abilities such as parasite resistance, protection against predators and general stress tolerance. Populations experiencing different ecological conditions should therefore quickly diverge non-randomly in a number of traits including male contest abilities. In resource based breeding systems, a feedback loop between competitive ability and habitat use may lead to further population divergence. We discuss how population divergence in traits used in male contest competition can lead to the build up of reproductive isolation through a number of different pathways. Our main conclusion is that the role of male contest competition in speciation remains largely scientifically unexplored [Current Zoology 58 (3): 493–509, 2012]. Introduction A major contemporary goal in speciation research is to investigate the mechanisms leading to population divergence and reproductive isolation (Coyne and Orr, 2004;Dieckmann et al., 2004;Price, 2007).However, one mechanism of selection, i.e. sexual selection through male contest competition, which can lead to fast evolutionary changes, has received surprisingly little attention in the context of speciation.The aims of this article are to review studies on the role of male contest competition in speciation and to pinpoint some major unexplored directions for future research. 
Natural selection leads to the evolution of traits that enhance their bearers' survival chances and reproductive output in their natural environment.Sexual selection also leads to the evolution of traits enhancing their bearers' reproductive output but in this case through access to mates, which can be viewed as a limiting resource for the sex with the highest potential reproductive rate (Andersson, 1994).Due to their higher investment in gamete size and therefore in offspring quality (sometimes further increased by pregnancy and maternal care), females often have a lower potential repro-ductive rate than males.This means that access to mates generally does not limit the reproductive rate of females, while the opposite is true for males (Trivers, 1972).For simplicity we will therefore focus our review on male contest competition over mates rather than on female contest competition over mates.However, some of the matters discussed in this article can also apply to female contest competition (see e.g.van Doorn et al., 2004) Both natural and sexual selection are known as important mechanisms underlying the process of speciation but natural selection has received by far the most scientific attention.Speciation driven by divergent natural selection, often referred to as ecological speciation (e.g.Schluter, 2000;Rundle and Nosil, 2005), has been identified as the underlying force driving whole adaptive radiations whereby a single ancestral species splits into a large number of new daughter species.Adaptive radiations have been documented in relation to entry into novel environments, for examples cichlids in volcano lakes (Elmer et al., 2010) and finches exposed to climate change (Grant and Grant, 1993), or in relation to the evolution of key innovations (i.e.evolution of traits that allow the use of novel resources, Berenbaum, 1983) whereby the ancestral species splits into daughter species that occupy a large number of previous unexplored niches.Strong competition over limiting natural resources can also by itself lead to splitting of populations.This is because individuals that are able to utilize resources that are not used by most individuals in the population are favored by selection, i.e. there will be disruptive selection (Dieckmann and Doebeli, 1999). 
Sexual selection has also been acknowledged as an important evolutionary force driving speciation (West-Eberhard, 1983;Price, 1998;Panhuis et al., 2001;Edwards et al., 2005;Ritchie, 2007;Kraaijeveld et al., 2011).There are at least four reasons to assume that sexual selection plays an important role in promoting genetic divergence between populations and the build up of reproductive isolation between them.First, competition over mates can be very intense (Andersson, 1994) and can lead to fast divergence in sexually selected traits as compared to naturally selected traits (but see e.g.Svensson and Gosden, 2007).Second, closely related species often differ more in sexually selected traits than in other phenotypic traits, which is expected if these traits are subject to strong selection within species.In addition, divergence in sexually selected traits may have been further reinforced through selection against interbreeding between closely related species (Dobzhansky, 1940;Coyne and Orr, 1989;Howard, 1993;Butlin, 1995;Hostert, 1997;Noor, 1999;Ortiz-Barrientos et al., 2009).Third, sexually selected traits display extreme variation across animal taxa (West-Eberhard, 1983;Price, 1998;Panhuis et al., 2001;Edwards et al., 2005;Ritchie, 2007;Kraaijeveld et al., 2011) probably as a consequence of sexual selection operating in more arbitrary directions than natural selection.Fourth, because females often base their choice of mate on sexually selected traits, population divergence in such traits may directly lead to assortative mating (e.g.Lande, 1981;Higashi et al., 1999).This latter argument is the main reason why sexual selection is often acknowledged as an important component preventing homogenizing gene flow between diverging populations when speciation occurs in sympatry or at secondary contact. Sexual selection through mate choice is unlikely to by itself lead to speciation (reviewed in Ritchie, 2007).Since sexual selection through mate choice is often unidirectional instead of bidirectional, the initiation and maintenance of a disruptive selection regime is rather unlikely.Many male display traits are conditiondependent and signal their bearers' ability to provide their mates with good genes or resources needed for reproduction (Andersson, 1994).Environment specific costs of mate choice may lead to different levels of sexual selection and divergence in male traits in allopatric populations (Maan and Seehausen, 2011), but would not ensure assortative mating at secondary contact.For example, if two populations diverge in male body size because female choice in favor of large body size acts as a weaker selection pressure in one population, males belonging to the population with large individuals would nevertheless exert a supernormal sexual stimulus for females from the other population at secondary contact (see Labonne and Hendry, 2010 for a similar case concerning color in guppies).However, Schluter and Price (1993) argued that female choice could diverge between allopatric populations when several male traits indicate the same abilities but the perception of these different traits varies across environments.Another recent model launched the idea that if only locally adapted males can develop large condition-dependent ornaments female discrimination against immigrant males would arise (van Doorn et al., 2009).Hence, female choice based on indicator traits could generate divergence in male traits or facilitate assorative mating under certain conditions. 
The initiation of a disruptive selection regime appears more likely when based on arbitrary Fisherian runaway processes. Given sufficient initial genetic variation in female mate preferences, several Fisherian runaway processes may even operate within the same sympatric population (Higashi et al., 1999). In combination with elements of female-female competition and male-male competition, Fisherian processes could cause disruptive frequency-dependent selection (van Doorn et al., 2004). Hence, sympatric speciation through Fisherian sexual selection is theoretically possible but in practice rather unlikely, because it requires very specific conditions (van Doorn et al., 2004). When speciation, at least partly, occurs with gene flow, there also need to be mechanisms that prevent the breakdown (due to recombination) of associations between the traits causing selection against hybrids and the traits involved in pre-mating isolation (in this case between male display traits and female choice; Kirkpatrick and Ravigné, 2002; Gavrilets, 2004; Servedio et al., 2011). Thus, sexual selection through female mate choice is recognized as an important evolutionary force in the context of speciation, but there are several reasons for assuming that this force leads to speciation only under a limited range of conditions.

Sexual selection is driven not only through mate choice. Male contest competition over access to females, or over resources needed to attract females (i.e. breeding territories), is recognized as an important mechanism of sexual selection (Andersson, 1994). Male contest competition favors the evolution of a wide range of traits, including weapons such as horns and spurs that are directly used in combat, sexual size dimorphism, and conspicuous body coloration and calls that signal fighting ability. Many sexually selected traits have a dual function, meaning that they are used both in male contest competition and for female mate choice (Berglund et al., 1996; Wong and Candolin, 2005). In many mating systems, male contest competition overrides the effect of female choice, and these two mechanisms of sexual selection can work in different or opposing directions (Qvarnström and Forsgren, 1998; Candolin, 2004; Hunt et al., 2009). Intensive male contest competition may even lead to the evolution of traits that increase male mating success but reduce female fitness, which in turn may trigger an evolutionary arms race between the two sexes whereby females evolve counter-adaptations (Arnqvist and Rowe, 2002). Thus, in a particular population, male contest competition over access to mates could be: a) the main mechanism of sexual selection, b) a mechanism that sets the stage for which traits function as a target of female choice, or c) a mechanism that triggers a sexually antagonistic co-evolutionary arms race. Still, the diversifying role of male contest competition has largely been ignored (but see e.g. Seehausen and Schluter, 2004). The underlying reason for this neglect is probably not an assumption that male contest competition is a less powerful mechanism of sexual selection than female mate choice. The strong focus on female choice is rather a consequence of its intuitive role in causing assortative mating between diverging populations. Can male contest competition promote speciation while female choice for the same sexually selected character is absent or remains unchanged? In this review we will argue that the answer to this question is yes.
We will focus our discussion on four main aspects of male contest competition and speciation:

1. Male contest competition and the evolution of novel traits.
2. The link between ecology and male contest competition.
3. Male contest competition and character displacement between populations.
4. Male contest competition and reproductive isolation.

We conclude that sexual selection through male contest competition can play an important role in speciation. Based on this and other conclusions (see below), we would like to encourage more studies focusing on male contest competition as a diversifying force and on the interaction between different mechanisms of sexual and natural selection in the process of speciation, rather than aiming at isolating their relative importance.

Negative Frequency-dependent Selection and the Evolution of Novel Traits

Speciation through sexual selection requires that at least one of the two splitting populations evolves a sexually selected trait that differs from the ancestral state. Many color patterns are largely affected by a relatively small number of genes (Mallet, 1989; Grant and Grant, 1997; Hoekstra et al., 2006). Mutations in these genes may frequently provide new phenotypic variation in coloration and hence potential new targets for sexual selection (Price, 2002; Hofreiter and Schöneberg, 2010; Manceau et al., 2011). Female choice may favor such new color patterns due to preexisting biases in female sensory systems (Kirkpatrick, 1987) or through learning (Qvarnström et al., 2004). However, even if sensory biases or learning can result in female mate choice in favor of novel traits, these mechanisms can hardly maintain stable polymorphism. Stable trait polymorphism requires negative frequency-dependent selection, i.e. sexual selection favoring rare male phenotypes. There are some exceptional reported cases of female preferences for rare male phenotypes, e.g. in guppies (Farr, 1977), but rare phenotypes often suffer decreased mating success, meaning that mate choice instead generates stabilizing sexual selection (Kirkpatrick and Nuismer, 2004). By contrast, male contest competition has been suggested to fulfill this requirement more often, as males generally bias their aggression towards rivals that are most similar to themselves, since such males are most likely to share important resources (van Doorn et al., 2004; Mikami et al., 2004; Seehausen and Schluter, 2004; see Fig. 1). This means that males with a rare new trait should receive less aggression from other males and thereby experience an advantage when establishing breeding territories. They are also less likely to become injured or to waste their time fighting with other males instead of courting females (Huntingford and Turner, 1987). Thus, a bias in male aggression towards similar competitors has the potential both to facilitate the invasion of a new trait into the population and to promote stable coexistence of different phenotypes.
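The logic of this rare-morph advantage can be made concrete with a toy model. The following simulation is our illustrative sketch, not part of the original article: two color morphs compete, and each morph's fitness is reduced in proportion to the frequency of same-morph rivals, reflecting aggression biased towards similar competitors. All parameter values are hypothetical.

```python
import numpy as np

def simulate_nfds(p0=0.05, baseline=1.0, aggression_cost=0.6, generations=200):
    """Two-morph model with negative frequency-dependent selection:
    each morph pays a fitness cost proportional to the frequency of
    same-morph rivals, because aggression is biased towards similar
    competitors."""
    p = p0  # frequency of the initially rare (say, red) morph
    trajectory = [p]
    for _ in range(generations):
        w_red = baseline - aggression_cost * p          # red males fight mostly red rivals
        w_blue = baseline - aggression_cost * (1 - p)   # blue males fight mostly blue rivals
        p = p * w_red / (p * w_red + (1 - p) * w_blue)  # replicator-style update
        trajectory.append(p)
    return np.array(trajectory)

freqs = simulate_nfds()
print(f"initial red-morph frequency: {freqs[0]:.2f}, final: {freqs[-1]:.2f}")
```

Under these assumptions the rare morph invades and the two morphs settle at equal frequencies, a stable polymorphism maintained purely by frequency-dependent aggression, which mirrors the verbal argument above.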
Pioneering empirical work on the diversifying role of male contest competition through negative frequency-dependent sexual selection has been done on African cichlids. The high diversity of haplochromine cichlids in Lake Victoria appears to originate from only a few ancestors less than 200,000 years ago (Nagl et al., 2000). Although these young cichlid species have diverged ecologically (Bouton et al., 1997; Seehausen and Bouton, 1997), the striking differences in male nuptial coloration have led to the conclusion that sexual selection has played a central role in this speciation process. Sibling species within genera, in fact, tend to be rather similar ecologically but strongly different in male nuptial coloration (Seehausen and Schluter, 2004). While sexual selection through female choice has been given much attention (e.g. Dominey, 1984; Seehausen and van Alphen, 1999; Knight and Turner, 2004), the role of male aggressive interactions for access to mates and/or resources necessary to attract mates was suggested more recently. Seehausen and Schluter (2004) examined the distribution of coloration in rock-dwelling haplochromine cichlids and found that the occurrence of species in habitat patches was negatively correlated with that of closely related species of similar color. Similar patterns of non-random distribution of males depending on their color pattern have been found in Lake Malawi (Young et al., 2009). Dijkstra et al. (2007) carried out aggression tests in response to blue and red phenotypes of the cichlid species complex Pundamilia (Fig. 2). The male cichlid fish used in these experiments had been caught from three different types of wild populations. During the experimental trials, the wild-caught fish were placed in a test aquarium and allowed to establish a territory. Territorial intrusions were then simulated by placing two transparent watertight tubes, containing one red and one blue intruder, in the aquarium together with the focal fish. The outcome of these trials differed depending on where the territorial fish had been caught in nature. As expected, blue male Pundamilia cichlids from populations with either a continuous color-morph distribution or a lack of intermediate forms displayed more aggressive behavior towards intruders belonging to their own color morph than towards the other intruder. By contrast, blue male cichlids from a population with intermediate forms but without a continuous distribution of color morphs biased their aggression towards red intruders. Why did blue male cichlids from this latter location not also bias their aggression towards males of their own color morph? The authors suggested a possible explanation. Red male cichlids in general behave more aggressively than blue ones, and blue male cichlids may therefore have learnt from previous experiences of interacting and competing with such aggressive red males. This explanation is based on the assumption that blue males from the three different populations had experienced different levels of previous competition with red color morphs, depending on where they had been caught in the wild. Dijkstra et al. (2007) concluded that their results suggest that a new red morph should be able to invade a blue population but that stable coexistence of phenotypes would require additional mechanisms. Hence, social dominance effects and learning seem to play an important role in haplochromine cichlid fish (Dijkstra et al., 2007; Dijkstra and Groothuis, 2011), implying that the frequency-dependent selection
caused by male contest competition is often asymmetric between sibling species. Thus, two important take-home messages are that there may be intrinsic differences in aggressiveness between color morphs and that individuals may adjust their level of aggression towards particular color morphs depending on previous experience.

Fig. 2 Study systems where male aggression has been suggested to be involved in population divergence and speciation. Top left: pied flycatcher males. Left, a black-and-white male; right, a grey/brown male, a plumage type that is more common and pronounced in sympatry with the closely related collared flycatcher, likely due to reduced aggressive interactions with collared flycatchers (photo credit: Javier Blasco Zumeta). Top right: red and blue morphs of Lake Victoria Pundamilia cichlid fishes; color differences, and differences in local species and color-morph composition, affect male contest competition (photo credit: Peter Dijkstra). Bottom left: damselflies in the genus Calopteryx showing variation in the wing spots involved in male expression of aggression (photo credit: Erik Svensson). Bottom right: strawberry poison frogs from two populations with extremely different color morphs; coloration co-varies with male-male aggression, which could be important both as a possible driver of phenotypic divergence and through effects on reproductive isolation.

Experimental work on birds has yielded similar results, i.e. asymmetric color effects on winning contest competition. Recently, Pearce et al. (2011) investigated the aggressive response to simulated territory intrusions in two competing species of birds. Male Gouldian finches Erythrura gouldiae were found to be more aggressive towards conspecific intruders than towards heterospecific intruders, and the red head-color morphs were more aggressive than the black morphs. By contrast, long-tailed finches Poephila acuticauda were more aggressive towards Gouldian finches than towards conspecific models. An intrinsic difference in aggression between the two species is therefore a likely explanation for why long-tailed finches have an advantage in competition over the limited nest sites needed to attract females (Brazill-Boast et al., 2011).

The links between sexually selected traits, male aggressive behavior and competitive ability become even more obvious when the sexually selected trait itself directly affects fighting ability and/or is used as a weapon, e.g. body size or horns and spurs. Should we then expect male contest competition to work in a unidirectional way, with the most aggressive males always gaining access to most of the females? Intensive interactions between males in dense populations may cause disruptive selection and favor males with alternative reproductive tactics, e.g. a competitive versus a sneaker tactic (Fig. 1).
Such different tactics are generally associated with differences in whole suites of male traits related to fighting ability, including behavioral, morphological and physiological traits (Gross, 1996). A striking example is dung beetles that dig burrows underneath animal excrement, in which they bury portions of the dung with eggs attached to them. The young feed on the dung after emerging from the egg. How much dung the parents have put together for a male offspring determines whether he will adopt a competitive or a sneaking tactic when reaching sexual maturity. This is because the size of the dung ball determines the adult size of the beetle, and only males that have reached a certain threshold size will develop horns (Moczek and Emlen, 1999; Hunt and Simmons, 2000). The horns are used in contest competition with other males, and large horn size is associated with increased reproductive success. Hornless males seek matings with females inside tunnels while avoiding aggressive interactions with guarding males (Moczek and Emlen, 2000). The two reproductive tactics used by male dung beetles reflect phenotypic plasticity but may represent the first step towards genetically determined polymorphism. There are several reported cases of genetically determined polymorphism (Zimmerer and Kallman, 1989; Shuster and Wade, 1991; Lank et al., 1995; Sinervo and Lively, 1996; Tuttle, 2003; Hurtado-Gonzales and Uy, 2009), which may in turn represent a first step towards speciation.

In conclusion, the role of male contest competition in promoting and maintaining variation within and between populations remains a modestly explored scientific area. Since aggression from competitors is considered to be one major cost associated with having many competitors of similar fighting ability, male contest competition may result in disruptive selection favoring males with extreme strategies (often a dominant versus a sneaking strategy) and in negative frequency-dependent selection on discrete morphs. There are many possible directions for future research, but we would like to pinpoint three questions that we find particularly interesting. While it is intuitive that sexually selected traits directly used in combat (e.g. horns) are associated with dominant strategies, a remaining question is to what extent there are pre-existing physiological biases making certain colors more likely to become associated with dominant strategies (e.g. red may be perceived as more threatening than blue). Another unexplored area of research is the possible role of learning from previous interactions with competitors in reinforcing asymmetries in aggression and/or in the outcome of contests. Finally, more research is needed on the costs and benefits associated with different strategies (including differences in the whole suites of male traits that influence fighting or sneaking ability) and on how these depend on the social and ecological environment.
The Ecological Context

Stable coexistence between species is facilitated by differences in niche use. A relevant question then becomes: how may divergence in sexually selected traits used in male contest competition influence niche use, and vice versa? When male contest competition causes negative frequency-dependent selection, it will facilitate the maintenance of polymorphism with little divergence in niche use. Males with more divergent sexually selected traits will be able to use more similar niches under sympatric conditions (see the cichlid example above).

When divergence in traits used in male contest competition is associated with differences in aggression and/or fighting ability, this will also influence access to other resources. Aggressive interspecific interference has, for example, been shown to drive divergence in habitat use by two species of land snails, Euhadra quaesita and E. peliomphala. Members of the less aggressive species, E. quaesita, are more terrestrial when they live in allopatry than when they live in sympatry with E. peliomphala (Kimura and Chiba, 2010). In general, when males fight over resources necessary to attract females, the outcome of such contests often determines in what type of environment their offspring are raised. Habitat imprinting can then promote the association between competitive strategy and habitat use across generations (see e.g. Vallin and Qvarnström, 2011).

An association between divergence in sexually selected traits used in male contest competition and divergence in niche use is also expected because the costs and benefits of using a dominant versus a less competitive strategy should vary across ecological contexts. To understand population divergence in male traits used in contest competition over access to mates, a wide range of costs associated with possessing such traits needs to be taken into account. Individuals in allopatric populations most likely experience different selective regimes due to differences in the ecological and social environment, which in turn result in differences in the adaptive optima of traits used in contest competition (Fig. 3). It has been suggested that the level of sexual competition should be higher in stable and resource-rich environments, while harsh environments limit sexual selection, which seems intuitive given the high cost of contest behavior (Briffa and Sneddon, 2007; Chellappa and Huntingford, 1989). However, under stressful conditions, where life expectancy is lower, an evolutionary switch to higher allocation of resources to reproduction is also known to occur (Birkhead et al., 1999; Polak and Starmer, 1998). Thus, it may be hard to predict the direction of the evolutionary response of traits involved in male contest competition in stressful environments.
One important stress factor that often varies considerably across both time and space is parasites. Aggressive behavior is known to be associated with high testosterone levels (e.g. Martinez-Sanchis et al., 2003), and a major physiological cost of high testosterone levels is impaired immune function. Several elegant studies have been performed in specific populations. Testosterone implants have, for example, been shown to increase parasite infection in red grouse (Mougeot et al., 2004, 2005). Elevated levels of testosterone may also result in higher exposure to and transmission of parasites through changes in behavior (Grear et al., 2009). Thus, aggressive behavior in itself may lead to higher exposure to parasites. In any case, the main prediction would be that high levels of parasites in the environment should select for reduced levels of aggression.

Another commonly discussed negative effect of secondary sexual traits, which is also likely to differ between environments and populations, is an increased risk of predation. Engagement in intrasexual contests has been shown to increase predation risk (e.g. Jakobsson et al., 1995; Dunn et al., 2004), and thus predation is assumed to limit both the degree of intrasexual contest competition and the degree of intersexual signaling. The relationship between predation pressure and aggressive behavior has recently been studied in strawberry poison frogs Dendrobates pumilio. Populations of this species in the Archipelago of Bocas del Toro in northwestern Panama have gained special attention because of their great divergence in body color (Summers et al., 2003; Siddiqi et al., 2004) and because the coloration of this species is known to be involved in predator interactions (Summers and Clough, 2001; Saporito et al., 2007). Populations range from cryptic to highly conspicuous (Wang and Shaffer, 2008; Fig. 2). Strawberry poison frogs show strong territoriality with high levels of aggression (Pröhl, 2005), but the main focus of studies on interactions between natural and sexual selection in this species has been female mate choice (Summers et al., 1999; Reynolds and Fitzpatrick, 2007; Maan and Cummings, 2008). Based on the hypothesis that coloration differences in D. pumilio reflect predation-avoidance strategies, Rudh et al. (2011) argued that aposematic individuals should be relieved of the behavioral constraints that cryptic individuals suffer, and that the evolution of aposematic coloration could therefore be facilitated through the joint action of natural and sexual selection. The authors suggest that with the loss of aposematic coloration in the cryptically colored populations, the predation constraint on aggressive behavior is restored, which lowers the levels of behaviors that increase exposure to predators. This idea was supported by the finding that males in more conspicuous populations of D.
pumilio show higher levels of exposure while calling (Rudh et al., 2011). Male calling in this species serves both as an intersexual signal to attract females and as an intrasexual aggressive signal to defend territories. In a study by Crothers et al. (2011), males from an island population with aposematic red individuals were tested for intraspecific aggression. Using two-way experimental setups, the authors found that not only did more brightly colored males act more aggressively in general, but a higher level of aggression was also directed towards males with artificially increased brightness. These findings suggest that even small differences in color stimuli may have significant effects on male aggression levels. Rudh et al. (manuscript) compared male-male aggression in several populations with either cryptic or conspicuous individuals. Their results showed higher levels of aggression in individuals from conspicuous populations than in individuals from cryptic populations.

To conclude, there are many reasons why adaptation to different environments should commonly coincide with some divergence in male contest-competition ability, and vice versa. Males using dominant strategies often behave more aggressively, have higher testosterone levels and are in many cases larger than males using a sneaker strategy (Gross, 1996). The costs and benefits associated with these various traits are known to vary across ecological contexts. Although costs associated with male contest competition are documented in the literature, surprisingly little research has been performed on how such costs influence population divergence and the build-up of reproductive isolation. Examining how divergence in traits used in male contest competition influences the build-up of reproductive isolation therefore provides a major challenge for future studies.

Male Contest Competition and Character Displacement

The importance of disruptive selection caused by competition over limiting resources (Dieckmann and Doebeli, 1999) is not restricted to speciation under constant sympatric conditions. Since the development of complete reproductive isolation is often a slow evolutionary process, many populations that start diverging in allopatry spend at least some time in secondary contact zones before they have developed complete reproductive isolation. Harmful interspecific interactions such as competition over mates and/or resources can be reduced by divergence in resource use or in reproductive phenotypes, i.e.
character displacement (Brown and Wilson, 1956). Character displacement is often further divided into reproductive character displacement (displacement of traits associated with reproduction) and ecological character displacement (displacement of traits associated with resource use). As character displacement favors the evolution of novel resource use or reproductive traits, it can both initiate and finalize the speciation process and is potentially a leading cause of adaptive diversification (Pfennig and Pfennig, 2009; Schluter, 2000). Displacement of reproductive characters through heterospecific competition between males is likely when: a) the level of pre-mating isolation is low and males belonging to the two different populations compete over females; b) females of both populations use the same types of resources for their reproduction, which males in turn compete over; or c) there is misplaced aggression towards heterospecific males that resemble conspecific males and are therefore perceived as competitors (i.e. reproductive interference; Gröning and Hochkirch, 2008). Character displacement at secondary contact is facilitated if the characters involved have already diverged to some extent between the two populations in allopatry (Milligan, 1985; Schluter, 2000). Abundant standing variation also promotes character displacement (Pfennig and Pfennig, 2009), and further divergence in sympatry can then proceed rapidly as selection acts on phenotypes already present. Another general pattern is that the strength of selection to avoid reproductive interactions at secondary contact often differs between the two populations, resulting in asymmetric character displacement. Such asymmetries can be interpreted as a trade-off between the benefits of avoiding competition with heterospecifics and the costs of having a displaced character (Cooley, 2007; Pfennig and Pfennig, 2009).

Several elegant studies have examined the role of male contest competition in driving reproductive character displacement in damselflies. Male damselflies defend small mating territories close to open water (Alcock, 1987). A compelling example of asymmetric reproductive character displacement driven by heterospecific male competition is the wing spot of the damselfly Calopteryx splendens. The wing spots are smaller in males belonging to C. splendens populations living in sympatry with the dominant C. virgo, which displays higher levels of aggression towards C. splendens males with large black spots (Tynkkynen et al., 2004, 2005, 2006; Fig. 2).
Selection against hybridization provides a non-mutually exclusive additional explanation for the observed pattern, since wing coloration also functions as a cue in mate choice (Svensson et al., 2010). Anderson and Grether (2010) focused on the effect of competitor-recognition failure in Hetaerina damselflies by manipulating conspecific intruders to resemble heterospecific males. This treatment lowered the aggressive response of territorial males in areas where such heterospecific males naturally occurred, while no reduction in aggression was observed at allopatric sites. These experiments imply an increased ability to avoid heterospecific aggression in sympatric populations. An exciting question is to what extent variation in the recognition of competitors (i.e. conspecific males) is a genetically determined trait or reflects learning. Since the relative species abundance changes within sites (Anderson and Grether, 2010), learning from previous experiences is likely to be important.

The relationship between reproductive character displacement and ecological divergence has recently been examined in a young Ficedula flycatcher hybrid zone. Collared flycatchers colonized the Swedish island of Öland in the Baltic Sea about 50 years ago and are now rapidly excluding their pied flycatcher predecessors from the preferred breeding sites (Qvarnström et al., 2009). The main mechanism driving this exclusion is that young male pied flycatchers fail to establish breeding territories as the density of male collared flycatchers increases (Vallin et al., 2012a), resulting in a shift in the breeding habitats of pied flycatchers from deciduous forest towards mixed or coniferous forest (Vallin et al., 2012b). Because the peak in food abundance differs between habitats, the spatial segregation is paralleled by an increased divergence in the timing of breeding between the two species (Vallin et al., 2012b). Male pied flycatchers vary from black and white to brown and white throughout their range (Fig. 2), but the brown coloration is more common in areas where they co-occur with the black-and-white collared flycatchers (Drost, 1936; Roskaft and Järvi, 1992; Saetre et al., 1993; Alatalo et al., 1994; Huhta et al., 1997). Previous experiments have shown that interspecific aggression is relaxed towards browner male pied flycatchers (Král et al., 1988; Gustafsson and Pärt, 1991; Saetre et al., 1993), and brown male pied flycatchers experienced higher relative fitness than black males when faced with heterospecific competition (Vallin et al., 2012b). Moreover, relatively brown male pied flycatchers were found to settle in the habitats most different from those defended by collared flycatchers and to breed relatively late (Vallin et al., 2012b). Thus, pied flycatchers with the breeding strategy most divergent from that of collared flycatchers appear to be favored by selection in the hybrid zone. However, because divergent plumage characters promote local coexistence between the two flycatcher species, they increase the risk of hybridization in this young hybrid zone (Vallin et al., 2012b). The findings from the studies on Ficedula flycatchers lead to the idea that interspecific competition among males over resources necessary to attract females (i.e. suitable nest sites in high-quality habitat) may in general give rise to a feedback loop between ecological divergence and reproductive character displacement (Fig. 4; Vallin et al., 2012b).
This is because ecological divergence in allopatry is likely to influence aggressive competitive abilities. At secondary contact, the subdominant species is displaced into a poorer habitat, which in turn leads to further divergence in competitive ability.

In conclusion, although most studies of reproductive character displacement have focused on the role of reinforcement, i.e. selection against hybridization driving divergence (Noor, 1995; Saetre et al., 1997; Nosil et al., 2003; Hoskin et al., 2005), there is an increasing awareness that other species interactions may also cause this pattern (Hoskin and Higgie, 2010). In particular, the importance of interspecific aggression is gaining increasing support (Seehausen and Schluter, 2004; Tynkkynen et al., 2004, 2005; Grether et al., 2009; Anderson and Grether, 2010; Vallin et al., 2012b). However, these two selective processes are not mutually exclusive, and we would like to encourage future research on their possible interaction.

How Can Male Contest Competition over Mates Influence the Evolution of Reproductive Isolation?

Speciation is defined as the split of one species into two or more new species that follow independent evolutionary pathways (Dobzhansky, 1935). In sexually reproducing organisms, the development of reproductive isolation is a prerequisite for this process. Therefore, finding out how population divergence through natural selection, sexual selection and/or genetic drift leads to the build-up of reproductive isolation lies at the heart of research on the mechanisms of speciation. The two most straightforward pathways to population divergence driven by male contest competition are disruptive selection in sympatry, i.e. hybrids receive a disproportionate amount of attack from both species (Fig. 1), and divergence in allopatry caused by differences in the balance between the costs and benefits associated with different male competitive tactics under different ecological conditions (Fig. 3).

Fig. 4 Feedback loop between reproductive character displacement and ecological character displacement at secondary contact between two young species exhibiting male resource-defence breeding systems. When allopatric populations experience different environments (a), some degree of asymmetry in male resource-defence abilities is likely to evolve. At secondary contact (sympatry), such asymmetries in male resource-defence abilities may facilitate reproductive character displacement and habitat segregation (b), which in turn cause an increased asymmetry in male resource-defence ability (c).

Since male contest abilities often involve a number of behavioral, morphological and physiological traits, there may be strong selection against hybrids and backcrossed individuals with mismatched trait combinations. There are several potential evolutionary pathways to reproductive isolation when there is population divergence in male traits associated with fighting abilities in resource-based breeding systems. Apart from causing post-zygotic reproductive isolation through selection against hybrids, divergence in the ability to compete over resources used to attract females can also lead to pre-zygotic reproductive isolation as a by-product, e.g.
through displacement in habitat choice (e.g. Feder et al., 1994; Schluter, 2000) or breeding time (Théron and Combes, 1995; Hendry and Day, 2005). An example of habitat segregation driven by male contest competition over nest sites is provided by the flycatcher case mentioned above. Habitat segregation has increased reproductive isolation between the two flycatcher species (Vallin and Qvarnström, 2011); male pied flycatchers that are displaced into the less preferred coniferous habitat are also less likely to hybridize. A cross-fostering experiment with nestlings and eggs exchanged between the two flycatcher species suggested that learned habitat choice further strengthens the barrier between the two species (Vallin and Qvarnström, 2011). A link between divergence in coloration, level of aggression and segregation between calling and/or breeding sites has been suggested to increase the likelihood of assortative mating in D. pumilio (Rudh et al., 2011). Thus, divergence in traits used in male contest competition can lead to both pre- and post-zygotic reproductive isolation, and such traits may hence sometimes function as "magic traits" (Gavrilets, 2004; Servedio et al., 2011).

Another pathway to reproductive isolation arises if divergence in traits used in male contest competition is linked with divergence in female choice based on these traits. There are several comprehensive review articles on the interaction between male contest competition and female choice in causing evolution through sexual selection (e.g. Berglund et al., 1996; Qvarnström and Forsgren, 1998; Wong and Candolin, 2005; Hunt et al., 2009). However, there have been few attempts to investigate how the interaction between these two mechanisms of sexual selection may influence the speciation process. Below we outline a few possible scenarios, but we would like to emphasize that too little research has been done in this field to draw major conclusions.

In some species, e.g. those with large sexual size dimorphism, male contest competition has the largest impact on pairing patterns, leaving little or no scope for females to exert mate choice freely. Since the costs of heterospecific mating are often expected to be higher for females than for males, females should develop species-recognition abilities faster than males (Wirtz, 1999). As a consequence, heterospecific pairings may be more common between young species when females are unable to exert free choice. In resource-based breeding systems, male competition often determines among which males females can choose, making the two mechanisms of sexual selection work in a sequential manner. If male competition results in a sorting of males of different species into different habitat types, it may facilitate female choice of conspecific males (e.g. Vallin and Qvarnström, 2011).
When both male competition and female choice occur (in sequence or in parallel), a relevant question becomes whether the two mechanisms of sexual selection work in the same direction on the male traits. As mentioned in the introduction, female choice is generally assumed to operate in a unidirectional manner. Does this mean that female choice should act against population divergence in sexually selected traits? Divergence in traits used in contest competition could quickly lead to population assortative mating if sexual imprinting has a large impact on pairing patterns. Sexual imprinting means that individuals learn characteristics of their parents that enable them to find a suitable mate during adulthood (e.g. Bateson, 1966; Clayton, 1993). One reason why sexual imprinting may play an important role in speciation is that it works as a one-allele mechanism: the same allele in each of the diverged populations promotes assortative mating based on the traits that separate the two populations. This means that the development of assortative mating is not sensitive to gene flow (Irwin and Price, 1999; Servedio, 2009). However, this type of one-allele mechanism cannot explain population divergence in the first place (Servedio, 2009), making the combination of this mechanism and population divergence through male contest competition a particularly good candidate as an engine of speciation.

Cross-fostering experiments using birds living in natural populations have shown that exposure to heterospecific parents during the nestling stage may result in a sexual preference for heterospecific individuals in adulthood (Harris, 1970; Fabricius, 1991), implying that sexual imprinting is highly important for assortative mating under normal conditions. Recently, sexual imprinting was also found to promote assortative mating between benthic and limnetic sticklebacks (Kozak et al., 2011).
Intensive male contest competition may lead to the evolution of traits that increase male mating success but reduce female fitness, which in turn may trigger an evolutionary arms race between the two sexes whereby females evolve counter-adaptations (Arnqvist and Rowe, 2002). Population divergence driven by such arms races may lead to reproductive isolation as a side effect. Sexual conflict is defined as a conflict between the evolutionary interests of the two sexes (Parker, 1979) and can be further classified into intra- or interlocus sexual conflict. An intralocus sexual conflict arises when a particular allele increases fitness when expressed in individuals of one sex but decreases fitness when expressed in individuals of the other sex (Rice and Chippendale, 2001). An interlocus sexual conflict arises when an adaptation in males causes reduced fitness in females and females respond by evolving counter-adaptations. This latter situation may result in co-evolutionary arms races between the two sexes involving genes at different loci in each sex (Arnqvist and Rowe, 2005). Genes underlying male traits used in contest competition over females are likely to reduce fitness when expressed in females. The intersexual genetic correlation for fitness may therefore be expected to be lower in species with more extreme sex roles (Qvarnström et al., 2006). An evolutionary solution to this situation is that genes underlying traits used in male contest competition, such as large body size and horns, are only expressed in males, resulting in pronounced phenotypic sexual dimorphism. Evolution driven by male contest competition may also cause a number of interlocus sexual conflicts. Apart from triggering tight intersexual co-evolutionary arms races, increased adaptation to contest competition may occur at the expense of other male traits, such as paternal care. Females may, in turn, be selected to reduce their litter or clutch size. Sexual conflict arising through adaptations related to male contest competition should therefore be expected to cause fast evolutionary changes in the genetic regulation of traits (reflected in sexual size dimorphism) and in whole suites of traits in both sexes.

The likelihood of the evolution of reproductive isolation between populations experiencing different levels of male contest competition should therefore be relatively high. Theoretical modeling has shown that sexual conflict can promote speciation by leading to fast changes in traits involved in reproduction, such as egg-sperm proteins (e.g. Gavrilets, 2000), but there should be many more unexplored pathways to reproductive isolation driven by sexual conflict.
The role of intrinsic genetic incompatibilities in the process of speciation is debated. This is because the evolution of hybrid sterility is a slow process compared to the rate of speciation in several taxa (Price and Bouvier, 2002; Mendelson, 2003). Can male contest competition influence the speed at which genetic incompatibilities accumulate? Intrinsic sources of hybrid dysfunction are assumed to arise through epistatic interactions between genes from different genomes (Dobzhansky, 1940; Muller, 1940). A general pattern is that there is a greater fitness reduction in hybrids of the heterogametic sex, i.e. Haldane's rule (Haldane, 1922). The faster-male hypothesis builds on the idea that Haldane's rule is a consequence of intensive sexual selection on males leading to faster incompatibility of male-limited traits (Wu and Davies, 1993; Wu et al., 1996). These male traits are primary sexual traits, not secondary sexual traits such as horns and spurs that have a direct function in contest competition. However, investigating the role of male contest competition in driving divergence in male competitive tactics (which often also include differences in primary sexual traits such as sperm characteristics) would be highly relevant. Recent developments in genomics and high-throughput technologies are opening novel possibilities to reveal links between divergence between natural populations and the build-up of genetic incompatibilities (Rice et al., 2011).

In conclusion, there are many potential ways in which male contest competition can influence population divergence and the build-up of reproductive isolation. For example, in resource-based breeding systems, pre-zygotic isolation through habitat segregation may arise as a side effect of biases in male competitive abilities between populations. There are, however, also ways in which male contest competition can prevent pre-zygotic isolation: male contest competition can sometimes limit female ability to choose conspecific mates. How the two main mechanisms of sexual selection (i.e. male contest competition and female choice) interact during the speciation process remains a largely unexplored research area. We would also like to encourage future research on the role of male contest competition in the evolution of post-zygotic isolation. Aggression between males can lead directly to selection against hybrids if hybrids receive a disproportionate amount of attack from males of both parental species. Population divergence in traits used in male competition can also lead to selection against hybrids as a side effect. Future investigations of the relative importance of divergence in traits used in male contests, compared to population divergence in other traits, are needed if we want to understand the role of male competition in the speciation process.

Conclusions and Future Prospects

The whole speciation process, from the initial start of divergence between populations to the evolution of complete reproductive isolation between them, often takes a very long time. Researchers interested in the mechanisms driving the process of speciation are therefore, depending on the current stage of population divergence that they study, faced with the problem of having to predict the future and/or reconstruct history.
The role of male contest competition in promoting and maintaining diversity remains a little-explored scientific area. Most studies carried out so far have focused on the effects of male aggression towards similar competitors at early stages of population divergence (Seehausen and Schluter, 2004; Dijkstra et al., 2007; Young et al., 2009). The major conclusions are that aggression towards similar competitors may facilitate both the establishment of new color morphs and the maintenance of polymorphism. However, there are also ways in which such negative frequency-dependent selection may prevent population divergence. It would, for example, promote successful immigration of males between populations that have started to diverge in allopatry, since rare phenotypes are favored. The homogenizing effects of gene flow could then prevent population differentiation (Mayr, 1963; Felsenstein, 1981; Coyne and Orr, 2004). Future studies are needed on the effects of male aggression towards similar competitors at different stages of population divergence and under different geographical conditions.

As populations adapt to their environment, complexes of traits may change non-independently of each other, either due to genetic correlations between these traits or because they have synergistic effects on fitness. The more traits that diverge between populations, the more likely the build-up of reproductive isolation between them becomes (Nosil et al., 2009). One question that we raised here was how adaptation to different ecological environments coincides with divergence in traits used in male contest competition, and vice versa. Since aggressive abilities are likely to trade off against a number of other crucial abilities, such as parasite resistance, protection against predators and general stress tolerance, we expect whole suites of traits to diverge non-randomly across populations experiencing different ecological environments. We would like to encourage more studies examining how divergence in traits used in male contest competition, as compared to other traits, influences the build-up of reproductive isolation. However, the key to understanding the role of male contest competition in speciation probably lies in examining how the different selection pressures (including natural selection and the two main mechanisms of sexual selection) interact during the different stages of population divergence.

When populations diverge in sympatry, harmful interspecific interactions such as competition over mates and/or resources can be reduced by divergence in resource use or in reproductive phenotypes, i.e. character displacement (Brown and Wilson, 1956). The importance of interspecific aggression in driving character displacement is gaining increasing support (Seehausen and Schluter, 2004; Tynkkynen et al., 2004, 2005; Grether et al., 2009; Anderson and Grether, 2010; Vallin et al., 2012b). One particularly interesting open research question is how character displacement through interspecific aggression influences reinforcement, i.e. selection against hybridization driving divergence in mating traits.
Male contest competition over females (or over resources needed to attract females) can facilitate speciation by causing direct selection against hybrids or by leading to fast population divergence, which in turn may lead to reproductive isolation as a side effect through a number of different pathways. However, we would also like to stress that male aggression may, under certain conditions, limit assortative mating and/or population divergence. In this review, we have outlined some examples of how sexual selection through male competition can influence speciation. There is plenty of empirical evidence showing that male contest competition can cause rapid evolutionary changes, but how these evolutionary changes influence speciation remains surprisingly unexplored.

Fig. 1 Disruptive selection on a continuous trait (left) and the relationship between male-male aggression and the frequency of pre-existing color morphs (right). Due to the high cost of male contest competition among the most common phenotypes (purple), extreme phenotypes (blue and red) are favored, changing the population from a unimodal to a bimodal trait distribution. These color morphs could have evolved either through disruptive selection on a continuous trait (left) or through a discrete phenotypic shift caused by, e.g., a single mutation changing coloration. The cost of intrasexual antagonism is expected to be inversely related to the frequency of a given color morph if there is greater aggression among males of similar phenotypes. Such negative frequency-dependent selection predicts stable coexistence of two or more color morphs.

Fig. 3 Fitness effects and trait optima for a trait involved in male contest competition over mates. Suggested fitness effects caused by natural selection (grey lines) and sexual selection (black lines) for a given trait value. The first graph shows a reference scenario of natural and sexual selection with the trait optimum (0). The second and third graphs each show two scenarios of changed fitness effects through changed sexual and natural selection, respectively. The second graph shows (A) a scenario where the trait value does not alter fitness through sexual selection and (B) an increased benefit of a higher trait value; these situations could represent, e.g., differences in population density, where (A) low densities mean that male competition does not affect reproductive success and (B) the opposite. The third graph shows scenarios of increased (C) and decreased (D) costs of a higher trait value through natural selection; this could represent, e.g., increased (C) or decreased (D) predation pressure on conspicuous behaviors or ornaments. Trait optima for each scenario are shown by arrows and corresponding letters in the grey box.
Self-Compassion Demonstrating a Dual Relationship with Pain Dependent on High-Frequency Heart Rate Variability

One previous study indicated the significance of trait self-compassion for psychological well-being and adjustment in people with chronic pain. High-frequency heart rate variability (HF-HRV) has been found to be closely associated with self-compassion and pain coping. The current study was therefore designed to investigate the relationship between self-compassion and experimental pain as well as the impact of HF-HRV. Sixty healthy participants provided self-reported self-compassion and underwent a cold pain protocol during which HF-HRV was evaluated. Results demonstrated a dual relationship between self-compassion and pain, dependent on the level of HF-HRV during pain exposure. Specifically, self-compassion was associated with lower pain in the condition of higher HF-HRV, while self-compassion was associated with higher pain when HF-HRV was lower. Our data indicate the significance of HF-HRV in moderating the association between self-compassion and experimental pain.

Introduction

Self-compassion generally entails the capability to be kind and caring toward oneself in times of suffering, failure, or perceived inadequacy [1]. A large number of studies have established the protective influence of self-compassion on psychosocial distress, including social anxiety [2,3], burnout [4], and trauma [5]. Beyond this evidence, recent studies have indicated the significance of self-compassion in pain experience [6,7]. One study found that a greater ability to show self-compassion was associated with lower negative affect and catastrophizing in people with chronic pain [6]. However, studies supporting the benefits of self-compassion in pain remain scarce. More studies are therefore required to improve our understanding of the role of self-compassion in pain experience.

Moreover, the literature has indicated a close relationship between self-compassion and heart rate variability (HRV) as well as its role in pain coping. Self-compassion was found to be associated with increased HRV in the context of stress, where HRV, especially high-frequency HRV (HF-HRV), is thought to reflect parasympathetic activity and regulatory control over sympathetic arousal [8]. In contrast, recent evidence demonstrated that pain can suppress HRV [9]. These studies therefore indicate the potential of increased HRV to link self-compassion with pain reduction.

The current study was designed to investigate the relationship between self-compassion and pain as well as the impact of HF-HRV on this association. Healthy participants provided self-reported self-compassion and then underwent a cold pain protocol. Mediation and moderation models were considered. In the case of mediation, we hypothesized that self-compassion is associated with higher HF-HRV, which in turn reduces pain experience. In the case of moderation, self-compassion was assumed to be associated with lower pain in the condition of a higher level of HF-HRV. Either case would implicate HF-HRV in the relationship between self-compassion and pain and possibly provide insight into the therapeutic role of self-compassion in chronic pain.

Materials and Methods

2.1. Participants. Sixty healthy, pain-free, right-handed adults participated in this study. In order to reduce expectancy effects, participants were told that the aim of the study was to examine heart-rhythm responses to cold water.
ECG data from three participants were contaminated, so data from 57 participants were analysed (27 males and 30 females; age range: 19-33 years; mean = 20.28, SD = 2.38). Exclusion criteria included use of psychoactive medication or a history or current diagnosis of a psychiatric disorder, as assessed by the Mini International Neuropsychiatric Interview (MINI) [10]. All study participants provided informed consent, and the study was approved by the Ethics Committee of China West Normal University. This study was conducted in accordance with the Declaration of Helsinki.

2.2. Experimental Design and Procedure. Participants recruited to this study underwent a single-session protocol. Following consent, participants were asked to complete the Self-Compassion Scale (see Self-Compassion Scale below). Participants were then set up with the ECG recording system, which was followed by a 3-minute cold pain exposure (see Pain Stimulation below).

2.3. Self-Compassion Scale (SCS). The 26-item Self-Compassion Scale was used to measure individual differences in self-compassion [11]. It is composed of six subscales: self-kindness, self-judgment, common humanity, isolation, mindfulness, and overidentification. The total score was created by calculating the grand mean of the subscale scores after reverse-coding responses to the negatively worded items. Participants were asked to indicate how they typically act toward themselves in difficult times using a five-point Likert scale (from 1, "never", to 5, "almost always"). The SCS has well-established psychometric characteristics, with an internal consistency of 0.92 [11]. Chen et al. [12] reported a Cronbach's alpha of 0.83 and a test-retest reliability of 0.89 for the Chinese version.

2.4. ECG Recording. A BITalino (r)evolution Board Kit BT (BITalino, Portugal) was used to record the ECG (http://bitalino.com/en/). Three Ag/AgCl electrodes were used: two attached to the bilateral clavicle areas within the rib cage and one to the lower edge of the left rib cage. Data were recorded using OpenSignals (r)evolution software (v. 2017, BITalino, Portugal) at a sampling rate of 1,000 Hz.

2.5. Experimental Protocol. Participants underwent a 3-minute cold pain protocol divided into six consecutive 30-second blocks. In each block, participants viewed a fixation cross for 25 seconds and then rated "pain intensity at the moment" on a scale of 0-10 (0 = no pain; 10 = worst pain imaginable) within 5 seconds (PowerPoint, Microsoft Corporation). In order to avoid socially desirable behaviour [13], participants wrote the pain ratings on a piece of paper that could not be seen by the experimenter.

2.6. Pain Stimulation. A recent study demonstrated that an iced bottle can induce ongoing cold pain [14]. In the current study, participants were asked to hold a 0.5 L plastic bottle of iced water (−1°C) for 3 minutes. This protocol was used in a previous study by our group [15]. Participants were told to put the volar surface of the nondominant hand on the surface of the bottle and not to squeeze or avoid it, to minimize variability in touching. The nondominant hand was selected according to the pain literature [16]. A fresh iced bottle was used for each participant for consistency.

2.7. Data Analysis. ECG data recorded during pain exposure were analysed as illustrated in Figure 1. The Pan-Tompkins algorithm was used to identify the R points of the QRS complex (Figure 1(a)) [17]. Artefacts were visually checked and edited according to published guidelines [18]. A simplified end-to-end sketch of this processing pipeline, continuing with the steps described in the next paragraph, is given below.
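The following sketch illustrates the pipeline from R-point detection to relative HF-HRV. It is our own minimal example rather than the authors' code: scipy's find_peaks serves as a simplified stand-in for the Pan-Tompkins detector, Welch's method replaces the time-varying autoregressive spectrum, and all settings other than the values stated in the text (4 Hz resampling, 0.02 Hz high-pass, 0.15-0.4 Hz HF band, 0-0.04 Hz VLF band) are assumptions.

```python
import numpy as np
from scipy.interpolate import interp1d
from scipy.signal import butter, filtfilt, find_peaks, welch

FS_ECG = 1000  # ECG sampling rate (Hz), as recorded
FS_RRI = 4     # resampling rate for the RRI series (Hz), as in the text

def r_peak_times(ecg, fs=FS_ECG):
    """Crude R-point detection: bandpass to the QRS band, square the
    signal, then pick prominent peaks at physiologically plausible
    spacing (a stand-in for the full Pan-Tompkins algorithm)."""
    b, a = butter(3, [5 / (fs / 2), 15 / (fs / 2)], btype="band")
    qrs = filtfilt(b, a, ecg) ** 2
    peaks, _ = find_peaks(qrs, distance=int(0.3 * fs),
                          height=qrs.mean() + 2 * qrs.std())
    return peaks / fs  # R-point times in seconds

def relative_hf_hrv(beat_times):
    """Relative HF power, HF / (total - VLF), using the bands in the text."""
    rri = np.diff(beat_times)                   # R-R intervals (s)
    t = beat_times[1:]                          # time stamp of each interval
    grid = np.arange(t[0], t[-1], 1 / FS_RRI)   # evenly sampled 4 Hz grid
    rri_even = interp1d(t, rri)(grid)           # linear interpolation
    b, a = butter(2, 0.02 / (FS_RRI / 2), btype="high")  # remove slow drift
    rri_filt = filtfilt(b, a, rri_even)
    f, pxx = welch(rri_filt, fs=FS_RRI, nperseg=min(256, len(rri_filt)))
    def band(lo, hi):
        mask = (f >= lo) & (f < hi)
        return np.trapz(pxx[mask], f[mask])
    hf, vlf, total = band(0.15, 0.40), band(0.0, 0.04), np.trapz(pxx, f)
    return hf / (total - vlf)
```

For the statistics reported below, each participant's HF-HRV and pain time courses would then be summarized as areas under the curve (e.g., with np.trapz) before entering the moderation model.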
The original R-R intervals (RRIs) were calculated and then linearly interpolated to 4 Hz to obtain an evenly sampled signal (Figure 1(b)) [19,20]. In order to remove slow drift, the interpolated RRI waves were high-pass filtered with a cutoff frequency of 0.02 Hz [19] (Figure 1(c)). The filtered RRI waves were then used to calculate HRV using a time-varying autoregressive (TVAR) model, which can capture the dynamics of HRV [21] (Figure 1(d)). In particular, the TVAR model is suggested to provide accurate estimation of the power spectrum [22], and it has been used in the investigation of beat-to-beat spectra during ongoing pain [19]. The model order was set to 12 according to the literature [22]. HF-HRV was expressed as the relative value of the high-frequency component (0.15-0.4 Hz) in proportion to the total power minus the very low-frequency component (0-0.04 Hz) [23]. The relative values of HF-HRV are suggested to emphasize the controlled and balanced behaviour of the sympathetic and parasympathetic branches of the autonomic nervous system (ANS) [23]. Statistical Analyses. Correlation analyses were initially conducted among self-compassion, HF-HRV, and pain using SPSS (version 23; IBM Corp., Armonk, NY). The area under the curve (AUC) of pain and of HF-HRV across the 3-minute pain exposure was calculated using the linear trapezoidal rule. The AUC approach was employed as it provides a summary measure of pain or HF-HRV dynamics across a specified time window. A mediation model was not performed as there was no significant association between self-compassion and pain. A moderation analysis was conducted using PROCESS with the bootstrapping method [24]. Specifically, the model was set to "1" (i.e., conditional effect), and self-compassion, HF-HRV, and pain were specified as the independent, moderator, and dependent variables, respectively. The bias-corrected and accelerated (BCa) bootstrap estimates were based on 5,000 bootstrap samples. As this was a cross-sectional design, in a supplementary analysis we exchanged self-compassion, HF-HRV, and pain within the moderation model. Descriptive and Correlational Analysis. Participants reported a total self-compassion score of 3.38 (SD = 0.44) (Figure 2(a)). Figure 2(b) shows the dynamics of the pain ratings. A one-way ANOVA on pain ratings indicated that pain increased up to the end of the first minute (Time 2 vs. Time 1, PBonf = 0.001), remained high in the second minute (Time 4 vs. Time 1, PBonf > 0.05), and then decreased by the end of pain exposure (Time 6 vs. Time 1, PBonf = 0.002). Correlational analyses found no significant associations among self-compassion, HF-HRV, and pain (Ps > 0.05). Figure 2(c) shows the HF-HRV dynamics across the pain exposure. HF-HRV was found to moderate the relationship between self-compassion and pain ratings (ΔR² = 0.15, F(1, 53) = 9.69, P = 0.003). The moderation analysis further revealed that self-compassion was associated with higher pain (P = 0.019) when HF-HRV was lower (≤ −1 SD), while self-compassion was associated with lower pain (P = 0.046) when HF-HRV was higher (≥ +1 SD) (Figure 2(d)). The supplementary analysis revealed no other significant models (all Ps > 0.05). Discussion The current study was designed to investigate the association between trait self-compassion and experimentally induced pain as well as the role of HF-HRV in this relationship. Our results demonstrated a dual relationship between self-compassion and pain, dependent on the level of HF-HRV during pain administration.
Self-compassion was associated with lower pain when HF-HRV was relatively high. Meanwhile, self-compassion was associated with greater pain in individuals with lower HF-HRV. Our data indicate the particular importance of HF-HRV in moderating the relationship between self-compassion and pain. Our data demonstrated a moderating effect of HF-HRV on the association between self-compassion and pain experience. One previous study found that, in people with chronic pain, trait self-compassion was associated with lower negative affect and a higher ability to be compassionate (i.e., lower catastrophizing and rumination) using an attribution protocol [6]. In another study, self-compassion was a significant predictor of lower pain catastrophizing and pain disability among patients who have persistent pain and who are obese [25]. In line with these findings, our data demonstrate a relationship between self-compassion and pain experience that depends on the level of HF-HRV. Findings in the current study indicated that self-compassion was associated with pain experience dependent on the level of HF-HRV. More interestingly, our results indicated a double dissociation between self-compassion and pain (Figure 2(d)). Self-compassion means to treat oneself with kindness, acceptance, and a sense of common humanity in times of suffering [1]. Our findings highlight the importance of HF-HRV in moderating the association between self-compassion and pain. Pain serves to protect the body, whereby energy resources are allocated by the autonomic nervous system (ANS) [26]. Pain can activate the sympathetic branch while suppressing the parasympathetic branch of the ANS [9]. Meanwhile, HF-HRV is believed to be closely and strongly associated with cardiac vagal tone (i.e., parasympathetic tone), which reflects regulatory control over sympathetic arousal [9]. Therefore, our findings may be more related to the role of HF-HRV in the regulation of pain-related arousal. Similarly, recent studies showed that increased HF-HRV was associated with decreased pain experience in a mindfulness meditation or a simple compassionate self-talk protocol [27,28]. Overall, we present findings suggesting the particular significance of HF-HRV in moderating the relationship between self-compassion and pain experience. It is noted that our data did not support a role of HF-HRV in mediating the influence of self-compassion on pain. A moderation model is different from a mediation model, with the latter being able to provide more information on the causal relationship between variables [29]. Nonetheless, our findings support the role of HF-HRV in linking self-compassion with pain experience. There are other potential approaches to investigating the association between self-compassion and pain experience beyond HRV. Electroencephalogram (EEG) and functional imaging studies have tried to reveal the mechanisms of nociceptive transmission. EEG evidence has suggested that pain may suppress alpha activity but increase gamma activity, which underlie nociceptive transmission and integration, respectively [30]. In addition, pain is suggested to be mediated by a "spinothalamocortical" pathway [31]. Findings from these imaging methods would enrich our understanding of the role of self-compassion in pain experience.
Moreover, the literature has indicated a close relationship between self-compassion and coping strategies (e.g., emotion regulation and cognitive restructuring) as well as their impact on health outcomes [32]. Future studies may wish to investigate the role of coping strategies in the association between self-compassion and pain. We acknowledge some limitations of the current study. We recruited a convenience sample with a relatively narrow age range, which limits the generalisability of the conclusions to other age groups, such as older adults. Indeed, age plays a role in both self-reported self-compassion [33] and pain experience [34]. The findings of this study therefore need to be further examined in other age groups. Other specific physical conditions or medications that could have influenced pain or HRV were not considered. Nonetheless, the participants were screened using the MINI [10] and were free of pain. We presented results from healthy participants, which warrants further investigation in people with chronic pain. Purdie and Morley [6] demonstrated the importance of self-compassion in psychological well-being and adjustment in people with chronic pain. Future studies could further examine the relationship between self-compassion and pain experience in chronic pain populations as well as the moderating impact of HF-HRV. Although a short-term HRV measurement was used in this study, long-term recordings and HRV measurements need to be considered [23]. Other variables that potentially influence HRV measurements (e.g., core body temperature and circadian rhythm) were not controlled. In addition, we used an iced bottle [14] to induce cold pain instead of a cold pressor test [35]. Condensation is expected on the surface of the bottle. However, this is not expected to affect the results, as we carefully controlled the timing of taking the bottle out of the freezer. To conclude, HF-HRV moderates the relationship between self-compassion and pain experience. Trait self-compassion has a dual association with experimentally induced pain, dependent on the level of HF-HRV during pain administration. These findings may have implications for pain management. Changing one's mindset in a more compassionate fashion toward oneself may be effective in pain coping. Moreover, our data provide empirical evidence for the development of compassion-based interventions in the management of chronic pain [25,36]. Data Availability The data used to support the findings of this study are available from the corresponding author upon request. Conflicts of Interest The authors declare no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
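For readers who wish to reproduce the kind of moderation analysis reported above without the PROCESS macro, the following sketch fits the same model (pain regressed on self-compassion, HF-HRV, and their interaction) by ordinary least squares and probes simple slopes at ±1 SD of the moderator. The variable names and data are invented for illustration, and the bootstrap confidence intervals used in the study are not reproduced here.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 57
df = pd.DataFrame({
    "scs": rng.normal(3.4, 0.4, n),   # self-compassion total score (toy data)
    "hf": rng.normal(0.5, 0.15, n),   # HF-HRV AUC (toy data)
})
df["pain"] = 5 + 0.3 * rng.normal(size=n)   # pain AUC (toy data)

# Mean-center the predictors so the interaction term is interpretable.
df["scs_c"] = df["scs"] - df["scs"].mean()
df["hf_c"] = df["hf"] - df["hf"].mean()

# Moderation as a regression with an interaction term (PROCESS "model 1").
model = smf.ols("pain ~ scs_c * hf_c", data=df).fit()
print(model.summary())

# Simple slopes of self-compassion on pain at -1 SD and +1 SD of HF-HRV:
# slope(scs | hf) = b_scs + b_interaction * hf_c
b = model.params
for label, hf_val in [("-1 SD", -df["hf_c"].std()), ("+1 SD", df["hf_c"].std())]:
    slope = b["scs_c"] + b["scs_c:hf_c"] * hf_val
    print(f"simple slope of self-compassion at HF-HRV {label}: {slope:.3f}")
```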
Withaferin A Suppresses Anti-apoptotic BCL2, Bcl-xL, XIAP and Survivin Genes in Cervical Carcinoma Cells Purpose: To investigate the effect of withaferin A on the suppression of the anti-apoptotic genes (BCL2, Bcl-xL, XIAP and Survivin) in cervical carcinoma cells. Methods: Annexin V-FITC/propidium iodide (PI) staining was used for the investigation of cell apoptosis. RNeasy kits were used to isolate RNA and Omniscript RT to reverse-transcribe the mRNA. Quantitative real-time polymerase chain reaction (qPCR) was performed using the Taq PCR Master Mix Kit. Results: Withaferin A (WFA) treatment reduced the mRNA and protein levels of antiapoptotic genes in MCF-7 and HeLa cervical carcinoma cells. Suppression of BCL2, Bcl-xL, XIAP and Survivin induced a significant anti-proliferative effect. Treatment with WFA at a concentration of 20 μM decreased cell viability and induced apoptosis. In MCF-7 cells, knockdown of BCL2, Bcl-xL, XIAP and Survivin caused a 4-fold enhancement in the apoptosis rate and a 53 % decrease in cell viability. Conclusion: WFA leads to significant knockdown of antiapoptotic genes and is, therefore, a promising treatment strategy for cervical cancer. INTRODUCTION Cervical cancer is one of the most frequently observed cancers in women throughout the world. It is estimated that every year 500,000 new cervical carcinoma cases are detected globally and 80 % of them are from developing countries [1,2]. Currently, surgery, radiation and chemotherapy are used for the treatment of cervical cancer. Intracavitary brachytherapy is a technique used to deliver high radiation doses to the tumor site without exposing normal tissues to radiation [3]. Presently, the combination of external beam radiotherapy and intracavitary brachytherapy is considered to be the standard treatment strategy for cervical cancer. Although high cure rates are reported at the early stage of the disease using definitive radiotherapy, in locally advanced cervical cancer cases the cure rates are poor. A 5-year overall survival of only 66 % is reported at advanced stages [5]. Thus, the discovery of molecules with roles in the treatment of locally advanced cervical cancer is needed. Maintenance of homeostasis in normal tissues and selective removal of damaged and infected cells are achieved by the process of apoptosis [6]. Tumor cells are endowed with the ability to escape apoptosis [7]. It is reported that tumor cells express higher levels of antiapoptotic genes, including BCL2, Bcl-xL, XIAP and Survivin, which enables them to evade apoptosis. Among the antiapoptotic genes, BCL2 and Bcl-xL help cells escape apoptosis by inhibiting cytochrome c release from the mitochondria, which in turn prevents caspase activation [8]. Caspases, on activation, induce apoptosis, in which proteins essential for cell function and stability are cleaved [9]. EXPERIMENTAL Cell lines and cell culture The human cervical carcinoma cell lines MCF-7, HeLa and ME-180 were purchased from the American Type Culture Collection (ATCC, Manassas, VA, USA). The cells were cultured under standard conditions at 37 °C in a humidified atmosphere containing 5 % CO2. Reagents and chemicals Withaferin A (WFA) was purchased from Sigma-Aldrich (St. Louis, MO, USA) and dissolved in dimethyl sulfoxide (DMSO) to a concentration of 100 µM as a stock solution. Rabbit anti-human Caspase-3, mouse anti-human Bcl-2, and β-actin antibodies were purchased from Cell Signaling (China). Cell viability assay Cervical carcinoma cells were incubated with various concentrations of WFA at 37 °C for 72 h.
The MTT viability assay was performed according to the manufacturer's protocol (Roche Diagnostics). The absorbance was measured at a wavelength of 595 nm. Apoptosis analysis Annexin V-FITC/propidium iodide (PI) staining (Annexin V-FITC Apoptosis Detection Kit I, BD Biosciences, Heidelberg, Germany) was used to examine apoptosis in the cervical carcinoma cells. After 48 h of WFA treatment, flow cytometry (FACScan, BD Biosciences) was used to examine the cells. Quadrant analysis of the Annexin V-FITC/PI plots was performed to determine the percentages of early (Annexin V-FITC positive, PI negative) and late (Annexin V-FITC positive, PI positive) apoptotic cells. For this purpose, WinMDI 2.8 software was employed. Reverse transcriptase PCR For the isolation of total cellular RNA, RNeasy kits (Qiagen) were used. The RNA sample (500 ng) was used for reverse transcription to cDNA using Omniscript RT (Qiagen). The cDNA was then employed for quantitative real-time PCR (qPCR) using the Taq PCR Master Mix Kit (Qiagen) according to the manufacturer's protocol. Western blot analysis After WFA treatment, the cells were lysed in ice-cold lysis buffer supplemented with protease inhibitors. The cell lysate was subjected to SDS-PAGE separation. Proteins were transferred to a polyvinylidene difluoride (PVDF) membrane (GE Healthcare, Freiburg, Germany) and then incubated with the primary antibodies. Antibodies were used against BCL2, Bcl-xL, XIAP and Survivin (Sigma-Aldrich, St. Louis, MO, USA), and anti-β-actin was used as the loading control (Sigma-Aldrich, St. Louis, MO, USA). A polyclonal anti-rabbit immunoglobulin HRP-linked antibody and a polyclonal rabbit anti-mouse immunoglobulin HRP-linked antibody were used as the secondary antibodies. For visualization, we used an enhanced chemiluminescence kit (GE Healthcare). Antiapoptotic gene expression in MCF-7, HeLa and ME-180 cervical carcinoma cell lines The expression levels of the four antiapoptotic genes in MCF-7, HeLa and ME-180 cervical cancer cells were analyzed by quantitative PCR. In all the tested cervical carcinoma cell lines, BCL2, Bcl-xL, XIAP and Survivin were expressed at high levels (Table 1). Among the three tested cell lines, MCF-7 and HeLa cells expressed all four antiapoptotic genes at significantly higher levels and were therefore selected for further studies. Effect of WFA on expression of genes involved in anti-apoptosis Among the range of WFA concentrations from 5 to 50 μM used to analyse the effect on antiapoptotic gene inhibition, the effect was significant at a concentration of 30 μM. Although 20 μM WFA markedly reduced the mRNA levels by 52-59 % after 24 h of treatment, the inhibition rate increased to 81-86 % at 30 μM WFA (Figure 1). Thus, marked antiapoptotic gene inhibition was observed after WFA treatment in the MCF-7 and HeLa cell lines. Molecular effects of WFA on inhibition of genes involved in antiapoptosis WFA at a concentration of 30 μM significantly inhibited the mRNA expression levels of the antiapoptotic genes in the cervical carcinoma cell lines after 48 h of treatment (Figure 2). BCL2, Bcl-xL, XIAP and Survivin were decreased by 32, 39, 42 and 36 %, respectively, in MCF-7 cells. Western blot analysis showed protein reduction after 48 h of WFA treatment in both MCF-7 and HeLa cervical cancer cell lines (Figure 3). Cellular effects of WFA treatment The WFA-mediated inhibition of anti-apoptotic genes led to a significant decrease in cervical cancer cell viability (Figure 4).
The cervical carcinoma cell viability was decreased by 39 and 46 % in MCF-7 and HeLa cells, respectively, after treatment with WFA for 96 h. The rate of cell apoptosis was also significantly increased on treatment with WFA. In MCF-7 cells, there was a 4-fold increase in the rate of apoptosis (Figure 5). In HeLa cells, apoptosis was increased 3.5-fold (Figure 5). However, no changes were observed in the cell cycle distribution of MCF-7 and HeLa cells on treatment with WFA. DISCUSSION In tumor tissues, the expression of various antiapoptotic genes is found to be markedly higher and plays an important role in inhibiting the induction of apoptosis [23][24][25]. Thus, antiapoptotic gene inhibition can be a potent strategy for antitumor therapy. It is believed that the inhibition of one antiapoptotic gene can be compensated by the expression of other genes; therefore, inhibition of all the major antiapoptotic genes can have the greatest effect. In the present study, treatment of MCF-7 and HeLa cervical carcinoma cells with 30 μM of WFA resulted in the suppression of the mRNAs corresponding to the antiapoptotic genes. WFA significantly inhibited the mRNA and protein levels of the anti-apoptotic genes and exhibited strong anti-proliferative effects on MCF-7 and HeLa cervical carcinoma cells. In MCF-7 and HeLa cervical carcinoma cells, proliferation was reduced by 39 and 46 %, respectively, following WFA treatment for 96 h. Tumor cells are endowed with the ability to avoid the process of apoptosis [7]. It is reported that tumor cells express higher levels of the BCL2, Bcl-xL, XIAP and Survivin antiapoptotic genes, which enables them to evade apoptosis. The rate of cell apoptosis was also significantly increased on treatment with WFA. In MCF-7 cells, there was a 4-fold increase in the rate of apoptosis. In the case of HeLa cells, the proportion of apoptotic cells was enhanced 3.5-fold on treatment with WFA. However, in both MCF-7 and HeLa cells, WFA treatment did not induce any alteration in cell cycle progression. Thus, the current study demonstrates that WFA induces suppression of antiapoptotic gene expression, which can be of vital importance for the treatment of cervical cancer. CONCLUSION Withaferin A significantly inhibits cervical cancer cells via knockdown of antiapoptotic genes and is thus a potential therapeutic agent for cervical cancer.
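The paper does not state how the qPCR data above were quantified; for orientation only, the sketch below shows the widely used 2^-ΔΔCt relative-quantification calculation that analyses of this kind typically rely on. The Ct values, and the choice of β-actin as the reference gene, are hypothetical and chosen purely for illustration.

```python
import math

def fold_change(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    # Normalise each condition to the reference gene, then compare treated vs. control.
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)          # relative expression (treated / control)

# Hypothetical Ct values for BCL2 in WFA-treated vs. untreated cells,
# with beta-actin as the assumed reference gene:
fc = fold_change(ct_target_treated=27.5, ct_ref_treated=16.0,
                 ct_target_control=25.0, ct_ref_control=16.1)
print(f"BCL2 relative expression (treated/control): {fc:.2f}")
print(f"Percent knockdown: {100 * (1 - fc):.0f} %")
```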
Design and FPGA Implementation of Variable Cutoff Frequency Filter based on Continuously Variable Fractional Delay Structure and Interpolation Technique Abstract —This paper presents the design and FPGA implementation of an interpolated continuously variable fractional delay structure based filter (ICVFD filter) with fine control over the cutoff frequency. In the ICVFD filter, each unit delay of the prototype lowpass filter is replaced by a continuously variable fractional delay (CVFD) element proposed in this paper. The CVFD element requires the same number of multiplications as the second-order fractional delay structure used in the existing fractional delay structure based variable filter (FDS based filter), yet it provides fractional delays corresponding to higher-order fractional delay structures. Hence, the proposed ICVFD filter provides a wider cutoff frequency range compared with the FDS based filter. The ICVFD filter is also capable of providing variable bandpass and highpass responses. We use a two-stage approach for the FPGA implementation of the ICVFD filter. First, we use pipelining stages to shorten the critical path and improve the operating frequency. Then, we make use of a specific hardware resource, i.e., the RAM-based Shift Register (SRL), to further improve the operating frequency and resource utilization. Manuscript received Nov. 12, 2015; accepted Dec. 11, 2015. Authors are with the School of Computer Engineering, Nanyang Technological University, Singapore (e-mail: sumedh1@e.ntu.edu.sg, asvinod@ntu.edu.sg). I. INTRODUCTION Variable finite impulse response (FIR) filters (FIR filters whose frequency response can be changed based on the desired specifications) are widely used in digital communications. The frequency response of an FIR filter can be changed by completely changing its coefficients or by modifying the impulse response using various operations. In the programmable digital filters [1]-[4], the desired frequency responses are obtained by updating all the filter coefficients, which are stored in memory. This is a very simple approach, and in general, variable-coefficient filters are optimum in the sense that the filter length for particular frequency response specifications is the minimum. However, when the frequency response of the filter needs to be changed frequently, the large number of memory access operations makes the updating routines of these filters time consuming. Other approaches proposed in the literature [5]-[12] modify the impulse response of a fixed-coefficient prototype filter by controlling fewer parameters, without the need to update all the filter coefficients. In the interpolation approach [5], each delay of the fixed-coefficient filter structure is replaced by M delays to obtain a multiband response, and then the desired band is extracted using a masking filter. In the coefficient decimation method (CDM) [6], the impulse response of the fixed-coefficient filter is modified by retaining every Dth coefficient of the filter and either replacing the remaining coefficients by zeros or completely discarding them. The cutoff frequency of the coefficient-decimated filter can be an integer multiple of the cutoff frequency of the prototype filter.
Even though the interpolation and CDM techniques are simple to implement (as they need only multiplexers to vary M or D) and the filters realized using [5] and [6] have low complexities, they provide only coarse control over the cutoff frequency due to the discrete nature of the controlling parameters (M and D). A very fine control over the cutoff frequency of the filter can be obtained at the cost of an increase in the complexity of the filter structure. In [7], an all-pass transformation based variable filter is realized by replacing the unit delay of the prototype filter by a first- or second-order all-pass structure. Even though the prototype filter in [7] is a linear-phase filter, the resultant filter is not a linear-phase filter due to the use of the all-pass transformation. As opposed to the all-pass transformations, the frequency transformation based filters preserve the linear-phase property of the prototype filter [8]. However, the transition bandwidth of the frequency transformation based filter can be significantly wider than that of the prototype filter. The spectral parameter approximation (SPA) technique [9], [10] makes use of a weighted combination of fixed-coefficient FIR sub-filters to generate the desired frequency response and provides absolute control over the cutoff frequency of the filter in the desired range. However, the complexity of the SPA technique is higher than that of all the other approaches. In [11], a set of fixed-coefficient filters is used, where each filter takes care of only a specific part of the variable frequency region. This technique requires a large number of filters when the desired cutoff frequency range is large. In [12], the unit delay element of the filter is replaced by the fractional delay structure (FDS). In this FDS based filter [12], a single parameter (d) varies the value of the fractional delay, thereby modifying the sample values and the length of the impulse response of the filter, resulting in a variable digital filter with fine control over the cutoff frequency. A second-order modified Farrow structure [13] is used to replace the unit delay of the prototype filter in [12]. Therefore, the cutoff frequency of the filter varies according to the value of the fractional delay (1 ≤ 1+d < 2). However, the cutoff frequency, fc, can be varied only in the limited range given by fc_mod/2 < fc ≤ fc_mod ≤ 0.2, where fc_mod is the cutoff frequency of the prototype filter. (Please note that all the frequency values mentioned in this paper are normalized with respect to half the sampling frequency, i.e. π.) It is observed that the FDS based filter provides unity magnitude response and constant phase response only for cutoff frequencies in the lower range of the Nyquist band [12]. The second-order modified Farrow structure can provide unity magnitude response and constant phase response only for low frequencies (approximately up to 0.2) [13,14], which results in degradation of the response of the FDS based filter for higher cutoff frequencies. Therefore, the maximum cutoff frequency obtained from the FDS based filter is approximately 0.2. In [12], CDM is used to increase the cutoff frequency range of the FDS based filter. Therefore, the cutoff frequency range becomes fc_mod/2 < fc ≤ 2fc_mod ≤ 0.2. However, the prototype filter needs to be overdesigned, i.e.
its order should be increased, in order to compensate for the passband ripple, stopband attenuation and transition bandwidth degradation which is inherent to the CDM. In this paper we present the design and FPGA implementation of modified fractional delay structure based filter to overcome the lower limit on the cutoff frequency of the FDS based filter. The proposed interpolated continuously variable fractional delay structure based filter (ICVFD filter) uses the continuously variable fractional delay (CVFD) structure. This CVFD element provides wider delay range, equivalent to the delay range obtained from the higher-order fractional delay structure, without increasing the number of multiplications required. The ICVFD filter uses the CVFD element and the interpolation technique and provides a continuous control over the cutoff frequency of the filter. The ICVFD filter is capable of producing variable lowpass, bandpass and highpass filter responses. The rest of the paper is organized as follows. Section II presents the details of the CVFD element and the ICVFD filter. A design example and comparison of the ICVFD filter with the existing variable filters is also presented in Section II. Section III presents the details of pipelining and use of specific hardware resources for implementing the ICVFD filter. The FPGA implementation results are presented in Section IV. Finally Section V concludes this paper. A. CVFD Element The CVFD element provides the delay, Dp = p + 1 + d (1) where the fractional delay equal to 1+d is provided by the second-order modified Farrow structure, and the variable number of p unit delays are added using a multiplexer, as shown in Fig. 1. The fractional delay range of this CVFD element can be changed online as follows. For p = 0, the fractional delay range is 1 ≤ Dp < 2, whereas the fractional delay range changes to 2 ≤ Dp < 3 for p = 1, and so on. For the other fractional delay structures, the multiplication complexity of the fractional delay structure (i.e. the number of multiplications required) increases with the range of fractional delay to be obtained [13][14][15]. However, the CVFD element provides the fractional delay same as the higherorder fractional delay structure, at the same multiplication complexity as that of the second-order fractional delay structure. Further, as the second-order modified Farrow structure has the least multiplication complexity among the second-order fractional delay structures, the proposed CVFD element is capable of providing the fractional delay equivalent to the fractional delay provided by higher-order fractional delay structures for the least multiplication complexity possible. Another advantage of the CVFD element is that the fractional delay range can be changed online. For the modified Farrow structure based fractional delay structures [13], only the value of fractional delay can be changed online, and not the fractional delay range. The fractional delay range depends on the order of the structure, and therefore, the structures of different orders are required to change the fractional delay range. The second-order fractional delay structures can provide the fractional delay range of 1 to 2 only, and the third-order fractional delay structures can provide the delay range of 2 to 3 only. Hence, these are not suitable when the fractional delay range needs to be changed in the FDS based filter. 
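As an illustration of the delay relation in (1), the sketch below builds the impulse response of a CVFD-like element from a standard second-order Lagrange fractional-delay FIR (used here as a generic stand-in for the modified Farrow structure of [13]; the coefficient formulas are the textbook Lagrange ones, not taken from the paper) plus p prepended unit delays, and checks that the low-frequency group delay approaches Dp = p + 1 + d.

```python
import numpy as np
from scipy.signal import group_delay

def lagrange_fd_coeffs(D, order=2):
    """Standard Lagrange fractional-delay FIR coefficients for a delay of D samples."""
    k = np.arange(order + 1)
    h = np.ones(order + 1)
    for i in k:
        for m in k:
            if m != i:
                h[i] *= (D - m) / (i - m)
    return h

def cvfd_impulse_response(d, p):
    """CVFD-like element: p unit delays in front of a fractional delay of 1 + d."""
    h_frac = lagrange_fd_coeffs(1.0 + d, order=2)   # valid for 0 <= d < 1
    return np.concatenate([np.zeros(p), h_frac])

# Check the total delay Dp = p + 1 + d at low frequencies, where the
# second-order structure behaves as an (almost) ideal fractional delay.
d, p = 0.3, 2
h = cvfd_impulse_response(d, p)
w = np.linspace(0.01, 0.2 * np.pi, 50)              # normalized frequency < ~0.2
_, gd = group_delay((h, [1.0]), w=w)
print(f"target Dp = {p + 1 + d:.2f}, low-frequency group delay ~ {gd.mean():.2f}")
```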
Similar to the proposed CVFD element, the fractional delay structure proposed in [15] is capable of changing the fractional delay range on-the-fly. However, as mentioned previously, the multiplication complexity of the CVFD element is less than that of the fractional delay structure in [15]. It may be possible to use fractional delay structures based on other implementation strategies [14] which can change the fractional delay range online, but the multiplication complexity of such structures is higher, and therefore the multiplication complexity of the FDS based filter increases. In the CVFD element, the fractional delay range can be changed online without any additional multiplication complexity for the FDS based filter. B. ICVFD Filter The proposed ICVFD filter is the combination of the FDS based filter, in which the CVFD element replaces the unit delay of the filter, and the interpolation technique, as shown in Fig. 2. Note that the input signal is fed to the filter and not to the CVFD element. The CVFD element is just used to replace the unit delay of the filter. The interpolation of the CVFD element by factor M results in a delay, Dc, given by Dc = p + (1+d) × M (2) Therefore, the cutoff frequency and the transition bandwidth of the ICVFD filter, fc_ICVFD and tbwICVFD respectively, are given by fc_ICVFD = fc_mod/Dc (3) and tbwICVFD = tbwmod/Dc (4), where fc_mod is the cutoff frequency of the prototype (modal) filter and tbwmod is the transition bandwidth of the prototype filter. C. Variable Filter Responses obtained from ICVFD Filter 1. When p = 0 In the ICVFD filter, the cutoff frequency of the filter is controlled using three parameters, viz. d, M and p. When p = 0 and M = 1, the ICVFD filter is equivalent to the FDS based filter. When p = 0 and M is varied, the ICVFD filter produces variable lowpass, bandpass and highpass filter responses. The cutoff frequencies of the bands in the multiband response are given by fAi ± fc_ICVFD, where fc_ICVFD is defined in (3), with parameter p = 0 for the fractional delay Dc, i.e. the cutoff frequencies are fAi ± {fc_mod/(1+d)}/M, where fAi are the center frequencies of the bands in the multiband response. The transition bandwidth of the bands in the multiband response is given by (4). The desired band can be extracted by using a suitable masking filter. The cutoff frequencies corresponding to these fractional delays can be obtained in the ICVFD filter without any increase in the multiplication complexity of the prototype filter structure compared to the FDS based filter. (As the multiplication complexity of the CVFD element is the same as that of the second-order modified Farrow structure, for the same filter order, the total number of multiplications required for the fractional delays remains the same for the ICVFD filter and the FDS based filter.) D. Properties of ICVFD Filter 1. Passband ripple and stopband attenuation In the ICVFD filter, all the filter coefficients of the modal filter are used for the filtering operation. Hence, unlike the CDM [6], no degradation occurs in the passband ripple or the stopband attenuation of the resultant ICVFD filter response compared to the prototype filter response. To illustrate this point, the magnitude responses of the prototype filter and the ICVFD filters (for two different parameter settings) are shown in Fig. 5. The zoomed and cropped versions of the responses are also shown in the inset.
2. Transition bandwidth As seen from (4), the transition bandwidth of the ICVFD filter is always less than or equal to the transition bandwidth of the prototype filter. 3. Linear phase Unlike the all-pass transformation based filter in [7], the ICVFD filter maintains the linear-phase property in its passband region. The magnitude-phase response plots of the ICVFD filter for two different parameter settings are shown in Fig. 6. Phase delay plots are shown in the insets of the figures. 4. Cutoff frequency range By proper choice of the fractional delay value and the interpolation factor, the cutoff frequency of the ICVFD filter can be varied anywhere below the cutoff frequency of the prototype filter (i.e. fc_ICVFD ≤ fc_mod). One limitation of the proposed ICVFD filter is that fc_ICVFD ≤ fc_mod ≤ 0.2. This is because of the inherent limitation of the fractional delay structure that it provides unity magnitude response and constant phase delay only up to a normalized frequency of approximately 0.2 [13][14][15]. Beyond this range, the magnitude and the phase delay start deviating from the desired values. E. Comparisons The FDS based filter (for d ≥ 0.85) as well as the ICVFD filter require a low-complexity masking filter for suppressing the undesired bands in the filter response. The comparison of the ICVFD filter and the FDS based filter (without and with CDM) is presented in Table I, for generating the lowpass filter responses. The SPA technique [9] and the technique in [11] are also considered for the comparison. Transposed direct form filter implementation is considered in each case. The desired final specifications are peak-to-peak passband ripple = 0.1 dB, stopband attenuation = -45 dB, and transition bandwidth = 0.1. All the filters considered for this comparison are designed to satisfy these specifications. Similarly, the total number of multipliers, adders, and multiplexers required for each of the filters considered for comparison is presented in Table I. Note that when the FDS based filter is combined with the CDM technique, a higher-order modal filter, and therefore more resources, are required in order to satisfy the final transition bandwidth and stopband attenuation specifications. A 16x16 bit multiplier, a 16 bit adder, a 4:1 mux, a 2:1 mux, and a 2-input NAND gate are synthesized on a TSMC 65nm process using the Synopsys Design Compiler. The area of each component is normalized by the area of the NAND gate. The total gate-count calculated from these normalized values represents the area of the filter in terms of the equivalent number of NAND gates. The number of multipliers, adders and multiplexers and the total gate-count calculated as explained above are presented in Table I for each of the filters. The (±x) values in Table I indicate the percentage increase or decrease in the total gate-count for the respective filters when compared with the ICVFD filter. As can be observed, when compared to the FDS based filter, the ICVFD filter offers a wider cutoff frequency range and narrower transition bandwidth at the cost of only a moderate increase in area. Alternatively, the FDS based filter with CDM requires 102% more area when compared to the ICVFD filter, for comparable cutoff frequency range and transition bandwidth. III. HARDWARE REALIZATION OF ICVFD FILTER In order to realize the ICVFD filter on FPGA, we optimize the filter design in two steps, viz.
use of pipelining (for improving operating speed and reducing the resource utilization) and utilization of FPGA specific feature (to improve the operating frequency further). A. Pipelining for Hardware Implementation The structure CVFD element is shown in Fig. 1, along with its critical path (shown as 'dash and dot' line with blue color). As can be seen, the critical path of the CVFD element extends from its input to the output. Therefore, as shown in Fig. 2 with 'dash and dot' line with blue color, the critical path of the modal filter of the ICVFD filter consists of a fixed-coefficient multiplier h0, N number of CVFD elements and N adders, where N is the order of the modal filter. As the ICVFD filter consists of the interpolated modal filter and a fixed-coefficient masking filter (used to extract the desired band from the multiband response), its critical path extends from its input of the modal filter to the output of the masking filter. Such a long critical path makes the hardware implementation of the filter design infeasible without any pipelining. To improve the operating frequency, we add two levels of pipelining stages. First, in order to break the long critical path from input to output, a unit delay has been added between the interpolated modal filter structure and the masking filter. After this first level of pipelining, the critical path is found to be from the input to the output of the interpolated modal filter. Therefore, to break this critical path, one pipelining delay can be added after each of the CVFD elements (second-level pipelining with one unit delay). In order to shorten the critical path further, instead of adding one unit delay after every CVFD element, two pipelining delays are added inside each of the second-order modified Farrow structure (second-level pipelining with two unit delays). As the variable multipliers, i.e. the multipliers with one input as d, are the most computationally intensive blocks, these two pipelining delays are inserted in order to separate these blocks. Additional delay elements wherever required are added in the filter structure, such that the overall filter functionality remains unaffected. B. Hardware Realization of Variable-Length Delays There are two variable-length delay structures in the ICVFD filter, viz. M variable delays inside the second-order modified Farrow structure (due to the interpolation) and p variable delays (as required in (1)). A straightforward way to implement such variable delays is to use multiple unit delay elements and select the appropriate number of delays using a multiplexer, with a select line with appropriate input for M or p. However, for FPGAs, multiplexers are costly in terms of both resource utilization as well as propagation delay. As the hardware implementation of the delay element in the filter structure is done by a register, selection of variable number of delay elements (for M as well as for p) can be realized by using the addressable shift registers. We make use of Xilinx's IP core of RAM-based Shift Registers (SRLs) [16] to implement variable-length delays. The IP provides variable-length shift registers, which can be used as variable-length delay elements, with reduction in the propagation delay as well as resource requirement. IV. IMPLEMENTATION RESULTS The filter models were created considering the specifications mentioned in Section II-E. These filter models were created using MATLAB Simulink and Xilinx System Generator. 
The filters are implemented in the Xilinx Virtex 6 xc6vlx760-1ff1760 FPGA, using Xilinx ISE 14.6. A. Effect of Pipelining Filter implementation without any pipelining and after the first level of pipelining (i.e. separating the modal filter and masking filter) results in a very long critical path, resulting in infeasible designs. The estimated clock period after synthesis for ICVFD filter design with no pipelining and after first level of pipelining is more than 1000 ns. If one pipelining delay is added after every CVFD element (second-level pipelining with one unit delay), the ICVFD filter implementation becomes feasible with the post-placeand-route (post-PAR) maximum operating frequency of 30 MHz. Use of two pipelining delays inside each of the second-order modified Farrow structures (second-level pipelining with two unit delays) significantly improves the post-PAR maximum operating frequency to 58 MHz. The implementation results for the interpolated modal filter structure with second-level pipelining with one unit delay and the interpolated modal filter structure with secondlevel pipelining with two unit delays are presented in Table II. Filter with second-level pipelining with two unit delays requires 84% more slice registers compared to the filter with second-level pipelining with one unit delay. However, due to the compact packing of the logic, it results in reducing the requirement of LUTs and slices, and improving the post-PAR maximum operating frequency. As the overall area requirement is determined by the number of slices, use of two unit delays for pipelining actually results in reducing the area requirement by 19% and improving the maximum operating frequency by 91%. B. Effect of SRLs Pipelining improves the maximum operating frequency as well as reduces the number of occupied sliced. To improve the operating frequency and reduce this area requirement further, variable-length delay structure can be realized using SRL instead of multiple delays and a multiplexer. Two ICVFD filter (interpolated modal filter + masking filter) models are created in MATLAB Simulink using the Xilinx System Generator block. One model utilizes multiple unit delay elements and multiplexer and the other utilizes addressable shift registers, which can then be realized as SRLs while generating Verilog implementation. The results are summarized in Table III. Use of SRLs results in reducing the requirement of slice registers by 39%. This results in compact packing of logic and better routing which improves the post-PAR maximum operating frequency by 7%. Use of SRLs also results in small (2%) improvement in overall area requirement (number of occupied slices). C. Comparison with FDS based Filter Similar to the pipelining of the ICVFD filter mentioned above, FDS based filter model with second-level pipelining with two unit delays was created. The FPGA implementation results of this FDS based filter are summarized in Table IV, along with that of the ICVFD filter (for delay elements and multiplexer based design). The specifications are same as that considered for the comparison in Section II-E. The FDS based filter with CDM and filters based on the techniques in [9] and [11] are not considered for this comparison due to their high complexity. As can be observed from Table IV, the FPGA implementation results (increase in number of occupied slices) are in agreement with the theoretically estimated (increase in gate-count) results. 
The post-PAR minimum period of the ICVFD filter is slightly longer than that of the FDS based filter, due to the more complex structure of the CVFD element used in the ICVFD filter compared with the fractional delay structure used in the FDS based filter. V. CONCLUSIONS In this paper, a continuously variable fractional delay (CVFD) element is proposed, which is used to replace the unit delay in the prototype filter of the proposed interpolated continuously variable fractional delay structure based filter (ICVFD filter). The CVFD element provides a wide fractional delay range at the minimum complexity possible, and is capable of changing the fractional delay range on-the-fly. When compared to the existing fractional delay structure (FDS) based filter, the proposed ICVFD filter has a dynamically variable, wider cutoff frequency range. It is also capable of providing variable bandpass and highpass filter responses. The ICVFD filter is suitable for obtaining variable narrowband responses, especially in the lower region of the frequency spectrum. A two-stage approach for the FPGA implementation of the ICVFD filter was presented. It was shown that the FPGA implementation results are in agreement with the theoretical comparison of the ICVFD filter and the FDS based filter.
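To make the parameter relations concrete, the sketch below picks d, M and p for a target cutoff, assuming the cutoff scales inversely with the total delay Dc = p + (1+d)·M of (2), which is consistent with fc = fc_mod/(1+d) for the FDS based filter and with the multiband cutoff expression quoted in Section II. It is an illustration of the relations rather than the authors' design procedure, and it ignores the masking-filter design.

```python
def icvfd_cutoff(fc_mod, d, M, p):
    """Cutoff (normalized to pi) of the ICVFD filter for modal cutoff fc_mod."""
    Dc = p + (1.0 + d) * M          # total delay per tap, as in (2)
    return fc_mod / Dc

def choose_parameters(fc_target, fc_mod=0.2, M_max=8, p_max=4, d_steps=100):
    """Brute-force search over d in [0, 1), M and p for the closest cutoff."""
    best = None
    for M in range(1, M_max + 1):
        for p in range(0, p_max + 1):
            for i in range(d_steps):
                d = i / d_steps
                fc = icvfd_cutoff(fc_mod, d, M, p)
                err = abs(fc - fc_target)
                if best is None or err < best[0]:
                    best = (err, d, M, p, fc)
    return best

err, d, M, p, fc = choose_parameters(fc_target=0.05)
print(f"d = {d:.2f}, M = {M}, p = {p} -> cutoff ~ {fc:.4f} (target 0.05)")
```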
Phase Diagram for a Lysyl-Phosphatidylglycerol Analogue in Biomimetic Mixed Monolayers with Phosphatidylglycerol: Insights into the Tunable Properties of Bacterial Membranes Abstract Ion pairing between the major phospholipids of the Staphylococcus aureus plasma membrane (phosphatidylglycerol, PG, and lysyl-phosphatidylglycerol, LPG) confers resistance to antimicrobial peptides and other antibiotics. We developed 3adLPG, a stable synthetic analogue which can substitute for the highly labile native LPG in biophysical experiments examining the membrane-protecting role of lipid ion pairing in S. aureus and other important bacteria. Here we examine the surface charge and lipid packing characteristics of synthetic biomimetic mixtures of DPPG and DP3adLPG in Langmuir monolayers, using a combination of complementary surface-probing techniques such as infrared reflection-absorption spectroscopy and grazing-incidence x-ray diffraction. The resultant phase diagram for the ion-paired lipids sheds light on the mixing behavior of lipids in monolayer models of resistant-phenotype bacterial membranes, and provides a platform for future biophysical studies. The aminoacyl lipids produced by a wide range of bacteria are becoming increasingly recognized as clinically relevant virulence factors, due to the role they play in phenotypic adaptations to the physical and biochemical stressors which confer intrinsic defence against infection. [1,2] The most widely studied of these lipids is lysyl-phosphatidylglycerol (LPG), for which the genomic regulation and biosynthetic pathways in Staphylococcus aureus have recently been elucidated in some detail. [3,4] Data from microbiological assays have shown that an increased proportion of LPG in S. aureus membranes correlates with resistance to both host defensive peptides [5] and membrane-active therapeutic antibiotics. [6] The mechanisms facilitating such resistances are assumed to involve the tuning of target membrane physical properties, notably those of interfacial charge and lipid ordering, which are influenced by the LPG content in the bacterial membranes. [7] Biophysical investigations into these phenomena have been hampered by the labile nature of native LPG, which is readily hydrolysed under mild conditions, therefore exposing any such study to the risk of artefact. [8,9] To this end, stable LPG analogues have been synthesized.
One such analogue, lysyl-phosphatidylethanolamine (LPE), exhibited an inhibitory effect on antimicrobial peptide activity when incorporated into vesicles containing phosphatidylglycerol (PG). [10] A second analogue, 3-aza-dehydroxy lysyl-phosphatidylglycerol (3adLPG), facilitated enhanced membrane ordering and antimicrobial peptide resistance when mixed with PG under mildly acidic conditions, [11] which are known to promote PG/LPG ion pair formation in model bacterial membranes. [12] In order to gain a better understanding of the influence of lipid ion pairing on membrane structure and interfacial properties, we conducted a high-resolution study of biomimetic mixtures of dipalmitoyl-3adLPG and DPPG in Langmuir monolayers. The natural phospholipid composition (Figure 1A) observed in different S. aureus strains is dominated by PG (~40-70 %) and LPG (~30-55 %) with saturated and iso-branched fatty acids. Cardiolipin (CL) is present at 4-9 %. [7] To simplify the lipid composition for the purposes of our biophysical investigations, only binary mixtures of DPPG and DP3adLPG (Figure 1B) were studied. Since CL is fully deprotonated at physiological pH [13] and is known to promote negative curvature, [14] its exclusion from the monolayer model removed the possibility that it might cause distortions in the chain packing. The use of lipids possessing only palmitoyl chains allowed us to focus on the charge interaction between the head groups, using total reflection x-ray fluorescence (TXRF), and to determine a phase diagram for condensed monolayers using grazing-incidence x-ray diffraction (GIXD). In all of our experiments, premixed lipid solutions were spread as Langmuir monolayers at the gas/liquid interface on a pH 7.4 solution, with the addition of 1 mM CsBr for the TXRF experiments. In earlier work, [15,16] we encountered the problem that the ultrapure water used in the subphase contained traces of calcium. To avoid competition of the divalent calcium ions with the monovalent cesium ions for interaction with the negatively charged lipid phosphate groups, [17] 50 μM EDTA was added. At pH 7.4, DPPG is assumed to be fully ionized, [18,19] whereas DP3adLPG can be either zwitterionic or positively charged, in the same way as the native LPG (Figure 1C). [9] To make comparisons between the charge states of the different DPPG/DP3adLPG mixed monolayers, TXRF measurements [17] were performed at a surface pressure of 30 mN·m−1 (Figure 2), the physiological lateral pressure in membranes, [20] when all of the different compositions were in the condensed phase (Figure 3A). Figures 2A and B show the Lα and Lβ bands of the subphase Cs+ and the Kα signal of Br−, both of which exhibited high intensities, allowing quantitative evaluation of the integrated values (Figure 2C). The pure DPPG monolayer has a high negative charge which attracts Cs+ ions to the interface. The addition of DP3adLPG at a 0.33 mole fraction reduces the charge and therefore the amount of attracted Cs+. At xDP3adLPG = 0.5, a charge-neutral monolayer would be expected if both amino groups of DP3adLPG were protonated and full ion pairing occurred. [12] However, the residual negative charge of the monolayer proves that this is not the case. Assuming that DPPG is fully deprotonated at pH 7.4 (Figure 1B), [15,16] at xDP3adLPG = 0.5 half of the total lipid molecules would be expected to carry a net negative charge.
The residual negative charge in the system could only result from the presence of a mixture of positively charged DP3adLPG (phosphate deprotonated and both amines protonated) and the zwitterionic species (phosphate deprotonated, ε-amine protonated and α-amine uncharged) (Figure 1C). The small irregular shapes of the condensed-phase domains observed in the 1:1 DPPG/DP3adLPG monolayer (Supporting Information Figure S2) are also indicative of a charge imbalance in the lipid mixture. The extrapolated point of zero charge is at xDP3adLPG = 0.561 (Supporting Information Figure S1A). Examining the evolution of the bromide signal as a function of xDP3adLPG supports the same interpretation (Figure 2C), where monolayers with xDP3adLPG > 0.5 attract Br− to the interface in proportion to the DP3adLPG concentration. At xDP3adLPG = 0.5 and lower, the signal is negative because of the repulsive forces between the negatively charged monolayers and bromide, which reduce the bromide concentration at the interface below the buffer value. Extrapolating the decrease in Br− intensity (Supporting Information Figure S1B) suggests that the interface would be neutralised at xDP3adLPG = 0.524, a value slightly lower than that estimated from the Cs+ signal. Averaging both values implies that approximately 15 % of the DP3adLPG was zwitterionic, with the remaining 85 % carrying a net positive charge. Using this ratio of the different protonation states, the Henderson-Hasselbalch equation gives a predicted DP3adLPG α-amine pKa of 8.15. Due to the sensitivity of the TXRF measurements, it can be readily seen (Figure 2) that when one charged lipid is in the minority, its corresponding counter-ion is repelled from the interface. This clearly indicates the formation of a neutral ion-paired compound [21] between the PG and 3adLPG, when the latter carries a net positive charge (Figure 1C). In this respect, 3adLPG is both structurally and functionally analogous to the native bacterial lipid. [9,12] [Figure 1. A) The natural phospholipid composition of the S. aureus membrane. The major components are phosphatidylglycerol (PG), lysyl-phosphatidylglycerol (LPG), and cardiolipin (CL). [7] The numbers represent the pKa values reported in the literature for LPG. [9] B) The components of the model system for biophysical investigations: 1,2-dipalmitoyl-sn-phosphatidylglycerol (DPPG) and 1,2-dipalmitoyl-sn-3-aza-dehydroxy lysyl-phosphatidylglycerol (DP3adLPG). C) Predicted charge states of LPG at pH 7.4 according to the pKa value of the α-amine given in A (calculated using the Henderson-Hasselbalch equation).] The extent to which ion pairing between DPPG and DP3adLPG influenced their packing behaviour was probed using a number of monolayer techniques in addition to GIXD. Langmuir isotherms show that both the pure lipids and their various mixtures exhibit a first-order phase transition from the liquid expanded (LE) to a liquid condensed (LC) phase (Figure 3A). The transition is characterized by a pronounced LE/LC coexistence region (see fluorescence microscopy images in Supporting Information Figure S2 and infrared reflection-absorption spectroscopy experiments in Figures S3 and S4). The transition pressures of the mixtures are clearly lower compared with those of the pure lipids (Figure S5), with the DPPG/DP3adLPG 2:1 mixture exhibiting the lowest value.
This suggests that the 2 : 1 mixture forms condensed phases more readily than the other mixtures, a phenomenon which has hitherto not been observed for native LPG, as a previous study on PG/LPG mixed monolayers used lipids which did not undergo first-order transitions. [12] The in-plane structure of the condensed phases was determined at the Angstrom level by GIXD. The monolayer of DPPG is characterized by three diffraction peaks above the horizon (Q z > 0) in the wide-angle region (at high Q xy ) at all the lateral pressures investigated. Three Bragg peaks are typical for an oblique lattice structure with tilted chains ( Figure 3B). The Bragg peak positions, their full-widths at half-maximum (FWHM) and all lattice parameters obtained for DPPG, DP3adLPG and the different mixtures at 20°C and different surface pressures are listed in Supporting Information Tables S1-S5. The crosssectional chain area of A 0 = 19.7 Å 2 indicates reduced rotational freedom, but it is slightly larger compared with the reported values of DPPG on water containing 1 mM CsBr. [18] Plotting 1/ cos(t) vs. the lateral pressure and extrapolating to zero tilt angle allows the determination of the tilting transition pressure. [22] The extrapolated value (69.6 mN · m À 1 ) is too high to be experimentally determined. Interestingly, this value is much larger compared to the one determined on a water subphase (50.5 mN · m À 1 ). [18] This shows clearly that on water (pH~5.8) with 1 mM CsBr, DPPG is only partially ionized. However, the ionization degree depends not only on the subphase pH but also on the charge state of a mixing partner. In mixtures with DHDAB (positively charged molecule), the ionization degree of DPPG is interestingly higher compared with the pure monolayer. [18] For this reason, we chose to conduct our experiments at pH 7.4, in order to avoid the presence of protonated DPPG. DP3adLPG exhibits a higher LE/LC transition pressure compared to DPPG ( Figure 3A). The head group is quite large leading to a larger molecular in-plane area of 48.0 Å 2 (at 30 mN · m À 1 ) compared to 45.2 Å 2 for DPPG. The lattice structure is the same (three diffraction peaks of an oblique lattice structure) but the cross-sectional area of the chains is with 20.0 Å 2 larger and typical for a rotator phase, as a result of accommodating the bulky head group. Two of the DPPG/DP3adLPG mixtures, 1 : 1 and 1 : 2, exhibit the same oblique in-plane lattice structure. However, the mixture DPPG/DP3adLPG 2 : 1 displays only two diffraction peaks characteristic of a rectangular in-plane lattice. The chains are tilted in the direction of the nearest neighbour (NN). Based on isotherm and GIXD data, a putative phase diagram has been constructed ( Figure 4A). The exact dimensions of the two-phase co-existence regions are unknown and would require many more mixtures to be studied what was clearly not the aim of this work. But it indicates the formation of a congruent melting compound in the binary system. The two lipids DPPG and DP3adLPG are completely miscible in the liquid-like LE phase. However, the miscibility is not ideal as the molecular area vs. mole fraction ( Figure S6) demonstrates. The molecular area in the mixtures is clearly much smaller compared to the expected one of ideal mixtures or completely de-mixed systems indicating preferred interactions between the two unlike compounds. 
This suggests that even in the fluid state, the lipids are associated through head group-driven ion pairing, which may explain the ordering effect these associations confer on fluid-phase bilayers. [11] In the condensed phase of the monolayer, the congruently melting compound DPPG/DP3adLPG 2:1 forms an L2 phase separated from the oblique phases on both sides of the phase diagram by small miscibility gaps. This indicates that, in addition to the proportion of DP3adLPG regulating the surface charge through ion pairing with DPPG, it also alters chain packing and monolayer condensation, but in a somewhat unexpected way. Although the near surface neutrality of the 1:1 mixture demonstrates a higher degree of ion pairing, and thus presumably a more condensed and stressor-resistant combination, [11,12] it is the 2:1 mixture which, although more anionic, appears to have the greater propensity to form a stable condensed phase. How this proportion of the lipids might affect bacterial physiology in nature requires further study; however, it should be noted that for a number of S. aureus strains, when grown at physiological pH, the proportion of anionic to cationic lipids in their membranes is approximately 2:1. [7,23] Despite the differences in lattice geometries between the 2:1 and the other DPPG/DP3adLPG mixtures, in all cases the tilt angle of the chains decreases with increasing lateral pressure (Figure S7). Comparing the tilt angles at 30 mN·m⁻¹ reveals an interesting behaviour of the mixtures (Figure 4B). The tilt angle is quite constant over a wide composition range, showing that the thickness of the membranes does not depend on the composition. Therefore, despite alterations in membrane charge and condensation potential, altering the DP3adLPG content should not affect the thickness of bilayers in which it is combined with DPPG. It should be noted that, due to the difference between the calculated α-amine pKa of native LPG (pKa 7) and that of DP3adLPG (pKa 8.15), the pH conditions modelled in this study would be comparable to the native lipid at pH 6.25 (according to the Henderson-Hasselbalch equation). In S. aureus, the proton gradient across the plasma membrane ensures that the outer leaflet has a pH of 6.25-6.65, [24] which means that our monolayer model provides an accurate mimetic for the environment-facing half of the lipid bilayer. Thus, 3adLPG is not only useful in biomimetic model membranes used to assess antimicrobial peptide mechanisms against S. aureus, [11] but should also fulfil the need for more biorelevant lipid environments for the biophysical study of bacterial membrane proteins. [25] It is increasingly recognised that, in order to improve our knowledge of the preventative and therapeutic interventions which can be made in host/pathogen interactions, a multidisciplinary approach is required. Bacterial membrane biophysics can play a very important role in this field, by contributing to research elucidating the mechanisms of susceptibility and resistance to novel therapeutics such as antimicrobial peptides, [26] and by facilitating structural and functional studies on membrane-associated bacterial virulence factors. The validity of such biophysical research relies on the biological relevance of the model systems employed, which becomes more important as they necessarily increase in complexity to improve their biomimetic proximity. One way to ensure this would be to use lipids extracted from bacteria. [25]
However, for studies which require membranes of defined composition whose physical properties may be tuned to suit specific purposes, especially those related to S. aureus and other aminoacyl-lipid-containing pathogens, 3adLPG may prove to be a valuable tool.
Experimental Section
Experimental details are given in the Supporting Information.
Figure 4. A) Phase diagram (lateral pressure π versus the mole fraction of DP3adLPG) of the DPPG/DP3adLPG system. The phases observed by GIXD are obl (oblique) and L2 (orthorhombic with NN tilt). The first-order transition pressures (*) from the disordered LE to an ordered LC phase are determined by pressure-area isotherms. B) Tilt angle t at 30 mN·m⁻¹ (the lateral pressure in biological membranes [20]) versus the mole fraction x_DP3adLPG.
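As a compact illustration of the two Henderson-Hasselbalch estimates quoted in the text (the DP3adLPG α-amine pKa of 8.15 derived from the ~85:15 charged/zwitterionic ratio at pH 7.4, and the equivalent pH of 6.25 at which native LPG with pKa ~7 would show the same protonation), the following sketch reproduces both numbers. It is an illustrative calculation only and is not code from the original study.

```python
import math

# Henderson-Hasselbalch for the alpha-amine (weak acid/base pair):
#   pH = pKa + log10([neutral amine] / [protonated amine])
# The protonation fractions below are the TXRF-derived values quoted in the
# text; the script itself is only an illustrative check of the arithmetic.

pH = 7.4
frac_protonated = 0.85   # net positively charged DP3adLPG
frac_neutral = 0.15      # zwitterionic DP3adLPG (alpha-amine uncharged)

# 1) pKa of the DP3adLPG alpha-amine from the observed protonation ratio
pKa_DP3adLPG = pH - math.log10(frac_neutral / frac_protonated)
print(f"Predicted DP3adLPG alpha-amine pKa: {pKa_DP3adLPG:.2f}")   # ~8.15

# 2) pH at which native LPG (alpha-amine pKa ~7) reaches the same ratio
pKa_native_LPG = 7.0
pH_equivalent = pKa_native_LPG + math.log10(frac_neutral / frac_protonated)
print(f"Equivalent pH for native LPG: {pH_equivalent:.2f}")        # ~6.25
```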
Posterior Reversible Encephalopathy Syndrome Following Cadaveric-Donor Kidney Transplantation; A Challenging Diagnosis
A 19-year-old girl with haemolytic uraemic syndrome (HUS) and hypertension underwent a deceased-donor kidney transplantation. She developed two episodes of generalised tonic-clonic convulsions on the second postoperative day. Posterior reversible encephalopathy syndrome (PRES) was diagnosed based on the history and imaging. PRES was likely, as it is associated with factors which co-exist with chronic kidney disease. Perioperative mycophenolate mofetil (MMF), tacrolimus and prednisolone were prescribed by the nephrologist. Her serum tacrolimus level was normal at the time of convulsions. Other causes of seizures such as hypoglycaemia, electrolyte abnormalities, infection and intracranial haemorrhage were excluded. Elevated blood pressure associated with severe visual impairment was noted during the second episode of convulsions. The clinical diagnosis was confirmed by magnetic resonance imaging (MRI). She had a complete recovery without residual neurological deficits. Her blood pressure was controlled at the time of discharge and she had a well-functioning graft. Timely detection and institution of early treatment led to a successful recovery.
Introduction
Posterior reversible encephalopathy syndrome (PRES) is a neurological disorder characterized by visual disturbances, headache, seizures, vomiting, hypertension, and altered level of consciousness. Diagnosis is based on clinical presentation and imaging. With the availability of high-quality imaging modalities, PRES is reported more frequently. 1 The pathophysiology of PRES is unclear. The commonest mechanism implicated is an inability of the posterior cerebral circulation to autoregulate in response to acute changes in blood pressure. Cerebral hyperperfusion with resultant disruption of the blood-brain barrier results in vasogenic oedema, usually without infarction. In 70-90% of cases, vasogenic oedema is confined to the occipital and parietal regions of the cerebral hemispheres. Even though it is termed reversible, in some patients it can be permanent. 2 Resistant hypertension secondary to chronic kidney disease, HUS, solid organ transplantation and immunosuppressive therapy were the most likely contributory factors for the development of PRES in this patient.
Case presentation
The left radial artery was cannulated under sedation and monitoring was established. Anaesthesia was induced with 50 mcg of fentanyl, 50 mg of propofol and 30 mg of atracurium. Meropenem 500 mg was administered before the skin incision. She was intubated and ventilated with a tidal volume of 6 ml/kg ideal body weight and a rate of 12/min, with a PEEP of 5 mmHg. An ultrasound-guided unilateral transversus abdominis plane block was performed with 20 ml of 0.25% bupivacaine in addition to 3 mg of morphine. The left internal jugular vein was cannulated under ultrasound guidance. The intraoperative mean arterial pressure (MAP) was maintained above 100 mmHg with an infusion of noradrenaline. One litre of Ringer's lactate and 500 ml of 4% albumin were administered intraoperatively. Methylprednisolone 500 mg was infused over 30 min before completing the venous anastomosis. 20% mannitol 0.5 g/kg and 30 mg of frusemide were administered prior to removal of the vascular clamps. The reperfusion period was uneventful. Total blood loss was 300 ml. She was extubated at the end of surgery. MAP was maintained above 90 mmHg without noradrenaline. She was transferred to the intensive care unit for observation.
Intravenous fluids, tacrolimus 1.5 mg bd and prednisolone 20 mg once daily were prescribed as per the unit protocol. She maintained a urine output of >1 ml/kg/hour. On postoperative day two, she developed sudden-onset tonic-clonic seizures which resolved spontaneously within 30 seconds. Her blood pressure, capillary blood sugar and electrolytes were normal. Tacrolimus was withheld. Two hours later, she complained of visual impairment followed by convulsions and was treated with 2.5 mg of diazepam. During this episode her blood pressure was 180/110 mmHg. An infusion of labetalol was commenced. She had post-ictal drowsiness but was arousable, conscious and rational. She was prescribed amlodipine 2.5 mg twice daily and prazosin 1 mg three times daily. Levetiracetam was prescribed by the neurologist. The serum tacrolimus level was subtherapeutic (1.4 mcg/L). Tacrolimus was re-commenced by the nephrologist once the seizures were controlled. Contrast-enhanced computed tomography of the brain was normal. Magnetic resonance imaging (MRI) of the brain showed subtle focal changes in the gyri, such as diffusion restriction in the bilateral posterior parietal lobes (Figure 1). Grey-white differentiation was preserved. Magnetic resonance angiography (MRA) and magnetic resonance venography (MRV) were normal. She recovered rapidly without residual neurological deficit. There was no evidence of infection and CRP was <6 ng/ml throughout her hospital stay. Therefore, lumbar puncture and EEG were not indicated.
Discussion
Posterior reversible encephalopathy syndrome (PRES) is a neurological disorder characterized by visual disturbances, seizures, altered level of consciousness, headache, nausea and vomiting, and unique patterns in brain imaging, especially in diffusion-weighted MRI. 1 It was first described by Hinchey et al. in 1996 and is diagnosed more frequently with the availability of modern imaging techniques. 2 PRES following renal transplantation and immunosuppression is not uncommon; globally, it occurs in 0.4% of solid organ transplantations. 3 The incidence of PRES following liver transplantation is 0.59% versus 0.35% after kidney transplantation. 4 There were no reported cases in Sri Lanka following cadaveric renal or liver transplantation. This case posed a diagnostic dilemma. PRES is associated with a sudden increase in blood pressure; this was not evident in this patient following the first episode of convulsions. PRES is not uncommon among normotensive and hypotensive patients. 2 Furthermore, it can be associated with HUS. The exact pathophysiology is unknown and many theories have been postulated. Failure of cerebral autoregulation resulting in vasogenic oedema is an accepted theory. The posterior cerebral circulation is more vulnerable due to poor autoregulation and a relative lack of sympathetic innervation. 5 A sustained increase in MAP above 150-160 mmHg is beyond the autoregulatory range, which disrupts the regulatory mechanisms, resulting in endothelial damage and hyperperfusion. Patients with chronic high blood pressure are more prone to develop PRES. 2 This patient had refractory hypertension on a background of HUS. The hypertension-hyperperfusion theory is further supported by the clinical and radiological improvement of PRES following prompt treatment of high blood pressure. 5 We believe this patient had a similar mechanism, which resolved within minutes after commencement of antihypertensive medication.
The neuropeptide theory describes PRES induced by inflammatory mediators released during transplantation, leading to cerebral vasospasm, ischaemia and endothelial damage. 3 Tacrolimus and cyclosporin are well known to cause PRES; even at subtoxic levels, endothelial damage can still be evident. 6 Tacrolimus could have been a potential trigger despite subtherapeutic serum levels. Severe anaemia resulting in inadequate endothelial oxygenation can predispose to PRES; this patient had a preoperative haemoglobin of 9.4 g/dl, so it was an unlikely contributory factor. 2 Symptoms can be acute or subacute and nonspecific. Varying degrees of encephalopathy, ranging from mild confusion to stupor and coma, are reported in 28-94% of cases. In the early course of the disease, seizures are common, with an incidence of 74-87%. A dull and diffuse headache occurs in 50% of patients, and 39% of patients report visual disturbances. Papilloedema can be appreciated in patients with hypertension. 7 Aphasia, hemiparesis and opisthotonus are atypical and uncommon presentations. 8 There is no gold-standard diagnostic test. Generalized and focal slowing on EEG are common patterns seen in PRES, though not diagnostic. Neuroimaging includes CT, MRI and MRA. T2-weighted and FLAIR sequences on MRI are known to be more sensitive, 3 but these changes were absent in this patient. The classic pattern shows vasogenic oedema involving the parieto-occipital regions, mostly involving the cortical and subcortical areas of the brain; it is usually bilateral and symmetrical. 3 Management is mainly supportive. Early recognition and removal of precipitating factors is key. Control of blood pressure requires intravenous infusion of antihypertensive medication; the target should be a 25% reduction from the baseline. First-line drugs include labetalol, nimodipine and nicardipine, while hydralazine and sodium nitroprusside are considered second-line therapy. Hypomagnesaemia is a frequent occurrence in the acute phase of PRES. Magnesium sulphate is beneficial when PRES is associated with eclampsia/pre-eclampsia and hypomagnesaemia. 9 PRES following solid organ transplantation carries a mortality of 19%, and varying degrees of functional impairment are seen in 44% of patients. PRES of hypertensive aetiology and multiple comorbidities are poor prognostic factors. 3
Conclusion
PRES, if diagnosed and treated early, can have a benign course. A high index of suspicion is required, especially in patients undergoing renal transplantation with multiple risk factors, as highlighted in this case report. Identifying the high-risk patient preoperatively, followed by perioperative control of blood pressure, ensuring normal blood biochemistry and close monitoring of serum tacrolimus levels, can help diagnose this condition early and prevent fatal outcomes. The authors declare no conflicts of interest.
Expression patterns of GHRL, GHSR, LEP, LEPR, SST and CCK genes in the gastrointestinal tissues of Tibetan and Yorkshire pigs
The aim was to characterize the expression patterns of several genes in the gastrointestinal tracts of Tibetan pigs (TP) and Yorkshire pigs (YP) and to explore their correlation with the digestion and growth differences between the two breeds. The body weights and growth of YP and TP were studied at 6, 12 and 24 weeks of age, and their plasma levels of ghrelin (GHRL), leptin (LEP), somatostatin (SST) and cholecystokinin (CCK) were determined by enzyme-linked immunosorbent assay (ELISA). Blood and gastrointestinal sections (stomach, duodenum, jejunum, ileum, caecum and colon) were collected and assayed for mRNA expression of the six genes (GHRL, ghrelin receptor (GHSR), LEP, leptin receptor (LEPR), SST and CCK) by reverse transcription-qPCR (RT-qPCR). TP generally had higher mRNA expression of the GHSR, LEP, LEPR, SST and CCK genes compared to YP, and expressed lower levels of the GHRL gene in most tissues of the digestive tract. In both breeds, plasma levels of the expressed proteins were more closely correlated with feed intake and growth than with the mRNA levels of the target genes. Our data indicate that TP possess special gene expression patterns in the gastrointestinal tract compared to YP, which is consistent with their unique feed intake and adaptation to a harsh environment.
Different porcine breeds have different gene expression patterns that are correlated with their different phenotypes (Zhang et al. 2013; Shen et al. 2014). The ingestion of nutrients triggers numerous changes in gastrointestinal (GI) peptide hormone secretions that affect appetite and eating, especially in the stomach (Steinert et al. 2013). The development of the GI tract is closely related to GI peptides that affect feed intake, energy and glucose homeostasis, as well as immune functions (Monteiro and Batterham 2017). The Tibetan pig (TP) is an indigenous pig of the Qinghai-Tibet plateau of China and is a rare and valuable genetic resource. In addition to strong resistance against disease compared with other pig breeds, it shows good adaptation to poor-quality feed and has excellent meat quality and taste (Li et al. 2012). However, little is known about the molecular regulation of its unique digestion traits.
Ghrelin (GHRL), leptin (LEP), somatostatin (SST) and cholecystokinin (CCK) are important gastrointestinal hormones, with many studies suggesting that they have multiple physiological functions, stimulating a wide array of nutrition-related processes such as the regulation of ingestion, digestion, and absorption of nutrients (Krejs 1986; Barb et al. 2001; Little et al. 2005; Dong et al. 2009). These hormones have been found to be associated with growth and carcass traits by affecting food intake and energy balance in many animals of economic importance. The active proteins are found both in endocrine cells and in neurons of the peripheral or central nervous system. However, previous studies have focused on the protein levels in the main secretory cells, and little is known about the expression profiles of these genes in the GI tract of the TP.
Therefore, in order to employ the TP as a genetic breeding resource for the development of hybrid pigs with excellent meat quality and strong disease resistance, the growth, digestive and immune characteristics of this animal are worthy of further intensive investigation. The present work mainly evaluated the transcriptional expression of GHRL, LEP, ghrelin receptor (GHSR), leptin receptor (LEPR), SST and CCK in the gastrointestinal tissues of Tibetan pigs (TP) and Yorkshire pigs (YP). We also conducted an analysis of the growth characteristics of both breeds and their plasma levels of the GHRL, LEP, SST and CCK hormones.
Animals and samples. Thirty-six healthy purebred pigs, eighteen half-sib Tibetan and eighteen half-sib Yorkshire, were provided by the Chengdu Research Base of Sichuan Academy of Animal and Husbandry Science. The parents of the test TP were second generation following introduction from the Qinghai-Tibet plateau. Each breed consisted of three age groups: 6-week-old weanlings, 12-week-old young pigs and 24-week-old adult pigs, with three males and three females in each group. They were maintained under the same environmental and feeding conditions, i.e., with the same housing, feedstuff and water supplies (Table 1).
Experimental procedure. At specific times, the animals were weighed (Tables 2 and 3) and then euthanized by electric shock. Blood samples were collected in EDTA tubes, plasma was separated by centrifugation and stored at −20°C, and blood cells were retained for RNA isolation. Small tissue specimens were excised immediately from identical positions of the following organs: stomach (pyloric region mucosa), duodenum, jejunum and ileum, caecum and ascending colon. After washing in PBS, the samples were individually homogenized and snap-frozen in liquid nitrogen until required for total RNA determination. Maintenance of the animals and euthanization were performed according to Chinese animal welfare laws and regulations and approved by the Institutional Animal Care and Use Committee at Sichuan University under permission No. SCUBC20160603.
Quantification of plasma hormone levels. Plasma GHRL, LEP, SST and CCK were determined using a commercial porcine-specific ELISA kit (Senbeijia, China) according to the manufacturer's instructions.
Total RNA extraction and reverse transcription. Total RNA from pig tissues and from whole blood cells collected by centrifugation was extracted using RNAiso Plus Reagent (TaKaRa, China) according to the manufacturer's instructions under sterile conditions, and purity was verified by an A260/A280 ratio of 1.8-2.0 measured on a NanoDrop ND-8000 spectrophotometer (Thermo Fisher Scientific, USA). The integrity of the RNA was verified as suitable for RT-qPCR by denaturing agarose gel electrophoresis (the 18S and 28S rRNA bands were clear). RNA preparations were diluted to 500 ng/µl and promptly reverse-transcribed using the TransScript® One-Step gDNA Removal and cDNA Synthesis SuperMix kit (TransGen, China) according to the manufacturer's instructions.
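The spectrophotometric checks mentioned above (RNA concentration from A260, purity from the A260/A280 ratio, and dilution to the 500 ng/µl working concentration) can be illustrated with the short sketch below. The absorbance readings and dilution factor are invented example values, not data from this study; the 40 ng/µl-per-absorbance-unit conversion for RNA is the standard assumption for a 1 cm path length.

```python
# Illustrative sketch only: estimating RNA concentration and purity from
# NanoDrop-style absorbance readings. An A260 of 1.0 is taken to correspond
# to ~40 ng/ul of RNA (1 cm path length). All inputs are made-up examples.

A260 = 0.55           # example absorbance at 260 nm
A280 = 0.29           # example absorbance at 280 nm
dilution_factor = 50  # example dilution of the RNA preparation

concentration_ng_per_ul = A260 * 40.0 * dilution_factor
purity_ratio = A260 / A280

print(f"RNA concentration: {concentration_ng_per_ul:.0f} ng/ul")
print(f"A260/A280 ratio:   {purity_ratio:.2f} "
      f"({'within 1.8-2.0' if 1.8 <= purity_ratio <= 2.0 else 'outside 1.8-2.0'})")

# Preparing the 500 ng/ul working solution used for reverse transcription:
target = 500.0
if concentration_ng_per_ul > target:
    stock_ul_per_100ul = 100.0 * target / concentration_ng_per_ul
    print(f"Mix {stock_ul_per_100ul:.0f} ul of stock with "
          f"{100.0 - stock_ul_per_100ul:.0f} ul of water per 100 ul working solution.")
```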
Gene expression by qPCR. Nine pairs of primers were designed to be optimal (Table 4) using Primer Premier 5.0 software and NCBI Primer-BLAST (http://www.ncbi.nlm.nih.gov/tools/primer-blast/) and were commercially synthesized (Invitrogen, USA). Where possible, primers were designed across introns. The specificity of the primer pairs was validated by melting curve analysis and by sequencing of the PCR products. SYBR Green PCR assays were performed on an iQ5 iCycler iQ™ Real-Time PCR Detection System (Bio-Rad, USA). Each 15 µl SYBR Green PCR reaction contained 1.0 µl cDNA, 0.75 µl sense primer (100 µM), 0.75 µl anti-sense primer (100 µM), 7.5 µl SYBR Green PCR Master Mix (Bio-Rad) and 5 µl PCR-grade water. The parameters for real-time PCR were as follows: a pre-run at 95°C for 3 min, then 40 cycles of a 5 s denaturation step at 95°C, followed by an optimal annealing temperature (Table 4) step for 10 s and a 72°C extension step for 10 s. Fluorescence was measured immediately after the end of each extension step. Gene expression was normalized to reference genes (RPL4, PPIA and YWHAZ) in order to correct for the variance of RNA input in the reactions in porcine tissues of different ages (Uddin et al. 2011). The amplification efficiency of each set of primers (listed in Table 4) was determined by running a log dilution series of purified conventional PCR products, and approximated 100%. Each sample was amplified in triplicate. A no-template control (NTC) was also included in each assay.
Statistical analysis. Relative gene expression compared to reference genes was calculated by the 2^-ΔΔCt method as described previously (Livak and Schmittgen 2001). Statistical analyses were carried out using GraphPad Prism 6 with Student's t-test between the two breeds; differences between means were considered significant at P < 0.05.
RESULTS
Growth performance. The average daily feed intake and average daily weight gain were significantly higher in YP than in TP at all three time points (Tables 2 and 3). However, TP had a significantly better feed/gain ratio than YP before 12 weeks of age, and the average daily feed intake and daily gain of TP decreased after 12 weeks of age.
Plasma hormone levels. Plasma hormone levels such as GHRL in the two breeds, LEP and SST in YP, and CCK in TP tended to increase with age (Table 5). Notably, there was a downward trend of LEP and SST levels in TP at 24 weeks and of CCK in YP at 12 weeks. Compared with YP, the LEP plasma levels in TP were significantly higher at all three time points; SST was borderline higher at 12 and 24 weeks, as was CCK at 12 weeks. TP had higher levels of GHRL than YP at 6 weeks, while this was reversed at the later ages.
GHRL and GHSR expression. The mRNA of GHRL was highly expressed in the blood cells, stomach, duodenum and jejunum, and in the lower colon and caecum, in both breeds (Figure 1A). In 6-week-old weanling pigs, GHRL mRNA levels in TP were significantly higher than in YP in the stomach, duodenum, jejunum and caecum. Compared to YP, only the blood cell, caecum and colon levels were significantly higher in the 12-week-old Tibetan piglets. In 24-week-old adults, only the duodenum levels were significantly higher than in YP. Overall, however, GHRL expression in the digestive tissues was significantly higher at 6 weeks in TP than in YP, but this situation was reversed in the older animals.
GHSR mRNA was expressed more highly in the duodenum and jejunum than in the other samples (Figure 1B). Expression of GHSR mRNA in TP was significantly higher than in YP in most tissues, except at 6 weeks. Overall, the relative expression of GHSR in the two breeds was found to vary with age.
LEP and LEPR expression. LEP expression levels in blood cells were much higher than in the other tissues of the two breeds (Figure 1C). The LEP expression levels in TP were significantly higher than in YP at 12 and 24 weeks, especially in the caecum. Overall, the LEP gene expression levels followed a pattern similar to GHSR when comparing the two breeds at the different ages. It is noteworthy that, compared to the other tissues, the highest levels of LEPR mRNA were found mainly in the small intestine and caecum of the two breeds, whereas nearly the lowest LEPR mRNA abundance was found in their blood cells (Figure 1D).
SST and CCK expression. SST was expressed at the highest levels in the blood cells and at the lowest levels in the large intestine of the two breeds (Figure 1E). The SST expression levels were significantly higher in most tissues of TP than in YP, except at 6 weeks of age. A noticeably higher CCK expression level was detected in the small intestine than in the other tissues in both breeds, and the lowest expression level was detected in the stomach (Figure 1F). In 6-week-old animals, the mRNA expression of CCK in YP was significantly higher than in TP in the ileum. However, this trend was reversed in most tissues of the older animals, except in the jejunum at 24 weeks.
DISCUSSION
In our case, we were only able to obtain 18 half-sib, rather than full-sib, TP for analysis, since the reproductive rate of TP is limited, with an average of only 5-8 Tibetan piglets per litter. We therefore chose to use six Tibetan piglets per cote; they did, however, have the same male parent. We used a previously determined set of three stably expressed reference genes to ensure reliable quantitative PCR results. The same method was used in our previous study for the normalization of mRNA expression in samples collected from the various tissues of the two breeds (Cheng et al. 2015). Therefore, we are confident that our results are reliable.
The average daily feed intake and average daily gain of TP had decreased significantly by 24 weeks, the age of sexual maturity (Gong et al. 2009). At the same time, there was a correlation with the plasma levels of GHRL, LEP and CCK. GHRL is important for growth performance and, together with LEP, regulates sex hormone levels (Shintani et al. 2001; Yadav and Deo 2013). Furthermore, the lower CCK at 24 weeks is also an important reason for the lower growth performance, through its effect on feed intake (Matson et al. 2015). The plasma hormone levels of both breeds correlated more strongly with the growth performance of the two breeds than the transcript expression patterns did. The former are systemically affected by many factors, while the latter are mainly regulated within the local gastrointestinal tract.
Our experimental results have shown that the transcriptional abundance of GHRL, LEP and their receptor genes, as well as SST and CCK, was influenced by age and tissue type, and the expression levels varied considerably between TP and YP. Previous research on β-defensins and Toll-like receptors 1-10 revealed similar expression patterns in different tissues of pigs (Qi et al. 2009; Uddin et al. 2013; Jiao et al. 2017).
Transcripts of GHRL were most abundant in the stomach, less abundant in the small intestine and least abundant in the large intestine. This result is consistent with the pattern previously described in pigs at different ages (Vitari et al. 2012). In addition to stimulating GH secretion, GHRL has various physiological functions: it strongly stimulates feeding and body weight gain while blocking LEP-induced feeding reduction (Shintani et al. 2001). Our results indicate that the GHRL expression level in 6-week-old TP is higher than in YP except in blood cells; however, at 12 and 24 weeks, this ratio between TP and YP is reversed. A possible reason for this is that TP may have a stronger appetite than YP during the weaning period; hence, the growth rate of TP is higher during weaning but lower later. At the ages of 12 and 24 weeks, the GHRL expression levels in the YP stomachs were significantly higher than in TP, which may be associated with the important physiological effect of GHRL in the gastrointestinal tract on gastric acid secretion and gastrointestinal motility, resulting in more efficient digestion and higher body weight. These results suggest that attention should be given to the manipulation of GHRL during the later weaning period if we wish to improve the economically valuable traits of TP. GHRL and GHSR are highly conserved across all vertebrate species examined, which indicates that the two genes have important physiological functions and are indispensable (Dong et al. 2009; Kaiya et al. 2013). GHRL had a higher expression pattern in YP; however, its receptor level was higher in most TP samples. GHRL has a widespread distribution in various tissues to stimulate GH release via GHSR, but the different expression patterns in tissues suggest that it may exert other regulatory activities via different receptors (Fujimiya et al. 2011; Kitazawa et al. 2011).
The expression pattern of LEP in most tissues of the two breeds is almost the reverse of that of GHRL, but with the same general trend as GHSR. The LEP expression levels in TP were higher than in YP in blood cells at 6 weeks. However, this trend was reversed in most tissues of the two breeds at older ages. These results are consistent with the opposing effects of GHRL and LEP on gastrointestinal emptying and food intake (Shintani et al. 2001; Vitari et al. 2010). They may explain why TP have lower levels of appetite-inducing GHRL and higher LEP levels in most tissues than YP, thereby helping the former to maintain stronger viability in a harsh feed environment. In both young and adult animals, the lower GHRL levels of TP possibly mirror their slow growth rate due to low feed intake and metabolism. The higher LEP levels may be related to the role of LEP in immunity and female reproduction (Yadav and Deo 2013; Perez-Perez et al. 2015). Our results show that the lowest mRNA abundance of LEPR, but the highest of LEP, appeared in the blood cell samples of the two breeds. A possible reason for this discrepancy is that LEPR, a class I cytokine receptor superfamily member with six isoforms, is primarily found in the hypothalamus and is involved in the satiety response (Perez-Montarelo et al. 2013). Another possible reason is that LEP, acting as an endocrine and paracrine regulating factor, is involved in the peripheral short-term regulation of food intake (Zieba et al. 2008).
The data reported here show that the highest SST expression occurs in the blood cells of both breeds. The gene expression of SST in tissues at different ages exhibited a trend similar to those of GHSR and LEP. SST exerts a powerful suppressive effect on gastric emptying, gallbladder contractility and the propulsive activity of the small and large intestines (Den Bosch et al. 2009). The expression patterns are consistent with the habits and growth characteristics of the two breeds, and may explain why TP have a rapid growth rate before weaning and a relatively slow growth rate thereafter. TP have a higher expression level of SST than YP except in the stomach, jejunum and colon at 6 weeks, which would result in a greater inhibition of gastric acid secretion, intestinal motility and digestive enzyme secretion (Corleto 2010).
CCK is a gut hormone and neuropeptide whose function is considered to be antagonistic to GHRL in its effects on appetite and metabolism (Matson et al. 2000; Little et al. 2005). The current data show that CCK expression levels in small and large intestinal tissues were higher in TP than in YP. This indicates that TP have higher stimulation of the exocrine pancreas and exhibit stronger gallbladder contractions to promote the secretion of pancreatic enzymes and bile acids, as well as to control gut motility and gastric emptying for digestion. This may be why TP have an excellent ability to digest harsh food within a hostile plateau environment. Additionally, since CCK is a satiety factor, TP will benefit from higher CCK levels in their adaptation to a cold environment and more limited nutrients.
CONCLUSION
Our experiment compared the hormone levels of GHRL, LEP, SST and CCK in plasma and the transcript expression patterns of GHRL, GHSR, LEP, LEPR, SST and CCK in the gastrointestinal tracts of eighteen half-sib purebred YP and eighteen half-sib purebred TP. The plasma hormone levels had a greater positive correlation with the growth performance of the two breeds than the transcript expression patterns did. Transcript expression levels of GHSR, LEP, LEPR, SST and CCK were higher in all samples of TP than in YP at older ages, with lower levels of GHRL transcript in the main tissues. Our results indicate that, compared to YP, TP have different expression patterns of these genes in the gastrointestinal tract, as well as different plasma levels of these hormone proteins. These observations contribute to an understanding of the genetic basis of the unique digestion of the TP, with a view to developing hybrid pigs with better growth performance while retaining the excellent quality of TP meat and the adaptation of this breed to limited feed. Nevertheless, our studies of gene expression patterns in the two breeds are limited, and a more definitive analysis will require more animals and further research.
Figure 1. Relative mRNA levels of the GHRL (A), GHSR (B), LEP (C), LEPR (D), SST (E) and CCK (F) genes in the tissues of Yorkshire and Tibetan pigs at different ages. The mRNA level was normalized against a set of three internal control genes (RPL4, PPIA and YWHAZ), and the relative index was determined against the transcript level in the caecum of Yorkshire pigs at 6 weeks of age; blood = blood cells; data are presented as means ± SEM (*P < 0.05).
Table 1. Diet formulation and nutrition content.
Table 2. Weights of Yorkshire pigs (YP) and Tibetan pigs (TP) at different ages. P-values were calculated between Tibetan and Yorkshire pigs at the same time point.
Table 3. Growth performance of Yorkshire pigs (YP) and Tibetan pigs (TP) at different ages. ADFI = average daily feed intake, ADG = average daily gain, F = feed, G = gain, (-) = no results for 6-week-old weanlings; values are expressed as means ± SD.
Table 4. Primer sequences for qPCR.
Table 5. Plasma hormone concentrations in Yorkshire pigs (YP) and Tibetan pigs (TP) at different weeks of age.
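To make the 2^-ΔΔCt normalization used in the Methods concrete, the sketch below works one example through. The Ct values are invented for illustration and are not data from this study; averaging the three reference-gene Ct values (which corresponds to a geometric mean of their expression levels) is an assumption of this sketch rather than a detail stated in the paper.

```python
# Illustrative 2^-ddCt calculation (Livak and Schmittgen 2001) normalized to
# three reference genes (RPL4, PPIA, YWHAZ). All Ct values below are made-up
# example numbers.

def delta_ct(ct_target, ct_reference_genes):
    """Ct of the target gene minus the mean Ct of the reference genes."""
    return ct_target - sum(ct_reference_genes) / len(ct_reference_genes)

def fold_change(ct_target, ct_refs, ct_target_calibrator, ct_refs_calibrator):
    """Relative expression (2^-ddCt) of a sample versus the calibrator sample."""
    dd_ct = delta_ct(ct_target, ct_refs) - delta_ct(ct_target_calibrator, ct_refs_calibrator)
    return 2.0 ** (-dd_ct)

# Example: a target gene in one tissue sample versus the calibrator sample
# (the caecum of a 6-week-old Yorkshire pig), triplicate Ct values already averaged.
sample = {"target": 22.1, "refs": [18.4, 19.0, 18.7]}
calibrator = {"target": 25.3, "refs": [18.6, 19.1, 18.9]}

rel = fold_change(sample["target"], sample["refs"],
                  calibrator["target"], calibrator["refs"])
print(f"Relative expression (2^-ddCt): {rel:.2f}")
```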
Emerging spintronics phenomena and applications
Development of future sensor, memory, and computing nanodevices based on novel physical concepts is one of the significant research endeavors in solid-state research. The field of spintronics is one such promising area of nanoelectronics, which utilizes both the charge and the spin of an electron for device operations. The advantage offered by spin systems lies in their non-volatility and low-power functionality. This paper reviews emerging spintronic phenomena and the research advancements in diverse spin-based applications. Spin devices and systems for logic, memories, emerging computing schemes, flexible electronics and terahertz emitters are discussed in this report.
I. Introduction
Conventional sensor, memory, and computing electronics exploit the charge of an electron for their operations. However, along with charge, an electron is also characterized by its spin angular momentum, or spin. It is the spin of the electron that manifests as the magnetism we see in magnetic objects of the macroscopic world. In the information technology age, magnetism has found industrial application in massive digital data storage. The field of spintronics is centered on the electron's spin in conjunction with its charge. As we near the end of several decades of scaling of CMOS technologies due to fundamental physical limitations, utilizing the spin degree of freedom might be a natural choice for next-generation technologies. An external energy source is not required to maintain a particular spin (or magnetic) state in a spintronic device. This property makes spintronic devices non-volatile and low power consuming, and hence attractive for the emerging era of mobile, wireless and scaled-down electronic applications. The validation of spintronics as an important field to pursue was propelled by the discoveries of giant magnetoresistance (GMR) [1,2] and tunneling magnetoresistance (TMR) [3,4]. These findings showed that a thin magnetic multilayer has a higher (lower) resistance when the magnetizations of the individual layers are aligned anti-parallel (parallel) to each other. Eventually, these structures were incorporated in the read heads of hard disks, where their scalability has resulted in ultrahigh storage density. Presently, the read heads are based on the magnetic tunnel junction (MTJ), in which the spacer layer between the two ferromagnetic layers is typically an oxide such as MgO. While in magnetic disk storage the read-MTJ is toggled by the magnetic field emanating from the bits on the disk, a more efficient and scalable way to switch the magnetization in an MTJ bit was found to be the spin-transfer torque (STT) [5-10]. An STT involves a transfer of angular momentum from a moving electron to the magnetic atom. STT-based devices have found recent commercial application in magnetic random access memories (MRAM). Apart from their use in computing electronics, magnetic devices are also highly relevant in a wide variety of sensors. Given the wide applications of magnetics/spintronics to date, there has been an aggressive research effort in this field aimed at improving existing applications and developing emerging ones. In this review, we focus on the advancement of spintronics research in some of the major areas of technical and practical significance. In Section II, spintronic devices for logic computation are discussed. Following this, in Section III, we focus on spin devices for MRAM.
In particular, the mechanism of spin-orbit torque (SOT) and subsequent research on SOT devices are detailed. Advancements in spintronic devices and systems for alternative computing methodologies and optimization machines are covered in Section IV. Progress on spin devices for flexible electronics and THz emitters is discussed in Sections V and VI, respectively.
II. Spintronics for logic
At present, most computational tasks are performed by microprocessors. These computing units use miniaturized solid-state elements, i.e., metal-oxide-semiconductor (MOS) transistors, to transfer and process information. The information is transferred by the electronic charge and is stored in the form of distinct voltage levels. The information processing capacity of a typical processor has been continuously increasing since its inception in the early 1970s. This has been largely due to aggressive scaling of the underlying complementary MOS (CMOS) technology, which has allowed numerous functions to be packed into a given processor area [11]. However, processor performance improvement has recently plateaued, partially due to increasing power density. Spintronic devices offer a potential solution to this problem through reduced power consumption owing to their non-volatile nature [12-14]. The Datta and Das transistor is among the very first proposed spin logic devices [15,16]. The suggested structure, as illustrated in Fig. 1(a), consists of a spin polarizer and an analyzer (a magnet) connected by a non-magnet which has spin-orbit coupling (SOC). The application of an electric field on the SOC channel results in precession of the spin of the electrons injected from the polarizer. The precession of the spin is a consequence of the Rashba effect, which dictates that an electric field manifests as a magnetic field in a moving electron's frame of reference [17]. The phase of the injected spin can therefore be controlled by a gate voltage applied on top of the channel. On arrival at the analyzer, if the electron's spin is in phase (out of phase) with the analyzer's magnetization, it results in a low (high) channel resistance corresponding to the "On" ("Off") state of the transistor. However, the use of phase makes it difficult to implement logic circuits, because the phase is very sensitive and is a continuous variable rather than a discrete binary one. Therefore, phase information may be more suitable for sensors, which deal with analog output values. To date, a fully functional spin field-effect transistor (spin-FET) has not been realized, due to the non-ideality of the polarizer and analyzer layers and the scattering processes in solids which randomize the spins [16]. However, there have been a few demonstrations of gate-induced spin precession and its subsequent detection in two-dimensional electron gas (2DEG) systems. For example, Koo et al. detected spin precession in an InAs high-electron-mobility channel [18]. NiFe electrodes were used as the spin injector and detector in this configuration (Fig. 1(b)). As shown in Fig. 1(c), an oscillatory modulation of the channel conduction was observed when the magnetization of the polarizer and analyzer was aligned along the channel direction (black curve). It should be noted that with the magnetization aligned along an in-plane direction transverse to the channel there was no spin precession, as the Rashba field and the injected spins are collinear (red curve).
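The gate-controlled precession at the heart of the Datta-Das proposal can be captured in a short numerical sketch. The textbook expression Δθ = 2m*αL/ħ² for the precession phase of a Rashba channel is used below, but the effective mass, channel length and range of Rashba coefficients are illustrative assumptions rather than parameters from Ref. [18].

```python
import numpy as np

# Minimal sketch (assumed illustrative parameters): spin precession phase in a
# Datta-Das spin-FET channel with Rashba coupling, Delta_theta = 2*m*alpha*L/hbar^2,
# and the resulting modulation of the analyzer conduction ~ cos^2(Delta_theta/2).

hbar = 1.054571817e-34           # J*s
m_e = 9.1093837015e-31           # kg
e = 1.602176634e-19              # C

m_eff = 0.05 * m_e               # assumed effective mass of the 2DEG channel
L = 1.0e-6                       # assumed channel length (m)

def precession_phase(alpha_eVm):
    """Precession phase for a Rashba coefficient given in eV*m."""
    alpha = alpha_eVm * e        # convert to J*m
    return 2.0 * m_eff * alpha * L / hbar**2

def relative_conductance(alpha_eVm):
    """Normalized on/off modulation of the channel for collinear polarizer/analyzer."""
    return np.cos(precession_phase(alpha_eVm) / 2.0) ** 2

# Sweep an assumed gate-tunable range of the Rashba coefficient.
for alpha in np.linspace(5e-12, 1.5e-11, 5):
    print(f"alpha = {alpha:.2e} eV*m -> phase = {precession_phase(alpha):6.2f} rad, "
          f"G/G0 = {relative_conductance(alpha):.2f}")
```

With these assumed values the phase sweeps through several multiples of 2π, which is consistent with the oscillatory conductance modulation described above.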
It was later shown that spins injected using circularly polarized light can also be modulated using a top gate and detected using the spin Hall effect (SHE) [19]. Recently, spins injected electrically using a magnetic layer were detected in a similar way using the inverse spin Hall effect (ISHE) [20]. While the spin-FET emulates the functionality of a conventional MOSFET, numerous other spin logic devices which exploit alternative magnetic properties have been realized. One such scheme uses the magnetic domain wall (DW) and its motion to transfer information [21-24]. A magnetic DW is the interface between two oppositely aligned magnetic regions, which moves either along or opposite to an applied magnetic field. The motion of a DW around a corner in the presence of a rotating magnetic field has been used to implement a NOT gate, as shown in Fig. 2(a) [25]. A nanowire is patterned in the form of a cusp, with its two ends serving as the input and output terminals of the NOT gate. The magnetization direction with respect to the DW motion direction represents the two logic states. A magnetic field is rotated in the anti-clockwise direction as shown in Fig. 2(a). A DW that is present at a point P, with the magnetization aligned along the +x direction on its left side (representing an input of logic "1"), moves along the first corner of the cusp to the point Q when the magnetic field is rotated from the +x to the +y direction. In the next cycle of magnetic field rotation, from the +y to the -x direction, the DW moves to the point R. However, on arrival at the point R, the magnetization to the left of the DW now points along the -x direction due to the continuity of the magnetization in the cusp. A logic "1" input is thereby converted to logic "0" in a half-cycle of magnetic field rotation. The rotating magnetic field in effect serves as a clock signal. Input and output traces of the NOT gate obtained using the magneto-optic Kerr effect (MOKE) are shown in Fig. 2(b). It should be noted that there is a propagation delay of half a clock cycle from input to output, as explained previously. Shift registers were also implemented by patterning many such cusps adjacent to each other. Several other nano-patterning schemes have been used to implement NOT gates [23], AND gates [21], buffers [26], and shift registers [24]. The DWs in these devices are moved either with the help of a magnetic field or using spin currents. A big disadvantage of DW-based logic is the need for an external magnetic field to move the DW, which also prevents individual controllability of each DW in a device with multiple DWs. This hinders their scaling for practical applications. Ideally, the DW-based logic gates can be designed such that the DWs are moved by currents instead of magnetic fields. Another drawback of DW-based logic is the size of the DW, which can be anywhere between 7 and 100 nm; the devices containing these DWs will be even larger. In comparison, the CMOS industry is already mass-producing the 7-nm technology node and is approaching the 5-nm node in the coming years. The DWs are also prone to pinning due to inhomogeneity of the patterned channel, which can cause reliability issues. Another scheme for implementing spintronic logic is via switching of a bistable magnetic element, whose two stable states represent binary information. Behin-Aein et al. proposed one such implementation in which spin information can be transferred using spin current from one magnetic element to another [27].
In their proposed scheme, shown in Fig. 3(a), a voltage Vsupply is used to apply spin torques that position the output magnetic bit in a neutral (high-energy) state which lies between the two stable (low-energy) states. The application of a Vbias signal, which is relatively small compared to Vsupply, transmits the information from the input magnet to the output magnet through the channel. A semiconductor spin channel can be used as an interconnect as it supports a longer spin coherence length. In the presence of both Vsupply and Vbias, the output magnet switches into either of the stable states depending on the state of the input bit. Information from a few of these input magnets can be combined to implement logic functions like AND/OR. Figure 3(b) shows two cascaded gates with two variable inputs and a fixed middle input. If the middle input is aligned along the logic "1" direction, the gate functions as an OR gate, as the net spin current, which is determined by the superposition of the spin currents from the three inputs, will be in the logic "1" direction if either of the variable inputs is "1". On the other hand, if the middle input is fixed in the "0" direction, the gate emulates AND functionality, as the net spin current will be along the logic "1" direction only when both variable inputs are "1". The output terminal of the gate receives information when Vsupply is applied to it. The information is transferred to the next gate on the application of Vsupply in the next clock cycle. While this scheme of magnetic logic was proposed almost a decade ago, an experimental demonstration of such a device has not yet been shown. The proposal relies on ideal behavior of the magnets, spin currents and spin channels. While a metallic channel is an ideal choice for an interconnect due to its low resistance, the short spin coherence length in metals will result in a loss of information before it is transferred from one magnet to another. Although a semiconducting channel supports a longer spin coherence length, the resistance mismatch between the ferromagnet and the semiconductor makes the spin transfer efficiency very low. In addition, the superposition of spin currents coming from the various inputs will depend strongly on the interconnect length. Therefore, interconnects have to be designed precisely to obtain the desired functionality, which may lead to scalability issues for large logic-array operation. Bhowmik et al. demonstrated that the spin current generated using the SHE in a nonmagnetic element such as Ta or Pt can perform the clocking function [28]. In their work, the magnetization state of an input magnet was used to control the final state of three nano-magnets, as illustrated in Fig. 3(c). When the input magnet is set in the up (down) magnetization direction, the final state of the three bits stabilizes in the down-up-down (up-down-up) configuration. The three nano-magnets, which have dipole coupling between them, change their state only when a charge current is passed through the underlying Ta layer, hence the clocking function. Current-induced magnetization switching using SOT (refer to Section III for details on SOTs) occurs above a certain threshold value of current density. This property has also been used to build logic functions. A SOT device with two current inputs acts as an AND gate; the final magnetization state of the magnet represents the output.
The individual value of each current input is maintained below the threshold switching current of the device in order to achieve the AND functionality [29]. The direction and magnitude of the assist magnetic field were used to construct other logic functions such as OR, NAND, and NOR. For example, in order to implement an OR gate, the magnitude of the assist field was increased such that the threshold switching current decreased below the individual input current values. This results in switching of the magnetization for all input combinations except when both inputs are zero, thereby emulating an OR gate. SOT devices which show voltage control of magnetic anisotropy (VCMA) [30] have also been used to implement spin logic. The VCMA in these devices modulates the threshold switching current of the magnet [31] (Fig. 4). A multifunctional OR/AND gate was implemented using a single device, depending on the initial magnetization states, as shown in Fig. 4(c,d). The switching current and the gate voltage act as the two input parameters, while the output is measured using the anomalous Hall resistance (Rxy). These logic devices, however, have limited fan-out, as the readout is carried out using the anomalous Hall resistance. Again, device-to-device variation normally results in different threshold currents for different devices. The proposed device also requires an assist magnetic field, which makes scalability difficult. Spin waves, which represent a propagating disturbance in a magnetic material, have also been proposed as a viable means to construct spin logic. The spin wave acts as the information carrier, and its phase can be varied by a magnetic field produced by a current passing through a waveguide [32,33]. Spin waves from different sources add up constructively or destructively depending on the input current values to realize Boolean functionalities. However, since the phase of a spin wave is prone to disturbance from magnetic inhomogeneity and imperfections, it has been proposed to use the wave amplitude instead [34,35]. The nonreciprocity of the spin wave amplitude for opposite propagation directions can be exploited to implement a simple inverter or pass gate [34]. A larger value of nonreciprocity corresponds to a larger readout margin. It was shown that a Ta/Py bilayer system exhibits a giant nonreciprocity factor (the ratio of the spin wave amplitude at positive and negative field) of ~14 (60 in the frequency domain) for a Ta thickness of 8.2 nm, as shown in Fig. 5 [36]. Spin wave-based logic is still at a very preliminary stage. In comparison to charge currents, spin waves are a very weak information carrier, and sustaining reliable, long-distance transfer of information using spin waves requires specific magnetic materials. A demonstration of a spin wave-based logic network is yet to be seen due to these limitations and requirements. Magnetic logic devices utilizing skyrmions [37-39], which are topological spin states, have also been proposed. Skyrmions can be driven either by magnetic fields or by spin currents, and they can be manipulated by dynamically modulating the properties of the magnetic film using an electric field [40]. In one such scheme, this skyrmion behavior was utilized to simulate a skyrmion transistor. In the proposed transistor, the skyrmions were driven by a spin current from one end of the spin channel to the other. A gate placed partway along the channel was used to annihilate the skyrmion by modulating the anisotropy of the magnetic film under it, resulting in the transistor "off" operation [41].
A hybrid structure based on skyrmions and DWs has also been put forward [42]. The DW and the skyrmion can be interconverted by designing magnetic channels of different widths. By designing specific nanostructures that duplicate, merge or annihilate skyrmions, logic gate functionalities can be achieved [43]. While skyrmion logic devices are promising, they are still at a conceptual and simulation stage; an experimental demonstration of their functionality and scalability is awaited with interest. As we see in this Section, there have been multiple proposals and demonstrations of spin logic devices and systems; however, these implementations still have a long way to go before they compete with CMOS logic. The propagation delay of a conventional CMOS logic gate is generally a few ps. In comparison, the delay or speed of typical spin devices is limited to ns or GHz, respectively. The advantage offered by spintronics is, however, their non-volatile nature, which can save a tremendous amount of power during data processing, as circuit blocks that are in the execution pipeline but not being executed can be switched off. Interconnects are a big issue in all spin logic circuits. An ideal interconnect should be able to transfer spin information over long distances. However, in reality, spin interconnects made of metals have relatively short spin coherence lengths, making them unsuitable for spin logic circuits. In contrast, the interconnects in current silicon logic circuits can even run from one end of the chip to another and over many layers. This is due to the fact that electric charge is a conserved quantity but spin is not; therefore, it is not easy to transfer spin information over a long distance. Scalability is another important feature for any logic device. While the MTJ, which is the building block of spintronic memory, is highly scalable, even comparable to CMOS gates, the spin logic devices that have been proposed and demonstrated so far fall short on this criterion. For example, the DW-based logic devices that rely on the presence of DWs have dimensions ranging from tens to hundreds of nanometers. The majority of work on spin logic to date has focused on single-device demonstrations. The performance of these devices is still limited in terms of on/off ratio, speed, and scalability in their present form. For example, the on/off ratio of a thin-film-transistor panel is more than 100, which is regarded as poor performance for a CMOS device; on the other hand, one of the best spin filters, an MgO tunnel junction, has an on/off ratio of less than 10. It is also essential to realize an interconnected network of spin logic devices to show a significant advantage of a spin computing system over CMOS. The work on spin logic devices and systems is still at a very preliminary and exploratory stage. The viability of an all-spin-logic network is yet to be demonstrated; a more practical alternative would be to explore hybrid CMOS-spin logic.
III. Spin devices for memories
The existing computing memory hierarchy has tremendous performance gaps in terms of speed and density/cost, which leaves abundant scope for improvement. For example, the write speeds of cache, main and eFLASH memory are around 5 ns, 30 ns and 0.1 ms, respectively [44,45]. Emerging memory technologies which have an intermediate speed between the cache and the main memory, or between the main memory and eFLASH, are promising for bringing performance improvements to future computing systems.
In addition, future applications like the Internet of Things (IoT) demand fast computing at the edge in an energy-efficient manner. Spintronics, in the form of MRAM, offers one such potential memory solution due to its non-volatility and high speed of operation [12,13,46]. The basic unit of an MRAM [47] is an MTJ [48], which stores digital information in the form of two bistable magnetization states of a thin magnetic layer. These magnetization states can be read out using the resistance value of the MTJ. Writing the MTJ involves switching the magnetization from one stable state to another by crossing a high energy barrier between them. Ideally, it is desired to cross this energy barrier with a minimum input energy. At the same time, the energy barrier should not be so small as to allow undesired magnetic switching due to thermal energy from the surroundings. MRAM research aims to develop device solutions that balance the conflicting requirements of low switching energy and high thermal stability. The first generation of MRAM, also known as toggle MRAM, used magnetic fields generated by the current in metal lines, as shown in Fig. 6(a), to switch the MTJ. This writing method is both energy-intensive and non-scalable. The second-generation MRAM, which is based on STT (STT-MRAM) [49], operates on the principle of the transfer of angular momentum from a spin-polarized electron to the atoms of the magnetic bit [5,49,50]. In an STT-MRAM, illustrated in Fig. 6(b), the spin-polarized current is generated by passing a charge current through a fixed magnetic layer. While offering a tremendous advantage over the toggle MRAM, the STT-MRAM still requires a large write current and has a potential endurance issue due to the breakdown of the MgO barrier layer which separates the two magnetic elements (reference and free layer) of the MTJ [51]. In 2010, Miron et al. demonstrated that a spin current generated by passing charge currents through a non-magnetic heavy metal layer is capable of manipulating the magnetization of an adjacent magnet [52]. Their work formed the basis of the spin-orbit torque MRAM (SOT-MRAM) [52-55]. For STT operation, a nanopillar-type MTJ is required to observe the current-induced switching effect, the fabrication of which is very challenging in a typical academic institute; SOT devices, in contrast, can demonstrate the current-induced effect even at micrometer-scale widths, owing to the very thin film thickness involved in lateral current injection. This relaxed device-patterning requirement attracted various academic institutes to SOT research. The device size of a three-terminal SOT cell is larger than that of a two-terminal STT cell, but is still half the size of a modern static random access memory (SRAM) cell. The SOT device also has an advantage over STT devices in terms of speed: the incubation time in STT, due to the parallel alignment of the incoming spins and the magnetization of the free layer, is absent in perpendicular SOT devices. The SOT geometry offers another advantage: the spin currents in SOT devices can be very large, as an electron interacts with the FM many times, taking advantage of lateral scattering in the SOT device. In fact, Yoda et al. reported that the switching efficiency of SOT is 3-4 times higher than that of STT using an in-plane MTJ device [56]. In the remaining part of this section we discuss the major developments in SOT-MRAM devices, beginning with a brief introduction to SOTs.
A SOT device heterostructure typically consists of a SOT source adjacent to a magnetic layer which serves as the data storage unit [57,58] (see Fig. 6(c)). The SOT source is a material with large SOC, for example heavy metals (HM) such as Pt [57,59-61], Ta [61-64], W [65,66], etc. The magnetic layer is typically a metallic ferromagnet (FM) such as NiFe, CoFeB, etc. When a current is passed through the HM, spins accumulate at the interface of the HM and FM. The accumulated spins diffuse into the FM, during which they transfer their angular momentum to the magnetic atoms. The overall result of this process is the manipulation of the FM magnetization and its eventual switching. SOT research has mostly focused on understanding and exploiting the rich SOT physics to develop energy-efficient SOT heterostructures. The spin accumulation, and the subsequent spin current, generated on passing a charge current through a SOT device is generally due to two physical mechanisms. The first mechanism is the SHE [67-73]. The SHE governs charge-to-spin conversion in a material with large SOC. Due to the SHE, the electrons with opposite spins that carry the charge current are deflected in two opposite directions, resulting in a spin current (JS). The direction of JS is orthogonal to both the charge current (JC) direction and the spin polarization (σ) direction, as illustrated in Fig. 7(a). Although predicted as early as 1971 [73], the SHE was not observed until 2004 [74], when it was measured using magneto-optical Kerr microscopy. It should be noted that the spin accumulation due to the SHE is a result of charge currents flowing through the bulk of the HM. The spin separation in the SHE arises either from the band structure of the SOC source (intrinsic SHE) [72] or from asymmetric spin-dependent scattering of the electrons off impurities in the SOC source (extrinsic SHE) [67,70]. The charge-to-spin conversion factor therefore depends on both intrinsic and extrinsic contributions, and it dictates the amount of spin current generated from a given charge current. This conversion factor is called the spin Hall angle (θSH). It should, however, be noted that in many reports θSH is a combined representation of the spin conversion from the SHE and other bulk spin-current sources. Finally, a SOC source which exhibits the SHE also reciprocally converts a spin current into a charge current through a process known as the inverse SHE (ISHE) [75,76]. The second process of charge-to-spin conversion is the interfacial Rashba-Edelstein effect, also referred to as the Rashba effect [17,52,61,77,78]. In a heterostructure with broken inversion symmetry (e.g. SOT devices) an electric field exists at the interface between two different layers. For example, in a SOT device an electric field (E) exists at the interface of the HM and the FM. When an electron flows along this interface, it experiences an effective magnetic field in a direction given by E × p, where p is the electron's momentum (see Fig. 7(b)). Under the effect of this relativistic magnetic field, the electrons at the interface are polarized along E × p. These spin-polarized electrons diffuse into the adjacent magnet and manipulate its magnetization similarly to the SHE. However, unlike the SHE, the Rashba effect is an interfacial effect and does not depend on the current flowing through the bulk of the HM.
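The two mechanisms just described can be summarized compactly (a sketch; the sign conventions for θSH and the exact prefactors differ between references):

$$ \mathbf{J}_S \;=\; \theta_{SH}\,\frac{\hbar}{2e}\;\boldsymbol{\sigma}\times\mathbf{J}_C, \qquad \mathbf{B}_{R} \;\propto\; \mathbf{E}\times\mathbf{p}, $$

where, for the SHE, the charge current JC, the spin current JS and the spin polarization σ are mutually orthogonal, while, for the Rashba effect, B_R is the effective field experienced by an electron of momentum p in the interfacial electric field E.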
In reality, both the SHE and the Rashba effect are present in a typical device, and their relative contributions are not easy to distinguish, even though thickness-dependence studies are often utilized for this purpose. We encourage the reader to refer to the focused reviews on the SHE and interfacial SOC effects for a detailed physical insight into these phenomena [68,69,79,80]. While most of the experimental works evaluating the SHE, the Rashba effect and SOTs involve indirect measurements using an adjacent magnet, the first observation of the SHE was a direct imaging in the semiconductors GaAs and InGaAs using magneto-optical Kerr microscopy [74]. A similar examination of the spin accumulation in metallic HM systems with Kerr microscopy has yielded controversial results: while some groups claimed to have observed the Kerr rotation in the presence of accumulated spins [81,82], others suggested the SHE signal to be too weak to result in any Kerr rotation for these systems [83,84]. It was suggested that the observed signal probably arises from the change in reflectivity of the metal due to heating. With the debate on the validity of MOKE for visualizing the SHE in metals still open, an alternative method, photoconductance, has recently been employed to visualize the current-induced spin accumulation [85,86]. In a photoconductance measurement, a circularly polarized laser is shone on the channel, which results in a voltage difference across it. When a current is also passed through the channel, the generated voltage was observed to have a helicity dependence due to magnetic circular dichroism. Figure 8 shows the spatially resolved photovoltage maps for Bi2Se3 (a topological insulator) and Pt. The helicity-dependent photovoltage polarity is reversed on changing the current direction in the channel. This observation is in line with the reversal of the spin polarization direction due to the change in the electrons' momentum. In the following sub-sections, divided in terms of the approaches used, we discuss the recent advances in the development of SOT devices.

A. Exotic materials as SOT sources

TIs are materials with an insulating bulk and conducting surfaces (topological surface states, TSS) [122,123]. Interestingly, the TSS exhibit spin-momentum locking, in which the spin polarization direction of an electron in the TSS is fixed with respect to its momentum. This results in spin accumulation at the TSS similar to the SHE and the Rashba effect observed in HMs. In spintronics, TI research has been pursued both for the interesting physics and for practical applications in SOTs. In 2014, it was shown that current-induced spins in the TI (Bi0.5Sb0.5)2Te3 can switch a magnetic element [101]. However, this experiment was performed at 5 K, and a magnetically doped TI was used as the switching element instead of a metallic FM. The first room-temperature magnetization switching using a TI was performed on a ferrimagnet (Fig. 9(a)) [107] and on a metallic FM [104]. In the latter, the switching was observed using MOKE microscopy, and a TI thickness-dependence study was also performed. The SOT efficiency in the TI (θTI) as a function of thickness can be divided into three regions (Fig. 9(b)), depending on the dominant source of SOTs at each thickness. While the spin torque from bulk states (current flowing through the bulk) dominates the spin accumulation in Region I, the large enhancement of θTI at smaller thicknesses (Region III) is attributed to the TSS. Pan et al.
have mapped the spin texture of the TSS using a very simple yet effective electrical measurement technique, bilinear magnetoresistance (BMR) [124]. Both in-plane and out-of-plane spin textures were mapped using this method. Figure 9(c) shows a 30° canting angle of the spin with respect to the film plane, as measured using the BMR probe. Another exotic material system for SOT research is the Weyl semimetals [125]. They have been predicted to have a large spin splitting, i.e. Edelstein effect, due to their non-trivial band structure [115]. Current Weyl-semimetal SOT research is largely focused on WTe2. The crystal structure of the Weyl semimetal WTe2 has only one mirror plane and lacks two-fold rotational symmetry (Fig. 10(a)). Therefore, the current-induced spin accumulation response in WTe2 is anisotropic. In fact, when the current is passed along a low-symmetry axis (the a-axis), a sizable out-of-plane spin accumulation is detected [118,119]. This behavior was evident from the asymmetric spin-torque ferromagnetic resonance (ST-FMR) spectra when the magnetic field was applied in two opposite directions (Fig. 10(b)). This unique property makes WTe2 ideal for use in SOT devices with perpendicular magnetic anisotropy, for which a few other field-free SOT switching schemes have also been proposed [126-130]. Current-induced magnetization switching using WTe2 has recently been demonstrated, as shown in Fig. 10(c,d), and an in-plane θSH which increases up to a maximum value of 0.8 with increasing WTe2 thickness has been reported [119]. Interestingly, the power required to switch the magnetization was 19 times smaller in WTe2/Py than in Bi2Se3/Py, and 350 times smaller than in Pt/Py, due to the high efficiency and low resistivity of WTe2. Apart from TIs and Weyl semimetals, several other 2D materials such as MoS2 and WSe2 have also been shown to have a moderate charge-to-spin conversion efficiency [131,132]. 2DEGs at the interface of LaAlO3 and SrTiO3 also support a giant charge-induced spin accumulation [133-137]. While the above-mentioned exotic materials have been promising in terms of their charge-to-spin conversion properties, the biggest hindrance towards their practical application is the difficulty of fabricating them over a large and uniform area, as well as their large resistivity, which leads to a current-shunting issue. The single-crystal TIs, semimetals and other layered materials in the majority of the reports discussed above have been either exfoliated or fabricated using sophisticated molecular beam epitaxy (MBE). Only very recently have sputter-deposited TIs been shown to produce large SOTs capable of switching an adjacent magnet [106,138]. While sputter-deposited exotic materials are expected to be more technologically relevant, it is still not clear whether topological features are present, and what role they play, in structures with small crystalline clusters and a non-stoichiometric, inhomogeneous composition. Future research efforts should focus on fabricating these novel materials in a fast, efficient and reliable way.
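The current-shunting concern raised above can be estimated with a simple parallel-conduction model (a sketch, assuming uniform layers and neglecting interface resistance):

$$ \frac{I_{SOT}}{I_{total}} \;=\; \frac{t_{SOT}/\rho_{SOT}}{\,t_{SOT}/\rho_{SOT} + t_{FM}/\rho_{FM}\,}, $$

where t and ρ denote the thickness and resistivity of each layer. A highly resistive spin source carries only a small fraction of the drive current, so a large intrinsic θSH does not automatically translate into an energy-efficient device.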
B. Engineering the magnetic layer

While the SOTs nominally arise from the SOC source such as the HM, the magnetic layer can also serve as a secondary source and modulator of the SOTs. Among the different types of magnetic materials, ferromagnets, which have a positive exchange interaction between the individual atoms or layers, have been the most widely explored for SOTs. However, there is also another class of magnetic materials comprising antiferromagnets (AFMs) and ferrimagnets (FIMs). These materials typically consist of two different atomic sub-lattices that prefer to align their spins opposite to each other due to the negative exchange interaction between the two sub-lattices. While AFMs have zero net magnetization due to the equal and opposite magnetizations of the constituent sub-lattices [139,140], FIMs have a non-zero magnetization due to the unequal magnetizations of the constituent elements. The major advantage of AFMs and FIMs over FMs is their robustness against external magnetic fields. This makes them thermally very stable, resulting in a longer retention time compared to FMs. Thermal stability in particular is a very important factor for scaling spintronic devices down to the nanometer regime. In 2016 it was first shown that, by passing a current through an AFM with locally broken inversion symmetry such as CuMnAs [141], the antiferromagnetic domains can be switched [142]. The switching mechanism in CuMnAs is illustrated in Fig. 11(a). A current-induced local magnetic field is generated around the individual Mn atoms due to the inverse spin galvanic effect [143,144], which requires broken inversion symmetry in the given system. While the CuMnAs crystal as a whole has inversion symmetry, the local environments of the two sub-lattices formed around the Mn atoms have broken inversion symmetry. This results in a staggered magnetic field of opposite polarity around these atoms, as shown in Fig. 11(a). The staggered magnetic field switches the AFM Néel vector as a whole. The switching of the Néel vector was detected using the anisotropic magnetoresistance (AMR). Figure 11(b) shows the varying AMR signal on application of current pulses. Later, AFM switching was also demonstrated in Mn2Au [145,146]. Since a single current pulse moves the AFM domains only very slightly, multilevel memory cells were demonstrated with CuMnAs [147]. The ultrafast dynamics of AFMs allow their switching with picosecond current pulses [148]. The AFM domains have been imaged using x-ray magnetic linear dichroism photoemission electron microscopy (XMLD-PEEM), as shown in Fig. 12(a-c) [149]. Chen et al. have recently shown that the magnetic anisotropy of the Néel vector in Mn2Au deposited on a ferroelectric substrate, PMN-PT, can be switched between two orthogonal directions [150]. This results in a ratchet-like switching behavior (see Fig. 12(d)). Apart from inducing switching, it has also been proposed that the staggered relativistic SOT fields can result in a very high domain wall velocity in these AFMs [151]. Other than AFMs with broken inversion symmetry, there have also been demonstrations of Néel vector switching in insulating AFMs such as NiO [152-154]. In these works, the SOTs are generated from an adjacent heavy metal rather than from the AFMs themselves. While the AFMs present an exciting and stable system for spintronic applications, the detection of their magnetization state remains challenging: since the AFMs are read electrically using the small AMR rather than the TMR, they are incompatible with a magnetic tunnel junction based readout. Synthetic antiferromagnets (SAFs) offer an alternative. Synthetic antiferromagnetism arises from the interlayer Ruderman-Kittel-Kasuya-Yosida (RKKY) coupling between two ferromagnetic layers separated by a spacer such as Ru [156-160]. Compared to AFMs such as CuMnAs, SAFs are easier to fabricate and do not require a special crystalline substrate.
Figure 13(a) shows a measurement of a very large domain wall velocity (750 m s−1) obtained in a SAF formed by Co/Ni/Co multilayers coupled through a Ru spacer [161]. The negative exchange coupling is responsible for this high domain wall velocity, which increases with the degree of compensation (Fig. 13(b)) between the two coupled layers. It was also found that the SOT switching efficiency in a completely compensated SAF made with Co/Pd FM layers was significantly higher than in the FMs alone [162]. Recently, it was shown that the SOT in a Pt/Co/Ir based SAF system is ~15 times larger with AFM coupling than without it [163]. This report suggests that interface-induced phenomena, apart from the negative exchange torque [161,164], are possibly responsible for such a large SOT efficiency in the SAF. The amplification of the SOT near compensation cannot be fully explained by the reduced magnetization alone; the drastic amplification of SOTs in compensated ferrimagnets has additionally been attributed to an enhanced negative exchange torque [162-164]. The high efficiency of SOTs in rare earth-transition metal (RE-TM) FIMs, and the fact that these materials derive their perpendicular anisotropy from the bulk, helps in developing devices with a thick and thermally stable magnetic layer. Roschewsky et al. have shown current-induced magnetization switching of a 30 nm thick GdFeCo layer which has a thermal stability of ~100 kBT, where kB is the Boltzmann constant and T is the temperature [172]. The faster intrinsic dynamics due to the negative exchange coupling also result in ultrafast switching in the FIMs. Figure 13(c) compares the switching energy and switching duration of the FIMs with those of the FMs [173], from which it can be seen that the FIMs are around 1-2 orders of magnitude better than the FMs on both of these parameters. The ultrafast dynamics in FIMs [174,175] also lead to a very high current-induced DW velocity, up to 5.7 km/s [173], when compared to the FMs or even the SAFs (Fig. 13(d)) [176,177]. The high DW velocity finds application in DW-motion based magnetic devices. It has also been reported that the FIMs and AFMs support longer spin coherence lengths [178,179]. Figure 14(a) illustrates the alternating spin alignment in a ferrimagnet that assists spin-information transfer over a longer distance: due to the opposite exchange fields in the alternating sublattices, spin dephasing is partially compensated. The SOTs therefore act over a larger thickness in the FIMs, as shown in Fig. 14(b), exhibiting a bulk-like SOT [178] compared to their limited penetration depth (< 1.2 nm) in the FMs [180,181]. A disadvantage of RE-TM FIMs is that the RE element is prone to oxidation. In MRAM fabrication steps in which the die is exposed to different temperatures and chemicals, preventing oxidation of the FIMs can be challenging. The temperature sensitivity of the magnetization of a FIM, and of the related properties, should also be considered when designing memories based on them. For example, this temperature sensitivity could be a critical issue in varying-temperature environments such as automotive applications. Considering that FIMs have been successfully commercialized for magneto-optical disks, the above oxidation and temperature issues could likewise be addressed in magnetic memory applications.
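To put the quoted stability of ~100 kBT into perspective, the Néel-Arrhenius law gives the mean retention time of a single-domain magnet (a worked estimate, assuming an attempt time τ0 ≈ 1 ns):

$$ \tau \;=\; \tau_0\,e^{\Delta} \;\approx\; 10^{-9}\,\mathrm{s}\times e^{100} \;\sim\; 10^{34}\,\mathrm{s}, $$

vastly exceeding the ≈10-year (~3 × 10^8 s) retention typically demanded of non-volatile memories; even Δ ≈ 60 already gives τ ~ 10^17 s at a fixed temperature.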
While in this section we have mainly discussed research works concerning antiferromagnetically coupled materials, there are analogous efforts to exploit other interesting magnetic systems such as multilayers [182,183], magnetic insulators [184,185], etc. Apart from a faster speed and lower power consumption, the integration of new magnetic systems into the MTJ-based MRAM architecture is a criterion that is important to meet for real applications.

C. Engineering the SOT heterostructure stack

The simplest SOT device consists of a SOC source neighboring a magnet. In this section we discuss alternative SOT heterostructures that enable an enhanced efficiency. One approach is to combine two different HMs, as shown in Fig. 15(a) [187]. It was found that a combination of Pt and W results in the largest effective θSH, of 0.45. Additionally, it was suggested that the thicknesses of the two HM layers are critical for obtaining the largest SOT efficiency, as they dictate the current shunting in the individual layers. It has also been shown that a bilayer of two heavy metals (Pt and Ta) can be used to continuously tune both the magnitude and the direction of the SOTs [188]. The interface between the HM and the FM is critical in determining the magnitude of the SOTs in a given device. The quality of this interface decides the amount of spin memory loss and the spin transparency, and hence the magnitude of the spin current experienced by the FM [189-191]. For the above reasons, the experimentally obtained SOT strengths (e.g. θSH) are effective values that include various bulk and interface effects. It was shown that by dusting a thin layer (~0.5 nm) of Hf at the Pt/CoFeB interface, the spin-torque efficiency could be enhanced up to 0.12, which is nearly double that of the Pt/Py system [192]. An accompanying reduction of the magnetic damping, to α = 0.012, was also reported, owing to the suppression of spin pumping. The above approach was used to achieve a low switching current density of 5.4 × 10^6 A/cm^2 in an in-plane MTJ which had Hf dusting on both interfaces of the free layer [193]. An interfacial Ti layer (~1 nm) between Pt and CoFeB also helps in reducing the switching current density by a factor of three [194]. Recently, it has been suggested that a lower value of the interfacial SOC at the HM/FM interface results in a lower attenuation of the spin currents by decreasing the spin memory loss [195]. Capping the HM/FM stack with a Ru layer has been shown to result in an enhanced value of the SOTs [181]. This enhancement is attributed to spin-current absorption in the Ru layer, as illustrated in the schematic of Fig. 15(b). The SOT effective field increases by a factor of three for a Ru thickness of 0.6 nm when compared to a Pt/Co/Ni/Co device without any Ru layer on top. In another variation of the SOT devices, it has been demonstrated that an in-plane magnetized FM layer (Fig. 15(c)) can perform charge-to-spin conversion similar to a HM [196-198]. The resulting spin current in this heterostructure is capable of switching a perpendicular CoFeB layer without any assisting in-plane magnetic field (see Fig. 15(d)), due to a slight out-of-plane alignment of the generated spins. However, in this heterostructure the in-plane magnet should be immune to external magnetic fields, so an in-plane magnet with a high coercivity is required. Moreover, repeated switching events can perturb the magnetization direction of the in-plane magnet, thereby affecting the long-term stability of the memory device.
Inadvertent switching of the in-plane magnet can result in an opposite sense of the switching loop and can lead to unreliable memory operation. We would like to bring to the reader's attention that, apart from the aforementioned works on engineering SOT heterostructures, there is a dedicated research effort on engineering SOT heterostructures for field-free SOT switching of perpendicular magnetization, which is of significant importance for applications; the details can be found in a recent review [53]. Even though many different schemes have been proposed for field-free switching, there is no clear avenue which can satisfy the requirements of MTJ integration and large-wafer-size scaling with homogeneous device-to-device uniformity, and further research is required to address this important issue.

D. SOT modulation by oxygen incorporation

Oxygen plays an important role in determining many magnetic properties. For example, the incorporation of oxygen in metallic FMs such as Fe, Ni and Co induces a negative exchange interaction, resulting in the formation of AFMs such as Fe2O3, NiO and CoO. For applications in nanodevices, a strong perpendicular magnetic anisotropy (PMA), which is vital for device scalability, is derived from the orbital hybridization of the magnetic atoms with the oxygen at the interface [199,200]. As a result, the PMA has a strong correlation with the amount of oxygen at the magnetic interfaces. In fact, it was shown that by migrating the interfacial oxygen using an electric field, the PMA can be significantly altered in a non-volatile and reversible way [201-203]. In the field of SOTs, it was found that the oxide capping layer plays a role in determining the magnitude of the SOTs for a given device [204]. The field-like and damping-like torques were found to be 10 and 6 times larger, respectively, for a MgO-capped Hf/CoFeB device compared to a TaOx-capped one. This variation was attributed to the different interfacial electric field at the oxide interface and the resulting Rashba torque [87]. This suggests that not only the bottom HM/FM interface but also the top FM/capping interface should be considered when estimating the overall SOT effect in a given structure. Incorporation of oxygen into the magnetic layer is another way of manipulating the SOTs. It was first shown that when oxygen is dynamically introduced into the Co layer of a Pt/Co/GdOx device by the application of a gate voltage, the resultant SOT is an order of magnitude larger compared to the unmodified device [205]. While an enhancement of the SOT on Co oxidation is expected due to the reduced Co magnetization, the observed amplification of the SOT in this work was found to be disproportionately large compared to the reduction in the magnetization. This hints towards a role of oxygen in modifying the interfacial SOTs. A similar enhancement of SOTs was also reported in a HfOx-capped device in which the gate voltage was applied using an ionic liquid [206]. Hasegawa et al. recently showed that introducing an oxidized Co layer at the Pt/Co interface yields a 4- and 10-fold enhancement of the longitudinal SOT effective field (HL) and the transverse SOT effective field (HT), respectively [207]. Apart from modulating the magnitude of the SOTs, an oxidized Co or CoFeB layer on a thin Pt layer results in a reversal of the overall spin accumulation direction, i.e. of the effective θSH polarity [208,209]. In the experiment of Ref. [208], the degree of oxidation of the magnetic layer was set by the thickness of a SiO2 capping layer.
Interestingly, a SiO2 thickness of 1.5 nm corresponds to the native oxide thickness of SiO2 when it is exposed to air. The opposite direction of the SOT fields resulted in an opposite current-induced switching polarity, as measured by the anomalous Hall resistance (RH) in Fig. 16(a). It was found that the thickness of the SiO2 capping determined the amount of oxygen in the magnetic layer, and hence the polarity of the SOTs. Recently, it was shown that this SOT polarity control can be achieved dynamically and reversibly in a single Pt/Co/GdOx device using electric-field assisted oxygen migration [209], as illustrated in Fig. 16(b), in which the GdOx works as an oxygen reservoir, sending and receiving oxygen ions under a negative and positive top-gate bias voltage, respectively. The SOT polarity reversal has been attributed to a competition between the bulk SHE and the interfacial Rashba effect: an oxidized Pt/Co or Pt/CoFeB interface has a larger Rashba torque with a polarity opposite to that of the spin Hall torque, and therefore the device has a negative SOT polarity. The critical level of oxidation of the Co layer is found to be ~30 to 40% in order to observe a SOT sign change; however, over-oxidation (> 50%) can cause an irreversible formation of CoO, which cannot be reduced back to Co by applying a gate bias. In addition, the oxygen at the interface was fine-tuned with the electric field to program the SOT device with a range of effective spin Hall angles, as shown in Fig. 16(c). Such a sign-change behavior with oxygen cannot be understood from spin Hall physics alone. A recent work revealed that the experimentally reported oxygen-induced sign reversal of the SOT in Pt/Co bilayers is due to a significant reduction of the majority-spin orbital moment accumulation on the interfacial HM atoms [210]. One of the drawbacks of oxygen-migration based spintronic devices is their slow speed of modulation. Unlike electrons, the oxygen ions migrate relatively slowly, requiring between a few ms and tens of seconds to change the state of a given device. Optimization of the gate-oxide thickness and material is essential to improve the performance of these devices. For example, yttria-stabilized zirconia was recently used as a gate oxide to achieve anisotropy modulation within ms, which is 100 times faster than any of the previously demonstrated magneto-ionic devices [211]. Due to their slow speed, these devices are not suitable for near-core memories, but they can be exploited for flash replacement and field-programmable gate arrays (FPGAs). In addition to the slow speed, the constant flux of oxygen ions inside these devices may result in a breakdown of the oxide, or even of the magnet, after many cycles of operation. Since the process of oxygen migration is stochastic, there is cycle-to-cycle variability in the device operation. All the above concerns with oxygen-migration based spin devices, which share fundamental challenges with resistive random-access memory (RRAM) due to their common ionic-migration nature, need to be addressed before practical application. Oxidation of the SOC source, or HM, has also been used in a few works to modulate the SOTs. A large θSH of −0.5 was reported when 12% of oxygen was incorporated into tungsten [212]. The θSH remains relatively insensitive to further increases of the oxygen content (Fig. 17(a)), even though the material properties of W change with oxidation. The giant θSH was attributed to β-phase stabilization of the W and an increase in the interfacial SOTs.
However, the high resistivity of W, which further increases with oxygen incorporation, will result in a large power consumption when these devices are used in memories. Recently it was reported that oxidized Pt, which is an insulator, shows a θSH comparable to that of normal Pt, and is even capable of inducing efficient magnetization switching [213,214]. While W and Pt are heavy metals with large SOC, a similar enhancement of the SOTs on both surface and bulk oxidation of Cu was observed in a series of studies (Fig. 17) [215,216]. The SOTs in oxidized Cu were attributed to an intrinsic Berry curvature and a modification of the orbital hybridization [216]. Overall, we see that oxygen in the different layers and interfaces of a SOT device plays a vital role in enhancing and manipulating the SOTs. Future research efforts should aim toward a more dynamic modulation of this oxygen content in a single device using gate biasing. Apart from the spin memory devices discussed in this section, which are mainly based on the switching of a magnetic element, there have been parallel efforts to enable magnetic memories using alternative schemes. One such methodology involves skyrmions on race tracks [217]. A race track is a magnetic channel that can be used to store memory bits in the form of magnetic domains [218]. These bits, or magnetic domains, can be moved with the help of spin currents. In a skyrmion race track memory, the magnetic domains are replaced by skyrmions, which have greater topological protection. In addition, the small size of skyrmions and the ease with which they can be moved by a current hold promise for denser, lower-energy storage. The spin current to move the skyrmions can be applied using either STT or SOT. However, there are many challenges that are currently being addressed in the field of skyrmion devices before a reliable memory can be realized. The corresponding research primarily focuses on the stabilization of skyrmions, their energy-efficient and linear movement along a magnetic track, and low-noise reading. More details of these works can be found in focused reviews [38,39].

IV. Spin devices and non-von Neumann computing

Present-day computing systems are based on the von Neumann architecture, in which the memory and processing units are separated and information processing is carried out serially. This architecture and computing methodology have fueled the information technology revolution for the past few decades. However, the massive increase in the amount of data accompanying the recent rise of interconnected devices necessitates a new type of computing scheme that can efficiently interpret the data, similar to a human brain. To this end, neuromorphic or brain-inspired computing aims at developing devices and circuits that can perform tasks involving learning, training, recognition and cognitive ability. In addition, there is devoted research on alternative computing systems such as Ising machines and quantum computing, which can perform certain optimization tasks at a much faster speed than modern computers. In this section, we will discuss recent progress in the spintronic devices and systems that have applications in the above-mentioned non-von Neumann computing methodologies. In the area of spin-based neuromorphic computing hardware, a variety of synapse and neuron models have been proposed based on combinations of magnetic domain walls, MTJs and SOTs [219].
The earliest proposal for a spintronic synapse, shown in Fig. 18(a), has a magnet containing a DW acting as the synapse, which is connected via a non-magnetic channel to a magnetic neuron [220]. The position of the DW in the synaptic magnet determines the spin polarization of the current when a voltage is applied to this magnet. The spin polarization is therefore an analog function of the DW position, and thereby represents the synaptic weight. During the write operation, a current passed vertically through the synaptic magnet carries the weighted information in the form of its degree of spin polarization. This spin-polarized current, which represents a potential stimulus at the input of the neuron, can be used to switch an adjacent magnet that acts as the neuron. Many of these synaptic magnets can be connected as fan-in to a single neuron, which receives the weighted sum of the spin currents from these inputs, as shown in Fig. 18(b). In a later proposal, Sengupta et al. proposed a synaptic design which has a DW integrated with a MTJ, as shown in Fig. 18(c) [221]. In this structure, the weight is written by SOT-induced DW motion: the DW moves transverse to the channel length when a current is passed from terminal C to D, as illustrated in the figure. The position of the DW in the free layer of the MTJ determines its conductance, which can be read as the synaptic weight. A parallel alignment of the free layer with respect to the reference layer results in a high conductance, or maximum weight; an antiparallel alignment between the two layers is the state of minimum weight due to the low conductance of the MTJ. The rest of the conductance states, lying between these two extremes, are determined by the DW position in the free layer. A similar device structure, but without the extended free layer and HM channel (see Fig. 18(d)), was also proposed to perform identical synaptic functions [222]. In this design, the current is applied along the length of the MTJ and the DW moves along the current direction. The proposed device can also emulate a neural transfer function when connected with a reference MTJ and a transistor, as shown in Fig. 18(e). Like the schemes discussed in connection with Fig. 3(a), the above proposals are based on idealized behaviors of the DW, the magnet and the spin interconnects. The superposition of spin currents from different fan-ins, as shown in Fig. 18(b), has not been demonstrated yet. Moreover, precise optimization of the interconnect lengths and materials is also required to avoid spin decay. For DW-based synapses, the nano-dimensions of future devices will leave room for only very few analog states. For further details on potential spin-based hardware solutions for neuro-computing we refer the reader to a focused review [219]. On the experimental side of spintronic neuromorphic hardware, Lequeux et al. have demonstrated a memristive synapse based on a DW in a MTJ (Fig. 19(a)). A problem with this device is the reliable control of the DW: the motion of the DW is not predictable, due to the inhomogeneity of the material and thermal effects. While a linear weight tunability is desirable for synapses in ANNs, the DW synapse has a non-linear weight programming due to the arbitrary distribution of the pinning sites. A series of MTJs with a DW moving under them has been used to implement a precise and linear weight generator [224]. The schematic of this device and the corresponding scanning electron microscope image are shown in Fig. 19(b). A variation of the MTJ device areas produces a non-linear activation function, as shown in Fig. 19(c).
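As a toy illustration of the DW-MTJ synapse concept discussed above, the mapping from DW position to device conductance, i.e. synaptic weight, can be sketched in a few lines (a minimal model with hypothetical parameter values, not the actual designs of Refs. [221,222,224]):

```python
# Toy model of a domain-wall MTJ synapse: the DW position x (0..L) splits
# the free layer into parallel (P) and antiparallel (AP) regions, and the
# device conductance -- the synaptic weight -- interpolates between the
# P and AP limits as the DW moves.

G_P, G_AP = 1.0e-4, 0.5e-4   # hypothetical P/AP conductances (siemens)
L = 100.0                    # free-layer length (nm), hypothetical

def conductance(x_dw: float) -> float:
    """Synaptic weight for a DW at x_dw nm from the AP-aligned end."""
    x = min(max(x_dw, 0.0), L)        # the DW cannot leave the free layer
    return (x / L) * G_P + (1.0 - x / L) * G_AP

def potentiate(x_dw: float, pulses: int, dx: float = 5.0) -> float:
    """SOT-driven programming: each current pulse nudges the DW by dx."""
    return min(x_dw + pulses * dx, L)

x = 0.0
for step in range(5):
    x = potentiate(x, pulses=4)
    print(f"step {step}: G = {conductance(x):.2e} S")
```

In a real device the DW advances in discrete, somewhat stochastic hops between pinning sites, which is precisely the non-linearity that the multi-MTJ weight generator of Fig. 19(b) was designed to linearize.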
An exchange-coupled AFM/FM SOT device presenting analog switching states, like a biological synapse, has also been demonstrated (Fig. 19(d)) [225]. The analog switching behavior is due to the pinning of domains by the AFM grains. In this work, the authors demonstrated an associative memory operation using 36 of these SOT devices integrated with a field-programmable gate array. Recently, a magnetic synapse has been demonstrated with a Pt/Co/GdOx SOT device [226]. Its working principle involves electric-field induced reversible oxidation and reduction of the Co layer, thereby modulating its magnetization. The magnetization of the Co layer represents the synaptic weight, measured using the anomalous Hall resistance (RAHE), as shown in Fig. 19(e). Synaptic functionalities like potentiation, depression, and spike-rate- and spike-timing-dependent plasticity were demonstrated with this device (Fig. 19(e)). Skyrmion-based synapses, in which the weight is encoded in the number of skyrmions in the device, have also been pursued (Fig. 19(f)). However, the skyrmion readout signal is generally very small, which poses a challenge for the readout operation and the dynamic range of these synapses. While synaptic functionalities using spintronic elements have been implemented by a few groups, a hardware realization of the neuron has been rather challenging and limited. A few experiments have used MTJs to demonstrate a stochastic spiking neural function. In one scheme, a threshold current was used to drive the MTJ into an unstable state that results in stochastic current spikes [228]. In a recent experiment, voltage control of magnetic anisotropy (VCMA) was used to enable a stochastic switching behavior of the free layer of the MTJ, as shown in Fig. 20(a) [229]. In this work, the perpendicular anisotropy of the free layer was modified by a bias voltage applied to the MTJ. The bias voltage lowers the energy barrier, leading to a stochastic switching of the free layer. The switching probability therefore has a sigmoidal relation with the applied bias. While a stochastic device with a sigmoidal transfer function is suited to an ANN, a leaky integrate-and-fire neuron is necessary to implement a spiking neural network. Thermally assisted current-induced SOT switching has been exploited to enable this integrate-and-fire function [230]. Current pulses arriving at a high frequency integrate the temperature-current budget above the switching threshold of the SOT device, because little of the heat dissipates between pulses. This results in magnetization switching, or neural firing (probability of switching Psw = 1). For current pulses arriving far apart, the heat generated by the first pulse has dissipated by the time the second pulse arrives, and the SOT device does not switch (Psw = 0). For pulses of intermediate frequencies the switching is stochastic (0 < Psw < 1); Figure 20(b) shows the distribution of the switching probability as a function of the pulse frequency for the given SOT device. This behavior is similar to that of a biological neuron, in which the potential spikes arriving at the neuron integrate in a leaky fashion, resulting in neural firing when the membrane potential exceeds a pre-determined threshold. It should, however, be noted that while the above SOT neuron can integrate and fire, it does not automatically reset like a biological neuron; this functionality is yet to be achieved using a spin device.
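The thermally assisted integrate-and-fire behavior described above can be caricatured with a leaky-integrator model (a sketch with hypothetical parameters; the device physics of Ref. [230] is of course richer):

```python
import math

# Leaky integrate-and-fire caricature of a thermally assisted SOT neuron:
# each current pulse deposits heat, the heat leaks away between pulses,
# and the device "fires" (switches) once the accumulated temperature
# rise crosses a threshold.

TAU = 10.0            # thermal leak time constant (ns), hypothetical
HEAT_PER_PULSE = 1.0  # temperature rise per pulse (a.u.), hypothetical
THRESHOLD = 2.0       # switching threshold (a.u.), hypothetical

def fires(pulse_period_ns: float, n_pulses: int = 20) -> bool:
    """True if the neuron switches within n_pulses at this pulse rate."""
    temp = 0.0
    for _ in range(n_pulses):
        temp = temp * math.exp(-pulse_period_ns / TAU) + HEAT_PER_PULSE
        if temp >= THRESHOLD:
            return True   # heat integrated faster than it leaked: fire
    return False          # pulses too sparse: the heat dissipates first

for period in (1.0, 5.0, 20.0):
    print(f"pulse period {period:4.1f} ns -> fires: {fires(period)}")
```

Closely spaced pulses fire the neuron while widely spaced ones do not, reproducing the frequency dependence of Psw in Fig. 20(b); the missing ingredient, as noted above, is an automatic reset after firing.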
Apart from the research on spintronic counterparts of the biological synapse and neuron, there are some interesting works on system-level implementations of spin devices for recognition and optimization tasks. Torrejon et al. have used a spin-torque oscillator (STO), which converts an input dc current into a voltage oscillation through the TMR, for spoken-vowel recognition [231]. A spin-torque oscillator is basically a MTJ in which the magnetization oscillates in a self-sustained fashion around the effective magnetic field when a balance is achieved between the magnetic damping and the applied current-induced torque. The oscillation of the magnetization results in an oscillating TMR signal. The frequency of these oscillations is a function of the effective magnetic field experienced by the magnet and its gyromagnetic ratio. A STO is therefore characterized by its oscillation frequency, which can be tuned by varying the external magnetic field. A STO combines non-linearity and memory in a single device [232]: the non-linearity lies between the applied current and the amplitude of the generated voltage, while the memory function arises from the dependence of the output on past input currents. After pre-processing, the input speech signal was applied in the form of a current to the STO, while the output was recorded in the form of varying voltage signals. A comparison of the recognition performance with and without an oscillator in Fig. 21 clearly shows the improved performance of the STO-based system. In a later work, four of these oscillators were electrically coupled for a vowel recognition task [233]. The electrical coupling was achieved by physically connecting the oscillators with wires. The oscillators were synchronized with two external microwave signals, as shown in Fig. 22(a). Each oscillator synchronizes itself with an external microwave frequency in a different range, as shown in the right panel of Fig. 22(a), and the range of synchronization can be tuned by varying the bias current applied to the individual oscillator. The spoken vowels were coded as a function of the two external input microwave frequencies (fA, fB). The frequency-distribution map of each vowel for different speakers, overlapped on the oscillator synchronization map, is shown in the left panel of Fig. 22(b). The color in the synchronization map represents the oscillator or oscillators synchronized with the two microwaves, as indicated in the legend bar to the right of the figure. For example, (1A) represents the 1st oscillator synchronized with microwave source A, and (2A, 4B) represents the 2nd and 4th oscillators synchronized with microwave sources A and B, respectively. Since the goal of this work was to recognize vowels independently of the speaker, the learning involved adjusting the bias currents through the oscillators so that the points for each vowel are contained in a single synchronization region. The synchronization maps after different training steps are shown in the middle and right panels of Fig. 22(b). Comparing the first and last panels of Fig. 22(b), it can be seen that after sufficient training the spoken vowels, which were initially distributed over more than one region of the synchronization map, eventually reside in approximately a single synchronization region. A reasonable recognition rate of 89% was achieved after around 50 training steps. The disadvantage of using oscillators for neuromorphic computing is their limited scaling potential. In addition, the frequency spectrum of spintronic oscillators is not very sharp and has a large full width at half maximum, which hinders their operational reliability. Very recently, the stochastic behavior of a thermally unstable MTJ has been utilized to develop probabilistic bits (p-bits) for integer factorization [234]. A p-bit is an entity that fluctuates between two binary states with a probability that can be controlled by an input [235,236].
The relation between the input I(t) and the output m(t) of a p-bit is given by

$$ m(t) \;=\; \mathrm{sgn}\!\big[\tanh I(t) + \mathrm{rand}(-1,1)\big], \qquad (1) $$

where rand(−1, 1) is a random number uniformly distributed between −1 and 1 [235]. The stochastic nature of the p-bit finds applications in probabilistic computing. The MTJ for a spintronic p-bit is designed by optimizing its volume and free-layer thickness so that the energy barrier between its two bistable states is low enough to be surpassed by the ambient thermal energy. This results in the MTJ switching stochastically. Initially, current-controlled MTJ p-bits were proposed [235,237], as shown in Fig. 23(a). These three-terminal p-bits are driven by SHE torques induced by input currents flowing in the heavy metal below the MTJ. The output can be read by passing a small read current through the MTJ, which is fed into a buffer stage; the CMOS buffer stage provides gain and isolation at the output. For input-current magnitudes greater than the threshold value, the MTJ free layer pins into one of the stable states, resulting in an output of +1 or −1. For values of the input current in between, however, the MTJ behaves stochastically and the instantaneous value of the output fluctuates between −1 and +1 as determined by Eq. (1) (a minimal numerical sketch of Eq. (1) is given at the end of this passage). In a voltage-controlled scheme, the stochastic MTJ is connected to an n-type transistor (NMOS), as shown in Fig. 23(b) [234]. The transistor also has a resistance (Rsource) at its source terminal, which limits the current through the MTJ to a value at which its switching probability is 0.5 [234]. The output of the transistor is connected to a comparator which determines the final output (VOUT) of the p-bit. While the instantaneous VOUT of the p-bit takes either of the rail-to-rail values, the time-averaged output (over 700 ms and 2000 sampling points) is sigmoidal with respect to VIN, as shown in Fig. 23(c). For performing factorization, these p-bits are interconnected (though not physically) such that the input of each p-bit is a function of the outputs of all the other bits. As shown in Fig. 23(d), the network then spends most of its time in the states representing the correct factors; occasionally the p-bits end up in states representing the numbers (5, 5), (7, 7), etc., although with a very low probability. In this work, spin-based probabilistic computing was suggested to offer an energy benefit of 10 times and an area advantage of 300 times compared to CMOS-based alternatives. However, whether the proposed scheme can be scaled to a real application is questionable (e.g. the most common form of 256-bit encryption corresponds to 78-digit numbers). It should also be noted that the network weights were still implemented in a microcontroller in this work; going forward, these should also be implemented in hardware using memristive or capacitive networks. Other applications of p-bit enabled stochastic circuits include machine-learning inspired tasks such as Bayesian inference and the acceleration of learning algorithms. Quantum-inspired applications such as invertible Boolean logic (e.g. finding the inputs of a logic gate for a given output) and optimization problems like the travelling salesman problem can also be solved using a network of p-bits [238]. Current-controlled p-bits can be used to implement the binary activation function in a binary neural network (BNN) in combination with memristors that implement the weights [239]. A more detailed review of these applications can be found in [238]. There are many other alternative computing methodologies and application-specific hardware that can be enabled in an energy- and area-efficient way with spin devices.
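The minimal numerical sketch of the p-bit response of Eq. (1), promised above, is as follows (an idealized software p-bit, not a circuit model of Ref. [234]):

```python
import math
import random

def p_bit(I: float) -> int:
    """Instantaneous p-bit output per Eq. (1): sgn[tanh(I) + rand(-1, 1)]."""
    return 1 if math.tanh(I) + random.uniform(-1.0, 1.0) > 0.0 else -1

def average_output(I: float, samples: int = 20000) -> float:
    """Time-averaged output; for uniform rand(-1, 1) this tends to tanh(I)."""
    return sum(p_bit(I) for _ in range(samples)) / samples

for I in (-2.0, -1.0, 0.0, 1.0, 2.0):
    print(f"I = {I:+.1f}: <m> = {average_output(I):+.3f}"
          f"  (tanh I = {math.tanh(I):+.3f})")
```

A strongly negative (positive) input pins the output at −1 (+1), while intermediate inputs yield the tunable randomness that probabilistic computing exploits; the sigmoidal time-average mirrors the measured VOUT-VIN behavior of Fig. 23(c).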
BNNs, which use binary values for the weights and activations, achieve almost the same degree of accuracy as normal neural networks while being more resource-efficient in terms of storage, speed and power. During inference, BNNs use the XNOR operation, which can be implemented using a single SOT device: the two inputs of the write driver of a SOT cell can serve as the inputs of the XNOR gate [240] (a software sketch of this binary kernel is given at the end of this passage). Non-volatile memories are also used for in-memory computation, which saves a large amount of the power and time that is otherwise spent due to the separation of the memory and computing units in a von Neumann architecture. This in-memory computing capability has been proposed for implementing a two-bit AND gate used for performing the bit convolutions in BNNs [240,241]. In this scheme, the row decoder is modified to turn on two read-word lines simultaneously. The sense current of the selected SOT bits flowing through the bit line reflects the values stored in them, and by appropriately setting the reference voltage in the sense-amplifier circuit, a logic AND can be implemented. Instead of using two bits, a single STT-cell based scheme was proposed for performing in-memory computation by fabricating two MTJs on top of one another [242]. Apart from their use in BNNs, in-memory computing architectures using spin devices can be leveraged for bioinformatics (e.g. DNA read alignment) [243,244] and graph-processing applications [245,246]. The results discussed in this section demonstrate the viability of spintronic hardware for non-von Neumann computing systems. At present, ANNs are implemented mostly on conventional computers. Since the weights of the network are stored in memory while the computing is performed in the processor, the flow of information between these two components is both a speed and a power bottleneck. Hardware implementations of ANNs have involved accelerators for matrix multiplication, e.g. GPUs, and in a few cases transistor circuits implementing the weights. However, these efforts are not power- and area-efficient. The biggest stride in the use of alternative solid-state devices to build components of ANNs has been made with memristive technology [247,248]. A single memristor incorporates most of the functionality of the synapse that is used in both ANNs and biological neural networks. The most important of these functions is weight programmability, in which the weights are represented by the conductance of the memristor. The simple device structure of a memristor, i.e. an insulator between two metals, also enables 2D and 3D cross-point architectures that are very efficient at performing the matrix multiplication (between inputs and weights) [249,250]. However, the operating principle of memristive devices, which is mostly based on the stochastic movement of ions, exposes them to cycle-to-cycle variability. In addition, the memristor physics is still not well understood and modeled, so there is device-to-device variability that makes the scaling of memristive architectures challenging. While the effort towards spin devices for neuromorphic computing is fairly recent and at an early stage, the well-established fabrication and integration technology of spintronic memories works in its favor. Going forward, it is also necessary to integrate spin-based synapses and neurons into a complete neural architecture. In order to achieve general-purpose computation, we may need to mimic biological systems such as the human brain, in which one neuron has 10,000 synaptic connections; this requirement is obviously beyond our capability using modern fabrication techniques. If we aim to solve a specific problem, however, a typical cross-point architecture can fulfill the job.
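Returning to the XNOR-based inference step mentioned at the start of this passage, the binary kernel that the SOT array evaluates can be sketched in software as follows (a generic BNN dot product, not the circuit of Refs. [240,241]):

```python
# Binary neural network kernel: with weights and activations in {-1, +1}
# encoded as bits {0, 1}, a dot product reduces to XNOR plus popcount --
# the operation an in-memory SOT array evaluates along its bit lines.

N = 8  # vector length (bits)

def binary_dot(a_bits: int, w_bits: int) -> int:
    """Dot product of two {-1, +1} vectors encoded as N-bit integers."""
    xnor = ~(a_bits ^ w_bits) & ((1 << N) - 1)  # 1 where the bits agree
    matches = bin(xnor).count("1")              # popcount
    return 2 * matches - N                      # agreements minus disagreements

a = 0b10110010  # activation vector: bit 1 -> +1, bit 0 -> -1
w = 0b10010110  # weight vector
print(binary_dot(a, w))  # -> 4 (6 matching bits, 2 mismatching)
```

Mapping this onto non-volatile cells keeps the weights where the computation happens, which is exactly the data-movement saving that in-memory computing targets.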
For the time being, research activities will focus on developing particular recognition and optimization hardware, such as the examples discussed in refs. [231,233,234], accelerating the field towards real applications. The learning part of the recognition task, which is still performed on conventional computers, should also be enabled on the spin architecture in order to realize a fully spin-based non-von Neumann computing system.

V. Spintronics for flexible electronics

A huge segment of the future generation of consumer electronics, such as wearables, medical implants, displays, etc., depends on the fabrication of electronic components on flexible substrates. A key requirement is that the device performance should be comparable to that obtained when the devices are fabricated on conventional rigid substrates. In the field of spintronics, the deposition of exchange-biased magnetic layers on free-standing organic films (e.g. mylar, Kapton, Ultem, etc.) was shown a few decades ago [251]. Later, [Co/Cu]-based GMR multilayers deposited on plastic substrates with a photoresist buffer were shown to have a GMR twice as large as that of films deposited on bare silicon substrates [252]. The GMR value of these multilayers was unaffected by tensile deformations of up to 4.5% when they were grown on elastic poly(dimethylsiloxane) (PDMS) membranes [253]. These GMR layers were also made into a printable ink by dissolving the multilayers deposited on photoresist-coated silicon films in acetone and subsequently mixing the dissolved flakes into a binder solution [254]. Ota et al. have recently shown that GMR devices can even be used for sensing the direction of strain [255]. Since MTJs form the backbone of modern spintronic applications, integrating them on flexible substrates has been a topic of active interest [256-259]. Co/Al2O3/NiFe MTJs fabricated on Kapton substrates show a TMR that is robust against stress/bending, as shown in Fig. 24(a) [258]. It has been shown, however, that the TMR can in fact be engineered with strain for in-plane MTJs with an MgO barrier [256,257,260]. In a series of measurements on MTJs fabricated on silicon substrates it was shown that increasing strain results in a two-fold increase of the TMR [257]. The strain was applied to the sample through a clamp-and-screw setup, and the obtained results are shown in Fig. 24(b,c). It was elucidated that while the parallel resistance of the MTJ channel remains the same under strain, the antiparallel resistance increases, resulting in a larger TMR. The TMR remained small and unperturbed by strain for a non-annealed sample, thereby establishing the sensitivity of quantum tunneling through an epitaxial MgO barrier to strain. In a later work, it was shown that an MTJ stack fabricated on a flexible polyethylene terephthalate (PET) substrate exhibits stable and reliable TMR values; in fact, the TMR of the MTJ on PET was 50% higher than on Si substrates [256]. The MTJs were fabricated by a transfer-print process: as illustrated in Fig. 25, the MTJs were first fabricated on a Si substrate, and the substrate was then etched away using dry etching methods. The suspended MTJ stack was then transferred onto PET, glass, Al foil, PDMS and a nitrile glove (Fig. 26(a)). Figure 26(b) shows that the TMR of the device is stable even after the application of various degrees of stress over time.
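For reference, the figure of merit in these experiments is the standard TMR ratio:

$$ \mathrm{TMR} \;=\; \frac{R_{AP}-R_{P}}{R_{P}}, $$

so the strain results above, where R_AP grows while R_P stays essentially constant, translate directly into the reported two-fold increase of the TMR.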
Strain engineering has been extensively applied to modern p-type field-effect transistors to improve the mobility, and to optoelectronic devices to modify the effective hole mass, in both cases improving the device performance significantly. In the future, flexible spintronics research should aim at the development of complex spintronic circuits on flexible wafers and the integration of silicon components alongside their spintronic counterparts on these flexible wafers. More importantly, active strain engineering to enhance the device performance, to detect the amount of strain, and even to harvest energy can be envisioned using flexible spintronic devices. A more cross-disciplinary approach, which involves adopting lessons from efforts on other flexible-electronics platforms, is essential for a faster implementation.

VI. Spintronics in terahertz

Electromagnetic (EM) radiation with frequencies in the range from 100-300 GHz to several THz is termed terahertz (THz) radiation, and finds applications in spectroscopy, medical imaging, communication, etc. A low-cost and energy-efficient THz source/emitter is desirable to fully develop these THz systems and further expand their applications. The currently used THz emitters based on photoconductive semiconductors, electro-optic crystals (e.g. ZnTe), air-plasma based emitters, etc. have drawbacks in terms of narrow bandwidth, skipped bands, or the requirement of a high-energy laser pump. In view of these shortcomings, there has recently been wide-ranging research interest in exploring spintronics-based THz emitters. THz emission using spintronic devices is related to the ultrafast spin dynamics first revealed in the sub-picosecond demagnetization of Ni by a femtosecond laser pulse [264]. It has been proposed that a superdiffusive transport of spin-polarized electrons is responsible for this ultrafast demagnetization [265]. This mechanism was later confirmed by an experiment involving Ni and Fe layers with parallel and antiparallel alignments between them [266]: when the Ni layer was pumped with a femtosecond laser pulse, an increase (decrease) of the Fe magnetization was observed when the two layers were parallel (antiparallel) to each other. Soon after, Kampfrath et al. used Fe/(Au or Ru) bilayers to detect the superdiffusive spin current [267]. A schematic of their scheme is shown in Fig. 28(a). The laser-induced superdiffusive spin current (Js), on arriving in the metallic Au or Ru layer, is converted into a charge current, Jc ∝ θSH Js × M/|M|, due to the ISHE. The charge pulse is converted into an EM wave with a frequency in the THz spectrum, as governed by Maxwell's equations. The THz radiation was sampled electro-optically, with the results shown in Fig. 28(b): a reversal of the magnetization results in a corresponding reversal of the THz signal, due to the reversal of the spin polarization and hence of the direction of Jc. It should be noted that the emitted THz wave was polarized along the x-direction for a sample magnetization (M) along the y-direction in Fig. 28(a). The dependence of the THz emission on the HM and FM layer thicknesses is shown in Fig. 29(b,c): an increase of the THz signal with increasing HM and FM thickness, followed by its attenuation, is the result of a balance between the limited spin diffusion and the THz absorption in the two layers [269]. It has also been proposed that the peak THz signal at a particular thickness is possibly due to constructive Fabry-Pérot interference at this thickness [268]. Multilayer stacks of repeated FM/HM units have also been demonstrated as an excellent THz source [270]; a peak THz emission was found for three repetitions of these layers.
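The emission chain described above can be summarized as (a sketch, in the thin-film, far-field limit):

$$ \mathbf{J}_c(t) \;\propto\; \theta_{SH}\;\mathbf{J}_s(t)\times\frac{\mathbf{M}}{|\mathbf{M}|}, \qquad E_{THz}(t) \;\propto\; \frac{\partial J_c(t)}{\partial t}, $$

so a sub-picosecond spin-current burst produces a charge-current transient whose time derivative radiates in the THz band; reversing M reverses Jc and hence the sign of the emitted field, as seen in Fig. 28(b).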
Apart from conventional HMs, exotic materials such as the TI Bi2Se3 [271] and monolayer MoS2 [272], in combination with Co, have been demonstrated to emit sizeable THz waves (Fig. 31). Attachment of a collimating Si lens was proposed to collect most of the diverging THz radiation and maximize the power output [273]. The THz output can also be enhanced by passing currents through the heterostructure, resulting in an additional photoconduction-related contribution to the total THz signal [274]. While the initial reports on THz generation assumed that a net magnetization in the system is essential for a finite THz generation, recent reports of THz emitters based on nearly compensated ferrimagnets have proved otherwise. Chen et al. have demonstrated the emission of a finite THz signal from a nearly compensated Co1-xGdx/Pt based heterostructure [275]. The magnitude of this THz signal is comparable to that of a pure-Co based emitter. In fact, the polarity of the emitted THz signal reverses when the composition of Co1-xGdx traverses from a Co-rich to a Gd-rich state, as shown in Fig. 32(a,b). This behavior of RE-TM based THz emitters is due to the localized nature of the moment-carrying f-shell electrons in the RE metals; the contribution to the superdiffusive spin current therefore comes only from the Co sub-lattice. Similar results were reported by Schneider et al. for another RE-TM ferrimagnet, Fe1-xTbx [276]. It was found that a CoGd heterostructure emits a stronger THz signal than the CoTb ones, possibly due to the large out-of-plane anisotropy of CoTb [277]. The anomalous Hall effect (AHE) has recently been put forward as a possible alternative mechanism of THz generation, instead of the ISHE [278]: a single FeMnPt layer without a heavy metal was found to generate a considerable THz signal, as shown in Fig. 32(c). To summarize this section, magnetic heterostructures provide a cheap and efficient solution for THz generation. The peak intensity of the THz radiation generated from a NM/FM structure exceeds that from the ZnTe and GaP based emitters (500 µm thick) which are conventionally used for THz generation. While the ZnTe and GaP THz spectra show considerable gaps, specifically between 3 and 13 THz, the spectrum of the spintronic THz emitter is wider and continuous, and its amplitude over most of the spectral range exceeds that of the ZnTe and GaP based emitters. When compared to a photoconductive switch, the spintronic emitter has a wider bandwidth with a larger intensity above 3 THz; below 3 THz, a photoconductive switch performs better [268]. A significant enhancement of the THz signal in the lower frequency range, below 1 THz, can be achieved by a novel ultra-broadband spintronic THz emitter enhanced by current modulation through a semiconductor channel [274]. The spintronic THz emitter films are easy to fabricate and do not require any high-temperature deposition process or specific substrate. Future work on spintronic THz emitters should focus on further improving the THz signal at lower laser fluence, removing the need for an external magnetic field, and making the emitters robust against external temperature and magnetic fields. Turning to the outlook for spintronic memories, continued improvement of their storage density will make them a lot more competitive over a broader range of the memory pyramid. In addition, the potential of other magnetic memory candidates, such as skyrmions, should also be continuously evaluated. The field of solid-state devices for non-von Neumann computing is itself at a very early stage, with the majority of the success so far being shared by memristors.
However, recently there have been continuous demonstrations of non-von Neumann systems using spin devices. This suggests the viability of spintronics as one of the promising approaches for pursuing alternative computing methodologies [231,233,234]. Being a field under development, alternative computing schemes do not yet have coherently defined requirements for the devices and systems. In both memristors and spintronics, researchers, with the tools at hand, are proposing various standalone devices and architectures which, although solving the specific problem in question, fall short of marching towards coherent, general-purpose hardware for alternative computing needs. Since the viability of spintronic devices for ANNs, biological neural networks and other computing schemes has already been suggested, the next step should be a more joint effort between the device, circuit and system architecture teams to obtain tangible spin-based solutions. At present, the ANNs and ANN-like systems made from memristors and spintronics perform only the inference step, while the learning, and sometimes even the weight storage, is still carried out in conventional computers. Going forward, while it is desirable to move the learning tasks to the new solid-state devices, a hybrid architecture can be a more practical option. Like spin logic, the real application of spins in unconventional computing will become clearer in the coming decade. As detailed in this review, spintronics should not only be approached as a device candidate for computing applications, but its physics should also be exploited for non-computing systems. In the form of a THz emitter, spin devices have already proved themselves very competitive. A fast-track development of THz systems based on these devices can in fact be applied to some applications within the coming few years. While in this review we have discussed some of the major emerging applications of spintronics, there are analogous ongoing research efforts for several other equally important applications, e.g. spin-based electronic oscillators [279,280,281], which may find use in both communication and computing. The low complexity and less stringent device requirements of non-computing systems should help spintronics researchers to quickly transfer a research device from the academic laboratory to industry, a step which requires not only functionality testing of an individual device but also attention to manufacturability and scalability. Finally, flexible electronics will play a big role in the future, especially in consumer electronics. Work on any spin device on rigid wafers such as silicon, for both computing and non-computing needs, should be accompanied by an analogous effort towards developing it on flexible substrates.

VII. Conclusion

In this review, we have discussed the novel emerging areas of spintronic applications. Spin devices proposed for logic computation were discussed in Section II. The low static power consumption of spin elements makes them attractive for logic devices. However, full replication of the speed and versatility offered by CMOS remains one of the foremost challenges in this area of research. In Section III, spintronic memories were discussed with emphasis on the latest generation of magnetic memories. In the final section (Section VI), we detailed the advancement of cost-effective THz emitters based on magnetic/non-magnetic heterostructures.
The THz radiation emitted from these spin devices is of equivalent strength and much broader bandwidth compared to the conventional crystal and semiconductor-based emitters. Overall, we see that spintronics has emerged as a very promising and actively pursued solid-state technology for meeting future (opto)electronic needs in a wide variety of application areas. Future spintronic research should focus on improving spin devices on all fronts that concern the stringent requirements of device practicality. In addition, research directed towards system-level implementation of spintronics should also be pursued.
2020-09-22T01:00:46.215Z
2020-09-21T00:00:00.000
{ "year": 2021, "sha1": "1f571be7d116faee77afdc38fa16cb0b1093125d", "oa_license": null, "oa_url": "http://arxiv.org/pdf/2009.09917", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "1f571be7d116faee77afdc38fa16cb0b1093125d", "s2fieldsofstudy": [ "Physics", "Engineering", "Materials Science" ], "extfieldsofstudy": [ "Materials Science", "Physics", "Engineering" ] }
13368093
pes2o/s2orc
v3-fos-license
Role of the aphid species and their feeding locations in parasitization behavior of Aphelinus abdominalis, a parasitoid of the lettuce aphid Nasonovia ribisnigri Aphid species feeding on lettuce occupy distinct feeding sites: the lettuce aphid Nasonovia ribisnigri prefers to feed on heart leaves, whereas the potato aphid Macrosiphum euphorbiae feeds only on outer leaves. The aphid parasitoid Aphelinus abdominalis, known to be able to regulate M. euphorbiae on many crops, has recently been indicated as a promising biocontrol candidate also for use against N. ribisnigri, a major pest of lettuce. This study therefore examined A. abdominalis parasitization preference between N. ribisnigri and M. euphorbiae and its ability to parasitize aphids feeding on different parts of lettuce plants. In addition, life history traits of A. abdominalis on these aphid species were investigated. In no-choice laboratory experiments on leaf discs and 24 h exposure, A. abdominalis successfully parasitized 54% and 60% of the offered N. ribisnigri and M. euphorbiae, respectively, with no significant difference. In the corresponding choice experiment, however, A. abdominalis had a tendency for a significantly higher preference for M. euphorbiae (38%) compared to N. ribisnigri (30%). Growth chamber experiments on whole plants demonstrated that A. abdominalis was able to parasitize aphids, regardless of their feeding locations on lettuce plants. However, aphid feeding behavior had a significant effect on the parasitization rate. A. abdominalis parasitized significantly higher percentages of M. euphorbiae or N. ribisnigri when aphids were exposed separately to parasitoids on whole lettuce plants as compared with N. ribisnigri exposed only on heart leaf. A significant preference of A. abdominalis for M. euphorbiae compared to N. ribisnigri was also observed in the growth chamber choice experiment. A high percentage of adult emergence (> 84%) and female-biased sex ratio (> 83%) were found irrespective of the aphid species. Introduction Infestation by aphids is a serious problem in the production of lettuce, Lactuca sativa L. (Asterales: Asteraceae), both in glasshouses and under field conditions [1]. Although several species of aphid occur on lettuce globally [2], the green peach aphid Myzus persicae (Sulzer), potato aphid Macrosiphum euphorbiae (Thomas) and lettuce aphid Nasonovia ribisnigri (Mosley) (Hemiptera: Aphididae) are of major importance [3,4,5]. Out of these, N. ribisnigri is the most critical due to its frequent occurrence throughout the growing season and its cryptic feeding habitat with a feeding preference for the heart leaves of lettuce [6,7]. The other two species M. persicae and M. euphorbiae are usually considered of lesser economic importance as they only feed on outer lettuce leaves [6] and occur less frequently during the growing season [4,5,8]. Control strategies for N. ribisnigri populations on lettuce rely largely on the use of insecticides [9,10,11]. However, demand for alternative methods to control N. ribisnigri has been stimulated due to the increased risk of insecticide resistance in aphid populations [12,13], and because of concerns related to the environment [14] and human health [15]. Potential biocontrol methods for N. ribisnigri comprise the use of predators, including syrphids [16,17] and lacewings [18,19], and fungal pathogens [20,21]. A further potential method for N. ribisnigri biocontrol is the use of parasitoids [22]. Shrestha et al. 
[22] evaluated three commercially available parasitoid species for their potential against N. ribisnigri and found Aphelinus abdominalis (Dalman) (Hymenoptera: Aphelinidae) to be the most promising candidate. This parasitoid is believed to originate from Europe, but now occurs also in Asia and North America [23]. A. abdominalis has been used for biocontrol of M. euphorbiae in glasshouse and field crops [23,24]. No information, however, is available regarding the parasitization preferences of A. abdominalis towards N. ribisnigri and M. euphorbiae, which appear simultaneously in lettuce fields. It is thus important to further evaluate the potential of A. abdominalis against N. ribisnigri, taking into account that feeding behavior may influence the degree to which an aphid species is parasitized [25,26]. Parasitoids that attack more than one aphid species show differences in preference and performance in response to various aphid species [27,28,29,30,31,32,33,34]. The preference behavior of parasitoids between aphid species or taxa is influenced by a number of factors such as 1) host quality, with better quality hosts species usually, but not always [31], being preferred over poor quality hosts [27,28]; 2) color of aphid morph, with a greater preference for the green morphs compared to the red morphs [35,36]; 3) aphid size, with smaller aphids usually, but not always [37,38], being preferred over larger ones [27,39]; and 4) aphid age, with a stronger preference for young or intermediate growth stages of aphids over old stages [40,41]. Additional factors that influence parasitoid preference are: parasitoid age, with a greater preference for low quality hosts by short-lived than longer-lived parasitoids [42], and parasitoid egg load, with females with low egg load preferring high quality hosts compared with females with high egg loads [43]. Aphid parasitoid life-history traits such as offspring survival and offspring sex ratio are parameters commonly measured to evaluate parasitoid fitness on different host species [29,31,44]. Aphid parasitoids may be able to regulate the fitness of their offspring in relation to the host species they attack [27,29,31,44]. Ovipositing females may allocate male and female offsprings differentially in different host species. Moreover, different host aphid species may produce changes to the survival of male and female offspring. [27,29,31,44]. It is therefore important for biocontrol programs to investigate whether host aphid species influence parasitoid offspring sex ratio or survival. In addition, the capacity of parasitoids to locate hosts in their feeding sites is vital for the efficiency of a parasitoid as a biocontrol agent [25,26]. Some parasitoids have the capability to find and parasitize aphids feeding on concealed parts of plants [45,46] and vice versa [46]. None of the above-mentioned aspects have been explored in relation to use of A. abdominalis against N. ribisnigri co-occurring with M. euphorbiae. This study therefore examined the parasitization preference of A. abdominalis with regard to N. ribisnigri and M. euphorbiae under laboratory conditions and its capacity to find and parasitize the two aphid species when they are feeding on different areas of the lettuce plant under growth chamber conditions. In addition, female sex ratios and successful adult emergence of A. abdominalis on the two aphid species were also studied to evaluate parasitoid fitness. Materials and methods Plants Iceberg lettuce, L. sativa cv. 
'Mirette' was used as a source of plant material for the laboratory and the growth chamber experiments. Seeds were sown on Jiffy-strip trays and maintained in a glasshouse at 15-18 o C, 55-70% RH and natural light conditions until three true leaves had emerged (approx. 2 weeks after seed sowing). Afterwards, plants were transplanted into 2 L pots filled with peat soil, perlite and vermiculite (mixed at 90:8:2) with a pH of 6-7. These plants were either utilized within 6-10 days for production of aphid cohorts, for rearing of parasitized vs. unparasitized aphids (detached leaflets, lab and growth chamber experiments) or maintained for additional three days in a glasshouse and subsequently transported to the growth chamber. Insects The lettuce aphid N. ribisnigri and the potato aphid M. euphorbiae, originally supplied by Dr. Gemma Hough (Warwick Crop Centre, University of Warwick, UK) and senior research scientist Lesley Smart (Department of Biological Chemistry and Crop Protection, Rothamsted Research, UK), respectively, were reared separately on iceberg lettuce plants inside the insectproof net-covered cages (68 × 75 × 82 cm). They were maintained in a controlled environment glasshouse compartment at 22 ± 1 o C, 70 ± 5% RH and 16:8 L: D. The parasitoid A. abdominalis, supplied as mummies by EWH BioProduction, Tappernøje, Denmark, were placed in plastic Petri dishes (diameter: 15 cm) and kept in a climate cabinet at 22 o C, 70 ± 5% RH and 16:8 L:D. Mummies were checked daily for adult emergence and the cohorts of adults emerging on a same day were reared until the age of three days with the technique described by Shrestha et al. [22]. Aphid cohorts The cohorts of 2-3 rd instar aphids of N. ribisnigri and M. euphorbiae were used for laboratory and growth chamber experiments since these stages of both aphid species have been reported suitable for parasitization by A. abdominalis [47,48]. To obtain cohorts of the two aphid species, adults (10-12 days old) were carefully transferred from the stock culture to uninfested leaves of lettuce. The base of each leaf was wrapped with moist cotton, inserted into a 1.5 ml Eppendorf tube with demineralized water and then placed at the bottom of mesh screened Plexiglass box (17 × 11 × 3 cm) with moist filter paper. These boxes were kept in a climate cabinet at 22 o C, 70 ± 5% RH and 16:8 L: D. After 48 hours, the produced nymphs were gently transferred either to new clean leaves (Eppendorf tube and Plexiglass set up) for the laboratory experiments or to the clean plants for the growth chamber experiments. Aphids were maintained for additional two days for the nymphs to develop into 2-3 rd instars at similar conditions as described above [49,50]. Laboratory experiments: Parasitization preference and life history traits No-choice tests. The no-choice experiments were performed to evaluate the parasitization rates and fitness of A. abdominalis on N. ribisnigri and M euphorbiae by measuring parasitism events (both successful and incomplete, i.e. without mummy formation) as well as a parasitoid emergence rates and sex ratios. The experiment was performed in vented Petri dishes (diameter: 9 cm) lined with a moist filter paper. A circular lettuce leaf disc (diameter: 5 cm) was placed at the bottom of each dish. Twenty aphid individuals of 2-3 rd instar, either of N. ribisnigri or M euphorbiae, were transferred to each lettuce dish by using a fine camel hair brush. 
Aphids were allowed to settle on a leaf disc for one hour before the introduction of a female parasitoid. One mated female parasitoid (4 days old) was released into each Petri dish arena containing N. ribisnigri or M euphorbiae and left for a 24 hour parasitization period in a climate cabinet at 22 o C, 70 ± 5% RH and 16:8 L: D. The female parasitoid was subsequently removed and the number of dead and live aphids in each dish counted under a stereo microscope. The numbers of aphids dying due to host feeding by A. abdominalis was determined based on their shrunken appearance [51]. The live aphids of each leaf disc were transferred to two clean leaves with the petiole wrapped with moist cotton and inserted into a 1.5 ml Eppendorf tube with demineralized water. This was done to avoid degradation of the leaves. These two leaves were placed in a Plexiglass box with moist filter paper and incubated in a climate cabinet under similar conditions as described earlier. After 4-5 days, the filter paper was replaced and if necessary, a new fresh leaf was placed in the vicinity of the old leaf to allow the aphids to translocate themselves. Generally, lettuce leaves remained fresh for at least six days using this setup. Aphids were checked at 1-2 day intervals for two weeks for appearance of mummies (successful parasitization), while only up to nine days for aphids that died without mummification (incomplete parasitization). Aphid mummies that formed in each dish were gently collected using a fine camel hair brush and transferred individually into small transparent medicine cups (diameter = 15 mm) with screened lids. Emergence of adult parasitoids was checked at 1-2 day intervals and emerged parasitoids sexed under a stereo microscope. For each treatment, 12-14 replicates were performed. For the controls, five replicates without addition of parasitoids were used for each aphid species and same procedure as above was followed. Choice test. The choice experiment was conducted in order to assess the preference of A. abdominalis for parasitization with regard to N. ribisnigri and M. euphorbiae. The experimental procedures and experimental conditions were similar as described above except that cohorts of 2-3 rd instar lettuce aphids and potato aphids (n = 20+20) were offered simultaneously on the same leaf disc. The lettuce aphid nymphs were introduced first and allowed to settle for 15 min prior to the releases of the potato aphid nymphs. N. ribisnigri nymphs are easily distinguished under a stereo microscope by their color (red) in contrast with whitish-green potato aphids. The number of replicates for treatment was 15 and the controls (replicates = 5) were performed without addition of parasitoids. Growth chamber experiments: Aphid feeding locations The growth chamber experiments were performed to assess whether the aphid feeding location preference on lettuce plants influences the host finding ability of A. abdominalis by measuring successful parasitization under no-choice and choice conditions. Lettuce plants established in the plant growth chamber were 28 days old after seeding and had five unfolded leaves (4 outer leaves and 1 heart leaf) at the time of experiment initiation. A leaf developed from the central portion of plants was denoted as heart leaf and the leaves developed from peripheral layers as outer leaves. 
Plants were drip-irrigated daily for half an hour each morning and evening in the growth chamber room, and they were maintained at 22°C, 70 ± 5% RH and 16:8 L:D for the duration of the experimental period. No-choice and choice tests. The no-choice tests consisted of three treatments: 1) M. euphorbiae inoculated on leaves of a lettuce plant, 2) N. ribisnigri inoculated on leaves of a lettuce plant and 3) N. ribisnigri inoculated on only the heart leaf of a lettuce plant. Fifty 1st instar aphids were inoculated on each plant in all three treatments (see section Aphid cohorts), but the number of aphid individuals inoculated onto each leaf of a lettuce plant varied among the treatments. In treatment 1, M. euphorbiae individuals were inoculated on outer leaves (4 leaves at the time of inoculation) with 12-13 individuals (totaling 50 aphid individuals) per leaf, since it is known that this aphid species does not colonize the heart leaves [6]. In treatment 2, N. ribisnigri individuals were inoculated on all five leaves (4 outer leaves and 1 inner leaf) with 10 individuals per leaf, because this aphid species is known to colonize not only the heart leaf but also the outer leaves [7]. In treatment 3, 50 N. ribisnigri individuals were inoculated only on the heart leaf and the outer leaves were removed one day before the aphids' introduction, as the heart leaf is the most preferred feeding site of N. ribisnigri on lettuce plants [7]. The removal of outer leaves in treatment 3 was done to avoid the movement of aphids to outer leaves and also to obtain the best estimate of A. abdominalis' parasitization on aphids situated on this leaf. With respect to the choice test, fifty 1st instar nymphs of each aphid species (totaling 100 aphid individuals) were established simultaneously on each lettuce plant. The inoculation of M. euphorbiae or N. ribisnigri individuals was carried out in a similar fashion as in treatments 1 and 2 of the no-choice tests, respectively. From this point forward, the experimental procedures for both choice and no-choice tests were the same. Plants established in the growth chamber were transported to the insect inoculation chamber, where the aphids were carefully inoculated on the dorsal side of the leaves by using a fine camel hair brush. Aphids were allowed to settle on the plants for 1-2 hours, after which the plants were transported back to their original location in the growth chamber. Each aphid-inoculated plant was kept separately in an acrylic cylindrical insect cage (diameter = 18 cm and height = 12 cm) with 5-6 mesh screened holes (diameter = 5 cm) on the side. Forty-eight hours after inoculation, during which the aphids were allowed to distribute themselves on the plants and develop into 2-3rd instars, five female A. abdominalis parasitoids (mated, 4 days old) were released onto the top of the plant canopy of each aphid-inoculated plant. Parasitoids were then allowed to parasitize for 48 hours. Afterwards, plants were transported to the insect inoculation chamber and carefully removed from pots in order to minimize the loss or escape of aphids. The number of live aphids present on each plant leaf was counted and each aphid transferred onto uninfested leaves (Eppendorf tube and Plexiglass set up) and incubated in a climate cabinet at 22°C, 70 ± 5% RH and 16:8 L:D for 2 weeks. The subsequent handling of the aphids as well as the scoring of data were conducted as described above for the laboratory experiment.
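The scoring just described translates directly into the parasitism percentages analysed in the Statistical analysis subsection below. A minimal sketch of that bookkeeping is given here in Python (the actual analysis was carried out in R 2.15.1); the function names and the replicate values are purely illustrative and are not the experimental data:

```python
import numpy as np
from scipy.stats import f_oneway

def percent_successful(mummified, exposed, host_fed):
    """Successful parasitism (%) = mummified / (exposed - host-fed) * 100."""
    return 100.0 * mummified / (exposed - host_fed)

def angular(percent):
    """Angular (arcsine square-root) transformation of a percentage."""
    return np.degrees(np.arcsin(np.sqrt(percent / 100.0)))

# Illustrative replicate-level percentages for the two aphid species
# (made-up numbers, not the observed values).
n_ribisnigri = np.array([55.0, 48.0, 60.0, 52.0, 58.0])
m_euphorbiae = np.array([62.0, 57.0, 66.0, 59.0, 61.0])

# One-way ANOVA on the angular-transformed values, as in the no-choice comparison.
F, p = f_oneway(angular(n_ribisnigri), angular(m_euphorbiae))
print(f"F = {F:.2f}, P = {p:.3f}")
```

Corrected mortality for incomplete parasitism and the non-parametric comparisons of adult emergence and sex ratio would follow the same pattern, e.g. with scipy.stats.kruskal and scipy.stats.mannwhitneyu.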
There were twelve replicates (each plant = one replicate) for each treatment in both the no-choice and choice tests. The controls (replicates = 5) were performed in the absence of any parasitoids.

Statistical analysis

The data were analysed in R 2.15.1 [52]. For all data, a normal quantile-quantile plot was first performed to check the normality of residuals and the equality of residual variances. An angular transformation was applied to achieve a normal distribution prior to the statistical tests. Tukey contrast pairwise multiple comparisons were used to test for significant differences in means [53]. For the laboratory data set (Petri dish setup), one-way analysis of variance (ANOVA) was performed to test the effect of aphid species on the percentages of successful and incomplete parasitization in the no-choice experiments, and for any differences in successful or incomplete parasitization when the two aphid species were offered simultaneously in the choice experiment. The percentage of successful parasitism was calculated as (number of mummified aphids/total number of aphids exposed minus host-fed aphids) × 100, and incomplete parasitism as (number of corrected dead aphids without signs of mummification/total number of aphids exposed minus host-fed aphids) × 100 [22]. Dead aphids recorded in the incomplete parasitization group [54] were corrected for control mortality [55] prior to calculation and statistical analysis. Similarly, for the growth chamber data set, one-way analysis of variance (ANOVA) was performed to examine the effect of aphid feeding site on the successful parasitization percentage in the no-choice experiment, and for any differences in successful parasitization when the two aphid species were offered simultaneously in the choice experiment. The percentage of successful parasitism was calculated as (number of mummified aphids recorded per plant/total number of aphids exposed) × 100. The adult emergence and sex ratio data were found to be non-normally distributed even after the angular transformation, and the non-parametric one-way analysis of variance (Kruskal-Wallis test) was therefore used to test for differences. A Mann-Whitney U-test was used as a post hoc test for multiple comparisons between the means.

Results

Laboratory experiment

Parasitization. This study showed that A. abdominalis has the ability to successfully parasitize the two aphid species N. ribisnigri and M. euphorbiae when they were offered simultaneously or separately to a parasitoid on leaf discs. In no-choice situations, A. abdominalis successfully parasitized 54.02 ± 5.13% and 60.52 ± 5.35% of N. ribisnigri and M. euphorbiae, respectively, offered within a 24 h exposure period. There was no significant difference in percent parasitism between the two aphid species (df = 1, 28; F = 0.94; P = 0.34) (Fig 1A). However, in the choice situation, there was a tendency for A. abdominalis to successfully parasitize more M. euphorbiae than N. ribisnigri (df = 1, 28; F = 4.04; P = 0.05), with parasitization of 38 ± 3.32% and 30 ± 2.50%, respectively (Fig 1B). With respect to incomplete parasitization, a very low percentage (less than 6%) of aphid mortality occurred, and no significant differences were detected when the two aphid species were offered simultaneously on the same leaf disc (df = 1, 24; F = 0.01; P = 0.89) or on separate leaf discs (df = 1, 28; F = 0.00; P > 0.98) (Fig 1).

Growth chamber experiments

This study showed that A.
abdominalis has the capacity to find and parasitize aphids feeding not only on an exposed area (outer leaves) but also on a concealed area (the lettuce heart leaf). There was a significant effect of aphid feeding location on the host finding ability of A. abdominalis in the no-choice tests (df = 2, 33; F = 17.46; P < 0.001). The percentage of M. euphorbiae or N. ribisnigri parasitized by A. abdominalis on whole lettuce plants was significantly higher than the percentage of N. ribisnigri parasitized when exposed only on the heart leaves (Fig 2A). There was no significant difference in the mummification of the two aphid species exposed to parasitoids on whole plants. In the choice test, however, when lettuce plants were inoculated with M. euphorbiae and N. ribisnigri simultaneously, a significant difference was detected in the degree of successful parasitization between the two aphid species (df = 1, 22; F = 5.43; P = 0.03). A. abdominalis showed a significant preference for M. euphorbiae over N. ribisnigri (Fig 2B).

Discussion

An understanding of parasitoid preference for parasitization among different aphid species, and of the ability of parasitoids to find aphids feeding on different plant locations, are important aspects in the development of efficient biocontrol strategies against target pest populations [25,26]. The choice experiment showed that A. abdominalis showed a tendency towards higher successful parasitization of M. euphorbiae compared with N. ribisnigri when they were offered simultaneously on the same leaf disc. Parasitoid preference for various aphid species has been examined previously [27,28,35,36,56]. For example, Bueno et al. [55] and Tepa-Yotto et al. [28] showed that Lysiphlebus testaceipes (Cresson) (Hymenoptera: Braconidae) preferred the cotton aphid Aphis gossypii (Glover) over three other aphid species: the green peach aphid, the cowpea aphid Aphis craccivora (Koch) and the mustard aphid Lipaphis erysimi (Kaltenbach). However, limited information exists regarding the parasitization preference of A. abdominalis between aphid species, except for the study by Wahab [57], who indicated that it preferred the shallot aphid Myzus ascalonicus (Doncaster) over the ornate aphid M. ornatus (Laing) or the mottled arum aphid Neomyzus circumflexum (Buckton). The tendency towards a higher preference of A. abdominalis for M. euphorbiae over N. ribisnigri found in our study is likely to be an effect of aphid size, since individuals of the former species were relatively smaller (G. Shrestha, pers. obs.), and therefore presumably easier to handle, than those of the latter. Preference for small-sized aphids was also observed for the parasitoid Monoctonus paulensis (Ashmead) (Hymenoptera: Braconidae) by Chau and Mackauer [27], who reported that small-sized aphids have a less well-developed anti-parasitoid defense behaviour and are therefore easier to subdue. In addition, the green color of M. euphorbiae (as opposed to the red color of N. ribisnigri) could have played a role in the preference of A. abdominalis for this aphid species [35,36,58]. For instance, Libbrecht et al. [36] reported that when parasitoids were given a choice, green morphs of the pea aphid Acyrthosiphon pisum (Harris) were attacked significantly more by Aphidius ervi (Haliday) (Hymenoptera: Braconidae) when the neighboring colony consisted of red morphs. The no-choice laboratory experiments showed no significant difference in the degree of successful parasitization between the two aphid species, N. ribisnigri and M.
euphorbiae, and the parasitization percentages obtained for both aphid species are consistent with previous findings [22,24]. This suggests that both aphid species are high-quality hosts for A. abdominalis, as is also substantiated by the high percentage of adult emergence and the strongly female-biased sex ratio observed irrespective of aphid species. Sex ratio is important for aphid parasitoids, including A. abdominalis, as it affects parasitoid population growth rate and effectiveness in biocontrol [59,60]. Parasitoids with female-biased sex ratios usually perform better in inoculative biocontrol programmes aimed at temporary establishment and reproduction in cropping systems [48,59,60]. Aphid parasitoid sex ratios can be influenced by a variety of host-related factors such as quality [31], age [48,54], size [61] and species [44]. On suitable host aphid species, the sex ratio of parasitoid offspring emerging from small hosts tends to be male-biased, and that from intermediate or large hosts female-biased [29]. However, a male-biased sex ratio in offspring from large hosts (fourth instars or adults) has been observed in some parasitoid species [54]. Our study found that sex ratios were female-biased on aphid nymphs of intermediate age of both N. ribisnigri and M. euphorbiae, indicating that more female parasitoids emerged from higher-quality hosts. This result supports the host quality model of Charnov and Skinner [62]. Similar results have also been reported for other aphid parasitoids [27,44]. Irrespective of the different feeding locations of the aphids, the no-choice growth chamber experiments demonstrated that A. abdominalis has the capacity to find and parasitize aphids feeding both on outer leaves and on the heart leaves of lettuce. No reports are available on the effect of feeding location on the parasitization behavior of A. abdominalis. However, our results resemble the findings of Stadler and Völkl [46], who reported that other parasitoids such as Aphidius colemani (Viereck) (Hymenoptera: Braconidae) partitioned parasitization or searching activity for the banana aphid Pentalonia nigronervosa (Coquerel) between open and concealed areas of banana plants. Our results, however, also showed that A. abdominalis parasitized a significantly lower proportion of aphids when offered N. ribisnigri inoculated only on heart leaves compared to when N. ribisnigri were offered on whole lettuce plants. This indicates that only a proportion of the lettuce aphids located on the heart leaf were accessible to the parasitoid, presumably as a result of some aphids being positioned on the more open part and others on the deeper and narrower part of the heart leaf (G. Shrestha, pers. obs.). In contrast with our findings, Stadler and Völkl [46] showed that the parasitoid L. testaceipes parasitized P. nigronervosa only on open areas and not on the cryptic areas of banana plants. Thus, our results and these previous findings illustrate that the ability of parasitoids to find hosts feeding in cryptic locations differs between parasitoid species, probably in combination with plant morphology. With respect to the choice growth chamber experiment, our study showed that A. abdominalis preferred to parasitize M. euphorbiae as compared with N. ribisnigri. This will reduce the parasitoid's ability to regulate populations of N. ribisnigri when both aphid species occur simultaneously on lettuce plants, as A. abdominalis will probably encounter and parasitize M. euphorbiae feeding on outer leaves [6] more frequently than N.
ribisnigri feeding on heart leaves [7]. This is in accordance with results from a study by Gardner and Dixon [45], which showed that the rose grain aphid Metopolophium dirhodum (Walker) feeding on wheat leaves was parasitized more by Aphidius rhopalosiphi (DeStefani-Perez) (Hymenoptera: Braconidae) than the English grain aphid Sitobion avenae (Fabricius) feeding on the cryptic part (ear) of the wheat. In conclusion, the present results indicate that A. abdominalis has potential for inoculative biocontrol of N. ribisnigri and M. euphorbiae. The results suggest that the use of A. abdominalis against N. ribisnigri may not be adequate on its own, but that it may be considered as an additional option to be integrated with other potential biocontrol agents (e.g., predators and fungal entomopathogens) of N. ribisnigri [19,21,63]. Further long-term field or glasshouse studies that include several potential biocontrol agents and several aphid species are therefore needed in order to further validate the potential of biocontrol agents to suppress the N. ribisnigri population.
2018-04-03T00:36:20.379Z
2017-08-30T00:00:00.000
{ "year": 2017, "sha1": "daa61ee6ad73c6aff9528cecdbcca0e47fe377c0", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0184080&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "daa61ee6ad73c6aff9528cecdbcca0e47fe377c0", "s2fieldsofstudy": [ "Biology", "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
119180923
pes2o/s2orc
v3-fos-license
Probing the non-equilibrium dynamics of hot and dense QCD with dileptons

It is argued that, in heavy ion collisions, thermal dileptons are good probes of the transport properties of the medium created in such events, and also of its early-time dynamics, usually inaccessible to hadronic observables. In this work we show that electromagnetic azimuthal momentum anisotropies display a sensitivity not only to the shear relaxation time and to the initial shear-stress tensor profile, but also to the temperature dependence of the shear viscosity coefficient.

Introduction

One of the main goals of relativistic heavy-ion colliders, either the Relativistic Heavy Ion Collider (RHIC, at Brookhaven National Laboratory) or the Large Hadron Collider (LHC, at CERN), is to investigate the thermodynamic and transport properties of the hot and dense phase of QCD. Much work has been concentrated on the determination of an effective value of the shear viscosity coefficient from analyses of relativistic heavy-ion collisions, but so far such investigations have been performed mostly by comparing to hadrons produced at the final stages of the collision. Electromagnetic radiation constitutes a class of complementary and penetrating probes that are sensitive to the entire space-time history of nuclear collisions, including its very early stages. In this contribution we show that thermal dileptons are affected by the transport properties of the fluid and by the non-equilibrium aspects of the initial state that are usually inaccessible to hadronic probes. We establish that the azimuthal momentum anisotropies of thermal dileptons are particularly sensitive to the temperature dependence of the shear viscosity coefficient. We also show the potential of thermal dileptons in differentiating between possible initial shear-stress tensors and shear relaxation times.

Fluid-dynamical model

We will discuss only Au-Au collisions at $\sqrt{s_{NN}} = 200$ GeV. The time evolution of the hot and dense medium created at RHIC is modeled using MUSIC, a 3+1D hydrodynamical simulation [1]. The main equations of motion are the conservation laws of energy and momentum, given by the continuity equation for the energy-momentum tensor $T^{\mu\nu}$, i.e., $\partial_\mu T^{\mu\nu} = 0$. As usual, $T^{\mu\nu} = \varepsilon\, u^\mu u^\nu - \Delta^{\mu\nu} P + \pi^{\mu\nu}$, with $\varepsilon$ being the energy density, $P$ the thermodynamic pressure, $u^\mu$ the fluid four-velocity, $\pi^{\mu\nu}$ the shear-stress tensor, and $\Delta^{\mu\nu} = g^{\mu\nu} - u^\mu u^\nu$ the projection operator onto the 3-space orthogonal to the velocity, with metric tensor $g^{\mu\nu} = \mathrm{diag}(1,-1,-1,-1)$. The lattice QCD equation of state is used to relate $P$ and $\varepsilon$ [2]. The conservation laws are complemented by a relaxation equation for the shear-stress tensor, given by a version of Israel-Stewart (I-S) theory [3,4], where $\pi^{\mu\nu}_{\mathrm{NS}} = 2\eta\,\sigma^{\mu\nu} = 2\eta\,\Delta^{\mu\nu}_{\alpha\beta}\,\partial^\alpha u^\beta$ is the Navier-Stokes limit of the shear-stress tensor, with $\Delta^{\mu\nu}_{\alpha\beta} = \left(\Delta^\mu_\alpha \Delta^\nu_\beta + \Delta^\mu_\beta \Delta^\nu_\alpha\right)/2 - \Delta_{\alpha\beta}\,\Delta^{\mu\nu}/3$ being the double, symmetric, traceless projection operator. In its simplest form, I-S theory has two transport coefficients: the shear viscosity $\eta$, also present in Navier-Stokes theory, and the shear relaxation time $\tau_\pi$, which only exists in I-S theory. We use a constant value $\eta/s = 1/4\pi$ as the default value for the shear viscosity to entropy density ratio. In the QGP phase, i.e. for temperatures above a transition temperature $T_{\mathrm{tr}} = 0.18$ GeV, we will also consider an $\eta/s$ with a linear temperature dependence of the form $\eta/s(T) = a\,(T/T_{\mathrm{tr}} - 1) + 1/4\pi$.
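For orientation, a minimal schematic form of such a relaxation equation, in terms of the quantities just defined, is

\[
\tau_\pi\,\Delta^{\mu\nu}_{\alpha\beta}\, u^\lambda \partial_\lambda \pi^{\alpha\beta} \;+\; \pi^{\mu\nu} \;=\; \pi^{\mu\nu}_{\mathrm{NS}} \;=\; 2\eta\,\sigma^{\mu\nu},
\]

i.e., the shear-stress tensor relaxes towards its Navier-Stokes value on the timescale $\tau_\pi$ (the actual implementation in MUSIC contains additional second-order terms, so this should be read as a sketch rather than the exact equation solved in this work). The temperature dependence of $\eta/s$ and the associated relaxation time can also be tabulated directly; the short Python sketch below is purely illustrative: the slope values are those listed in the next paragraph, while the temperature grid, the assumption of a constant $\eta/s$ below $T_{\mathrm{tr}}$, and the enthalpy input are placeholders rather than quantities taken from the simulation.

```python
import numpy as np

T_TR = 0.18                      # transition temperature in GeV (from the text)
ETA_S_MIN = 1.0 / (4.0 * np.pi)  # default constant eta/s

def eta_over_s(T, a):
    """eta/s(T) = a*(T/T_tr - 1) + 1/(4*pi) above T_tr; constant below (assumed)."""
    T = np.asarray(T, dtype=float)
    return np.where(T > T_TR, a * (T / T_TR - 1.0) + ETA_S_MIN, ETA_S_MIN)

def tau_pi(eta, enthalpy, b_pi=5.0):
    """Shear relaxation time tau_pi = b_pi * eta / (epsilon + P)."""
    return b_pi * eta / enthalpy

T = np.array([0.20, 0.30, 0.40])            # GeV, illustrative temperature grid
for a in (0.0, 0.2427, 0.5516):             # slope values used in the study
    print(f"a = {a}:  eta/s(T) = {eta_over_s(T, a)}")
```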
The effect of the temperature dependence of $\eta/s$ on hadronic and electromagnetic flow observables is tested by modifying the slope parameter $a$. The values of $a$ employed in this work are $a = 0$, 0.2427, and 0.5516, with $a = 0$ corresponding to the constant default value. The shear relaxation time is assumed to be of the form $\tau_\pi = b_\pi\, \eta/(\varepsilon + P)$. The role of $\tau_\pi$ is to govern the rate at which $\pi^{\mu\nu}$ evolves and relaxes towards its Navier-Stokes limit. The default value used in this study is $b_\pi = 5$. Here, we test the effect of larger relaxation times by also considering $b_\pi = 10$ and 20. The initial energy density profile is determined by the Monte-Carlo Glauber model, with all the free parameters being tuned to describe the multiplicity and elliptic flow of hadronic observables at RHIC's highest energy. The initial value of the shear-stress tensor is also varied in this work and is parametrized in the following way: $\pi^{\mu\nu}_0 = c \times 2\eta\sigma^{\mu\nu}$. The parameter $c$ controls the deviation of the initial state from local thermodynamic equilibrium. Here, we set $c = 0$, 1/2, and 1, with the default value being zero. The initial velocity profile is always set to zero in hyperbolic coordinates.

Thermal dilepton rates

Thermal dilepton rates can generically be expressed in the form of Eq. (2), where $\alpha$ is the electromagnetic fine structure constant, $\Pi^R_{\gamma^*} = \Pi^{R,\mu}_{\gamma^*,\mu}$ is the trace of the retarded virtual photon self-energy, and $M^2 = q^2$, where $M$ is the virtual photon invariant mass. This expression is valid at leading order in $\alpha_{em}$, but is exact at all orders of $\alpha_s$ [5]. We have used the Born rate in this paper, which corresponds to the quark-antiquark annihilation rate into dileptons. Viscosity is included via a deviation of the thermal distribution functions $n$ entering the evaluation of $\Pi^R_{\gamma^*}$, such that $n \to n + \delta n$, where $\delta n(p) = G(p)\, n(p)\,(1 \pm n(p))\, p^\mu p^\nu \pi_{\mu\nu} / \left[2T^2(\varepsilon + P)\right]$, and $G(p)$ is a function that must be determined through the use of microscopic physics. In order to find $G(p)$, we solved the Boltzmann equation assuming a massless gas of particles with constant $2 \to 2$ cross section. The general form of the thermal dilepton rate, Eq. (2), can be applied in the hadronic sector (low temperatures) via the introduction of the Vector Dominance Model (VDM). Through VDM, $\Pi^R_{\gamma^*}$ is expressed in terms of $D^R_V$, the vector meson ($V$) retarded propagator. A key ingredient in evaluating $D^R_V$ is the vector meson self-energy $\Pi_V$, the latter being presented in detail in Ref. [6].

Results

In the left panels of Figures 1, 2, and 3, we show the differential elliptic flow of charged hadrons as a function of transverse momentum, $v_2(p_T)$. In the right panels of the same figures we show the integrated elliptic flow of thermal dileptons as a function of their invariant mass, $v_2(M)$. In Figure 1 $\eta/s$ was varied, while in Figures 2 and 3 $\pi^{\mu\nu}_0$ and $\tau_\pi$ were changed, respectively. In each case, the parameters that are not varied are kept at their default values. For each parameter configuration, we computed 200 events, all in the 20-40% centrality class. The color bands in the plots indicate the statistical uncertainties of the calculations. We note that our results for the charged hadron $v_2(p_T)$ are in good agreement with PHENIX data, which correspond to the points in the left panels of our figures. It was already shown in Ref. [7] that charged hadrons have a small sensitivity to the $\eta/s(T)$ in the QGP phase at the top RHIC energy.
The left panel of Figure 1 illustrates that this behavior still holds true for the temperature dependence of $\eta/s$ used in this study. In addition, the results plotted in the left panels of Figures 2 and 3 confirm that the elliptic flow of charged hadrons at RHIC's highest energy has a very small sensitivity also to variations of $\pi^{\mu\nu}_0$ and of $\tau_\pi$. Even though it is not shown here, we verified that the same is true for the transverse momentum spectra of charged hadrons. The situation is not the same when it comes to thermal dileptons. The effect of varying $\pi^{\mu\nu}_0$ and $\tau_\pi$ is visible on $v_2(M)$, but it is still relatively modest, as seen in the right panels of Figures 2 and 3. However, the magnitude of the slope of $\eta/s$ as a function of $T$ has a sizeable influence on the elliptic flow of thermal dileptons; this is shown in the right panel of Figure 1. The change in the hydrodynamical evolution induced by a $T$-dependent $\eta/s$ occurs far from the freeze-out surface and is therefore only accessible to electromagnetic probes. At freeze-out (for collisions at RHIC energies), most of the memory of different values of $\eta/s$ in the QGP phase has faded: the charged hadron $v_2$ is thus mostly unaffected (see left panel of Figure 1 and Ref. [7]).

Conclusions

In this contribution, we showed that thermal dileptons are affected by the transport properties of the QGP and by non-equilibrium aspects of the initial evolution that are usually inaccessible to hadronic probes. For the first time, we explicitly demonstrate that the invariant mass distribution of dileptons and their azimuthal momentum anisotropy have a small but non-negligible dependence on the magnitude of the shear relaxation time and on the value of the initial shear-stress tensor. Importantly, virtual photons may also reveal the temperature dependence of the shear viscosity coefficient. This endeavor reaffirms the potential that penetrating probes, such as dileptons, have in furthering our understanding of QCD at high temperatures and densities. We expect that, as experimental uncertainties become smaller, such probes will play a more dominant role in the extraction of the initial state and transport properties of the bulk QCD matter created in ultrarelativistic heavy ion collisions at RHIC and at the LHC.
2014-08-26T13:58:25.000Z
2014-08-05T00:00:00.000
{ "year": 2014, "sha1": "d59cd12320bd364e4c2330f38f6a50833811c3d6", "oa_license": "elsevier-specific: oa user license", "oa_url": "http://manuscript.elsevier.com/S0375947414003388/pdf/S0375947414003388.pdf", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "d59cd12320bd364e4c2330f38f6a50833811c3d6", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
15232837
pes2o/s2orc
v3-fos-license
Mechanisms driving alteration of the Landau state in the vicinity of a second-order phase transition

The rearrangement of the Fermi surface of a homogeneous Fermi system upon approach to a second-order phase transition is studied at zero temperature. The analysis begins with an investigation of solutions of the equation $\epsilon(p)=\mu$, a condition that ordinarily has the Fermi momentum $p_F$ as a single root. The emergence of a bifurcation point in this equation is found to trigger a qualitative alteration of the Landau state, well before the collapse of the collective degree of freedom that is responsible for the second-order transition. The competition between mechanisms that drive rearrangement of the Landau quasiparticle distribution is explored, taking into account the feedback of the rearrangement on the spectrum of critical fluctuations. It is demonstrated that the transformation of the Landau state to a new ground state may be viewed as a first-order phase transition.

Introduction

In the Landau-Migdal theory of Fermi liquids [1,2], the ground state of a homogeneous Fermi system is described in terms of a quasiparticle momentum distribution $n_F(p,T)$ that coincides with the momentum distribution of the ideal Fermi gas. This theory has been remarkably successful in advancing our qualitative and quantitative understanding of a broad spectrum of Fermi systems, including bulk liquid $^3$He, conventional superconductors, and nucleonic subsystems in neutron stars. However, the theory is known to fail in the strongly correlated electron systems present in high-$T_c$ compounds. Certain experimental results obtained very recently [3,4,5] may prove decisive to an understanding of this failure. The systems involved are a dilute two-dimensional (2D) electron gas and 2D liquid $^3$He. The experiments show how, under variation of the density, these systems progress from conditions of moderate correlation to the regime of very strong correlation. A striking feature is that both systems appear to experience a divergence of the effective mass $M^*$ as the density approaches a critical value $\rho_\infty$ associated with some kind of phase transition, which is presumably of second order [5]. We base our analysis on a necessary condition for the stability of the Landau state, namely that the change $\delta E_0$ of the ground-state energy $E_0$ remain positive for any admissible variation from the quasiparticle distribution $n_F(p) = \theta(p_F - p)$, while keeping the particle number unchanged. Explicitly, this condition reads $\delta E_0 = \int \xi(p; n_F)\, \delta n_F(p)\, d\tau > 0$ (1) for any variation $\delta n_F(p)$ satisfying $\int \delta n_F(p)\, d\tau = 0$ (2). In these equations, $d\tau$ is the volume element in momentum space, while $\xi(p) = \epsilon(p) - \mu$ is the single-particle (sp) spectrum, measured from the chemical potential $\mu$ and evaluated with the distribution $n_F(p)$. The condition (1) holds provided the equation $\xi(p) = 0$ (3) has the single root $p = p_F$: admissible variations obey $\delta n_F(p) \geq 0$ where $n_F(p) = 0$ and $\delta n_F(p) \leq 0$ where $n_F(p) = 1$, so positivity of $\delta E_0$ requires $\xi(p)$ to change sign only at the Fermi surface. Otherwise, it is violated, the Landau state loses its stability, and the ground state must take another form, implying a rearrangement of single-particle degrees of freedom. In weakly correlated Fermi systems, $\xi(p)$ is a monotonic function of $p$, so that equation (3) has no extra roots. However, as correlations build up, the character of the curve $\xi(p)$ may change. Indeed, it becomes non-monotonic in the vicinity of an impending second-order phase transition, when critical fluctuations of wave number $q_c > 0$ produce a diverging susceptibility and hence a collapse of the corresponding collective degree of freedom.
Let the second-order phase transition occur at a critical density $\rho_c$. As we shall see, there is another critical density $\rho_b$ at which a bifurcation arises in equation (3), resulting in the emergence of two additional roots $p_1$, $p_2$ (see figure 1).

Figure 1. Illustration of the emergence of additional roots $p_1$, $p_2$ of equation (3).

The distance between these extra roots increases linearly from zero in proportion to $|\rho - \rho_b|$. It should be emphasized that the stability condition (1) is never violated when applied to variations of the quasiparticle distribution $n(p)$ for momenta lying beyond the interval $[p_1, p_2]$. Hence, at $|\rho - \rho_b| \ll \rho$ the rearrangement process is confined to a constricted domain in momentum space. Accordingly, a rearrangement that entails a major alteration of the ground state in configuration space, involving all of the occupied sp states, is disfavored energetically and is therefore irrelevant to the present study. In particular, Mott-Hubbard localization is ruled out. For this reason our attention will be focused on two plausible scenarios for the rearrangement of the momentum distribution $n_F(p)$. In the first scenario, modification of the Landau state consists in the formation of empty spaces in momentum space that have been named Lifshitz bubbles (LB). In the LB phase, the quasiparticle occupation numbers have the usual values 0 and 1, but the Fermi surface becomes multi-connected. In fact, this and related phenomena were studied in model problems more than 20 years ago [6,7]. In the limit $|\rho - \rho_b| \ll \rho$, the LB mechanism has no rivals, provided the interval $[p_1, p_2]$ is not located in the immediate vicinity of the Fermi momentum $p_F$. Otherwise, there exists a novel competitor called fermion condensation [8,9,10,11,12], which is the second scenario to be examined here. Fermion condensation is a rearrangement of the Landau state leading from the Fermi step $n_F(p)$ to a continuous quasiparticle momentum distribution $n(p)$ having no Migdal jump at $p_F$. In the region $C$ adjacent to the original Fermi surface where $n(p)$ departs from $n_F(p)$ by dropping smoothly from 1 to 0, the sp spectrum turns out to be completely flat, with $\epsilon(p) = \mu$. This behavior gives rise to a singular, $\delta$-function term in the density of states $\rho(\varepsilon)$. Considered as a phase transition, fermion condensation does not break any symmetry, and has much in common with the classical gas-liquid phase transition [11]. However, the presence of the singularity in $\rho(\varepsilon)$ enhances the feedback of the rearrangement process on the spectrum of the relevant critical fluctuations, which, in its turn, affects the competition between the two mechanisms proposed for rearrangement of the Landau state. After investigating the nature of the instability of the Landau state, we shall illuminate the competition between LB and FC rearrangement scenarios by considering a simple model, in which the softening effect is assumed to depend linearly on the phase volume of region $C$ occupied by the fermion condensate. It will be found that formation of the FC state exerts the greater impact on the collective degree of freedom. This being the case, we demonstrate that (i) the FC phase wins the contest with the LB reconfiguration, and (ii) the corresponding transformation of the Landau state is a first-order phase transition.
Instability of the Landau state

To gain detailed insight into the emergence of the bifurcation point in equation (3), we employ the Landau relation [1,19], $d\xi(\mathbf{p})/d\mathbf{p} = \mathbf{p}/M + \int f(\mathbf{p},\mathbf{p}')\, \partial n(\mathbf{p}')/\partial \mathbf{p}'\, d\tau'$ (4), which connects the quasiparticle group velocity $d\xi/dp$ with the momentum distribution $n(p)$ in terms of the Landau scattering amplitude $f$. First, we consider the case $q_c \sim p_F$, which applies to several phase transitions of fundamental interest. One of these is pion condensation, predicted to occur in (3D) neutron matter due to collapse of the collective spin-isospin mode with pion quantum numbers [13,14,15,16]. In this situation, the leading term in the amplitude $f$, being proportional to the singular term in the static spin-isospin susceptibility, has the form [13] $f(q) = g/\left[\kappa^2(\rho) + (q^2/q_c^2 - 1)^2\right]$ (5), where $g$ is a positive coupling constant and the stiffness coefficient $\kappa^2(\rho)$ vanishes at the critical density $\rho_c$. The same form of $f$ is expected to apply in two-dimensional liquid $^3$He, where spin fluctuations play an important role [17]. The sp spectrum $\xi(p)$ in the Landau state, with quasiparticle distribution $n_F(p)$, may be evaluated in closed form by means of equation (4). Substituting the expression (5) for the amplitude $f$ and performing the integration on the right-hand side, we obtain the group velocity $d\xi(p, n_F)/dp$ in closed form; further integration yields the spectrum itself, expressed in terms of a dimensionless function $w(p)$ and the free spectrum $\xi_0(p) = p^2/2M - \mu$. Results of numerical calculations for neutron matter are shown in figures 2 and 3. For simplicity, we take the coupling constant in the amplitude (5) to be $g = 1/(2m_\pi^2)$, corresponding to bare $\pi NN$ vertices. The spectrum $\xi(p)$, evaluated with the critical momentum $q_c = 0.9\, p_F$ and for four values of the parameter $\kappa$, is displayed in panel (a) of figure 2. A new root $p_b \sim 0$ of equation (3) is seen to appear at $\kappa_b \simeq 0.356$, signaling that the Fermi step has become unstable. It is worth noting that at the customary values [13] of the critical momentum, $q_c/p_F \sim 0.7$-$1.0$, the bifurcation point lies exactly at the origin in $p$. However, as $q_c$ increases to greater values, it rapidly moves toward the Fermi momentum and leaves the Fermi sphere at $q_c \sim 1.14\, p_F$. This evolution is illustrated by panels (b)-(d) of figure 2, where the spectra $\xi(p)$ calculated for $q_c = 1.05\, p_F$, $1.14\, p_F$, and $1.2\, p_F$ are drawn. Figure 3 depicts the dependence $p_b(q_c)$ in the large interval $0 < q_c < 2\, p_F$ (upper panel), together with the dependence of the critical parameter $\kappa_b$ on the wave number $q_c$ (lower panel). Remarkably, the largest values of $\kappa_b$ are achieved just in the preferred range $q_c/p_F \sim 0.7$-$1.0$. The value $\kappa_\infty$ of $\kappa$ at which the border of the instability region $[p_1, p_2]$ reaches the Fermi momentum $p_F$ is also plotted in the lower panel of figure 3. The resulting curve lies below the curve of $\kappa_b(q_c)$ everywhere except for the point of contact at $q_c \simeq 1.14\, p_F$. The above results refer to the 3D problem. In the 2D case, analytical evaluation of the spectrum $\xi(p)$ is rather cumbersome, but its numerical computation is easily accomplished. We have calculated $\xi(p)$ for 2D liquid $^3$He under the assumption that the dominant term in the quasiparticle amplitude is governed by the static spin-spin susceptibility. Results are shown in figure 4. While the spectrum of 2D liquid $^3$He is found to differ quantitatively from that of 3D neutron matter, the shapes are qualitatively similar, as is the evolution with increasing $q_c$.
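To make the type of calculation described above concrete, the following is a minimal numerical sketch in Python. It works in units where $p_F = M = 1$, and the overall normalization of the induced-interaction term, as well as all parameter values, are chosen purely for illustration: they are assumptions, not the inputs used for the figures in this paper.

```python
import numpy as np
from scipy.integrate import quad

def f_amp(q, g=1.0, kappa=0.36, qc=0.9):
    """Model Landau amplitude of equation (5); parameter values are illustrative."""
    return g / (kappa**2 + (q**2 / qc**2 - 1.0)**2)

def dxi_dp(p, pF=1.0, M=1.0, norm=0.05, **kw):
    """Radial group velocity from the Landau relation (4) with n = n_F.
    The p' integration collapses onto the Fermi surface, leaving an angular
    integral; 'norm' lumps together phase-space and degeneracy factors (assumed)."""
    def integrand(c):                       # c = cos(angle between p and p')
        q = np.sqrt(p**2 + pF**2 - 2.0 * p * pF * c)
        return f_amp(q, **kw) * c
    ang, _ = quad(integrand, -1.0, 1.0)
    return p / M - norm * pF**2 * ang

def xi(p, pF=1.0, **kw):
    """Spectrum measured from the chemical potential: xi(p_F) = 0 by construction."""
    val, _ = quad(lambda x: dxi_dp(x, pF=pF, **kw), pF, p)
    return val

# Crude sign-change scan for roots of xi(p) = 0 (including the trivial one near p_F).
ps = np.linspace(0.05, 1.5, 60)
vals = np.array([xi(p) for p in ps])
roots = ps[:-1][np.sign(vals[:-1]) != np.sign(vals[1:])]
print("approximate roots of xi(p) = 0:", roots)
```

Repeating such a scan while lowering $\kappa$ shows how extra sign changes of $\xi(p)$ (the roots $p_1$, $p_2$) emerge below a threshold value, which is the bifurcation discussed in the text.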
We infer from these two sets of results that in the case $q_c \lesssim p_F$, the Landau state becomes unstable prior to the second-order phase transition itself. As will be seen, this is a generic feature. On the other hand, particulars of the alteration of the Landau state will depend on the parameters that specify the amplitude $f$. To illustrate the general situation, we focus on a phase transition associated with the spontaneous generation of density waves in dense neutron matter or the dilute electron gas, in both of which the critical wave number $q_c$ is close to $2p_F$. In this case, the scattering amplitude $f$ has the same form as (5), but the sign of $g$ is negative [11]. The spectrum $\xi(p)$ of the 3D electron gas, calculated for a critical momentum $q_c = 1.95\, p_F$ at three values of the parameter $\kappa$, is drawn in panel (a) of figure 5. The solid line shows the spectrum at $\kappa \simeq 0.5015$, for which equation (3) still has only the single root at the Fermi momentum $p_F$.

Figure 5. Electron spectra $\xi(p)$ in 3D (measured in units of $\varepsilon^0_F$), as calculated for $q_c = 1.95\, p_F$ (panel (a)), $q_c = 2.00\, p_F$ (panel (b)), and $q_c = 2.05\, p_F$ (panel (c)). The corresponding values of the parameter $\kappa$ are indicated near the curves. In each panel, the solid line traces the spectrum before the instability point is attained ($\kappa > \kappa_b$), and the dotted line shows that at $\kappa < \kappa_b$. In panels (a) and (c), the dotted line indicates the spectrum for $\kappa = \kappa_\infty$, at which the instability region reaches the Fermi surface.

The long-dashed line depicts $\xi(p)$ at $\kappa = \kappa_b \simeq 0.5005$. As seen, the bifurcation point $p_b$ appears close to the Fermi momentum $p_F$. Also shown is the case when the bifurcation point reaches $p_F$: the short-dashed line traces the sp spectrum at $\kappa_\infty \simeq 0.4987$, where the effective mass becomes infinite. This result was first obtained in reference [12]. The relevant plots for $q_c = 2.0\, p_F$ are displayed in panel (b). The solid line shows the spectrum at $\kappa \simeq 0.4800 > \kappa_b$. For this choice of $q_c$, the bifurcation point $p_b$ appears exactly at the Fermi surface when $\kappa_b \simeq 0.4794$, as indicated by the long-dashed line. Since the effective mass goes to infinity, $\kappa_\infty$ and $\kappa_b$ coincide. The short-dashed line corresponds to a case beyond the critical point, with $\kappa \simeq 0.4780$. In all three cases, the spectrum has a cubic-like shape as a function of $p - p_F$ in the vicinity of the Fermi momentum. The spectra for $q_c = 2.05\, p_F$ are shown in panel (c).

Competition between different rearrangement scenarios

We now turn the discussion to the proposed scenarios for alteration of the Landau state beyond the limit of its stability, assuming that the difference $|\rho - \rho_b|$ is much smaller than $\rho$.

Ignoring the feedback effects of rearrangement: Lifshitz-bubble formation

In the pion-condensation example where $q_c \lesssim p_F$, we have seen that new roots of equation (3) arise quite far from the Fermi momentum $p_F$. As was shown in reference [18], the basic rearrangement mechanism transforming the Landau state in this case involves the formation of some number of Lifshitz bubbles. The quasiparticle occupation numbers $n(p)$ remain integral at 0 or 1, but the Fermi surface becomes multi-connected [6,20].

Figure: $n(p)$ obtained for spin-isospin fluctuations, $q_c = 0.9\, p_F$ (panel (a)), $q_c = 1.1\, p_F$ (panel (b)), and $q_c = 1.2\, p_F$ (panel (c)).

For all three parameter choices, the density $\rho$ is slightly above the critical value, and $n(p)$ exhibits a single LB, the position of which strongly depends on $q_c$.
The bubble is located at the origin for q_c = 0.9 p_F, at p ∼ 0.7 p_F for q_c = 1.1 p_F, and mostly outside the original Fermi sphere for q_c = 1.2 p_F. The bubble is small in cases (a) and (b), and the net disturbance relative to the original filled Fermi sea is small in all three cases. As the density increases, the LB moves and multiplies. This behavior is demonstrated in figure 9, which shows the phase diagram of neutron matter in the (q_c, κ) plane. The Landau state with n(p) = n_F(p) occupies the white region of the diagram (labeled FL in the figure). The LB phases populate the shaded part of the plane, which is separated from the FL region by the curve κ_b(q_c) (see figure 3). We shall not delve deeply into the "zoology" of the LB world, instead classifying the LB phases simply by the number i of sheets of the Fermi surface.

Formation of Lifshitz bubbles is by no means the only kind of rearrangement the Fermi surface can experience as a result of the violation of stability condition (1). If the bifurcation point in equation (3) is situated close to the Fermi momentum p_F, then a new rearrangement scenario, fermion condensation [8,9,10,11], comes into play. Its salient features are apparent from the basic equation

δE/δn(p) = µ ,   p ∈ C .   (10)

This equation determines a new quasiparticle distribution n_0(p) that differs from the Fermi distribution n_F(p) within the region C, but coincides with it outside. In contrast to the Lifshitz-bubble phases, the rearranged distribution n_0(p ∈ C) appears to be a continuous function of p, with values lying between 0 and 1. Since its l.h.s. is nothing but the quasiparticle energy ε(p), the condition (10) implies the presence of a completely flat portion of the spectrum ξ(p). This plateau in ξ(p) identifies the fermion condensate (FC), i.e., the subsystem of quasiparticles with energy pinned to the chemical potential. As a consequence of this behavior, the density of states ρ(ε) acquires an infinite term at ε = 0, as in a Bose liquid. It must be kept in mind, however, that fermion condensation is in actuality an intermediate stage, since its inherent degeneracy must somehow be lifted. The analysis of this process is beyond the scope of the present article; a detailed treatment may be found in reference [11].

Equation (10) can be rewritten in explicit form by employing the well-known Landau formula for the variation of the ground-state energy E under the variation δn(p) = n(p) − n_F(p) of the Landau quasiparticle momentum distribution n_F(p),

δE = ∫ ξ(p; n_F) δn(p) dυ + (1/2) ∫∫ f(p, p′) δn(p) δn(p′) dυ dυ′ ,   (12)

where f is the Landau amplitude entering equation (4) and dυ denotes the momentum-space volume element. Insertion of this formula into condition (10) leads to the following equation for determining the new momentum distribution n_0(p):

ξ(p; n_F) + ∫ f(p, p′) [n_0(p′) − n_F(p′)] dυ′ = 0 ,   p ∈ C .   (13)

Solutions of this equation can be assigned an order parameter η, taken as the ratio of the FC density to the total density ρ. Nontrivial solutions can arise beyond the point where the effective mass M* changes its sign. However, as we know from figure 7, Lifshitz bubbles already exist at this point. Thus, in the model adopted, LB states make their appearance prior to the formation of a fermion condensate. To elucidate the situation, we may exploit the fact that in the region adjacent to the Fermi momentum p_F, the group velocity dξ/dp has essentially a parabolic shape. Defining a new variable y = (p − p_F)/p_F, we can write

ξ(y; n_F) = p_F² A [ (y − y_m)³/3 + b y + y_m³/3 ] .   (15)

The three parameters A, y_m and b specifying the spectrum ξ(p) depend on the parameter κ appearing in the model form (5) for the Landau amplitude f.
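Since the argument that follows turns on how many zeroes ξ(y) possesses, a small numerical check of this parametrized spectrum is useful. The snippet assumes the cubic form obtained by integrating the parabolic group velocity dξ/dy ∝ (y − y_m)² + b with ξ(0) = 0; this reconstruction is consistent with the three roots y_{1,2} = 0, y_3 = 3y_m quoted in the next paragraph, but the overall normalization and the value of y_m used here are illustrative assumptions.

```python
import numpy as np

def xi(y, ym, b, A=1.0):
    """Spectrum from integrating dxi/dy = A[(y - ym)^2 + b] with xi(0) = 0
    (assumed reconstruction of equation (15), up to the overall factor A)."""
    return (A / 3.0) * y * (y**2 - 3.0 * ym * y + 3.0 * (ym**2 + b))

def extra_zeros(ym, b):
    """Zeros of xi(y) besides y = 0; they are real only when b <= -ym**2 / 4."""
    disc = 9.0 * ym**2 - 12.0 * (ym**2 + b)
    if disc < 0.0:
        return []
    r = np.sqrt(disc)
    return [(3.0 * ym - r) / 2.0, (3.0 * ym + r) / 2.0]

ym = 0.05                                       # illustrative velocity-minimum position
for b in (-0.2 * ym**2, -0.3 * ym**2, -ym**2):  # stable / unstable / kappa = kappa_inf
    print(f"b = {b:+.6f}: extra zeros {extra_zeros(ym, b)}")
# At b = -ym**2 (the point of diverging effective mass) the extra zeros are 0 and
# 3*ym, reproducing the roots y_{1,2} = 0, y_3 = 3*ym discussed below.
```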
We observe that the parameter b must be negative in the vicinity of the Fermi surface. At the point κ = κ_∞ where the effective mass diverges, i.e. (dξ/dy)_F = 0, the parameters y_m and b are connected by the relation

b(κ_∞) = −y_m²(κ_∞) .   (17)

On the other hand, the equation ξ(y) = 0, with ξ(y) given by the formula (15), has the single root y = 0 for those κ values at which the combination s_LB of relation (18), proportional to b + y_m²/4, remains positive. Otherwise, the function ξ(y) acquires two additional zeroes, rendering the Landau state unstable. Setting κ = κ_∞ in equation (15) and appealing to relation (17), we infer that at the point where fermion condensation sets in, the equation ξ(y) = 0 already has three roots, namely y_{1,2} = 0 and y_3 = 3y_m(κ_∞). This confirms that the Landau state is unstable at the point of fermion condensation. Thus, we have demonstrated both numerically and analytically that in the oversimplified model under consideration, alteration of the Landau state due to formation of Lifshitz bubbles does indeed precede fermion condensation. This property was first documented in the numerical calculations of reference [20].

A simple model including feedback: the contest between fermion condensation and Lifshitz-bubble creation

To this point, no consideration has been given to the effect of feedback on the critical fluctuations as reflected in their basic parameter, the stiffness coefficient κ² entering the interaction function f(q) of equation (5). We now address this issue. Our analysis shows that the impact of Lifshitz-bubble formation on the critical fluctuations is insignificant. On the other hand, the feedback effect may be crucial in the case of fermion condensation, because of the infinite value taken by the density of states ρ(ε = 0) at T = 0. To provide a basis for analysis, we evaluate the gain in energy due to the emergence of a small FC fraction, assuming a trial FC function for the variation δn(y) = n_tr(p) − n_F(p) having the simplest form,

δ_tr n(y) = (1/2) sgn y ,   −λ < y < λ .

Particle number is conserved as long as the parameter λ is sufficiently small. With this trial function, we evaluate the first- and second-order variations, δ_tr E and δ_tr^(2) E, in the Landau formula (12). After inserting the trial function δ_tr n(p) along with the sp spectrum (15) into equation (12), simple manipulations yield explicit expressions for these two contributions, in which there appears the dimensionless group velocity v_g = A(3y_m² + b). Collecting terms, we arrive at a closed formula (23) for δ_tr E(λ), with v_g = s_LB + 9y_m²/4 and s_LB given by relation (18). As we have seen, the LB phase wins the contest with the Landau state if s_LB < 0. To uncover the conditions under which the FC state can prevail in the competition between the two phases, let us investigate the roots of the function δ_tr E(λ) given by equation (23). Quite evidently, if v_g > 0, or equivalently, if s_LB > −9y_m²/4, this function has no roots, and hence δ_tr E(λ) > 0. This result demonstrates that without accounting for feedback of the FC on the stiffness coefficient κ², and hence on v_g, the FC phase loses the contest. To proceed further, we make the simple assumption that v_g falls off linearly with increase of the FC density. Thus we write v_g(κ, λ) = v_0(κ) − λ v_1(κ), where v_1(κ) > 0 is a slowly varying function of κ. It is straightforward to show that the equation δ_tr E = 0 has two positive roots, between which δ_tr E(λ) < 0 holds, provided v_1² > 2(A + B)v_0. Therefore, for any λ within the range λ_1 < λ < λ_2, the variation δ_tr E(λ) is negative.
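The window λ_1 < λ < λ_2 just described can be checked with a few lines of arithmetic. The sketch below models the bracket of δ_tr E(λ), after removal of an overall positive factor, as the quadratic (A+B)λ²/2 − v_1 λ + v_0; this normalization is an assumption, chosen only so that the requirement of two positive roots reproduces the condition v_1² > 2(A+B)v_0 stated above, since the exact coefficients of equation (23) are not given in the text.

```python
import math

def fc_window(v0, v1, AB):
    """Roots of (AB/2)*lam**2 - v1*lam + v0 = 0 (assumed reduced form of eq. (23)).
    Two positive roots, i.e. a window where delta_tr E < 0, exist iff
    v1**2 > 2*AB*v0 with v0, v1 > 0 -- the condition quoted in the text."""
    disc = v1**2 - 2.0 * AB * v0
    if v0 <= 0.0 or v1 <= 0.0 or disc <= 0.0:
        return None
    return ((v1 - math.sqrt(disc)) / AB, (v1 + math.sqrt(disc)) / AB)

# Near the point of fermion condensation v0 -> 0, so the window always opens:
for v0 in (0.05, 0.02, 0.002):
    print(f"v0 = {v0}: window = {fc_window(v0, v1=0.3, AB=1.0)}")
```

For the first value the discriminant is negative and no window exists; as v0 decreases, the lower root λ_1 moves toward zero while λ_2 stays finite, which is the behavior invoked in the next paragraph.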
Since the true value of δ_0 E, calculated with the true function n_0(p) from equation (13), should lie lower, we infer that fermion condensation is energetically preferred over the Landau state, at least in the case that v_1² > 2(A + B)v_0. This inequality is always satisfied close to the point of fermion condensation, where, according to equation (17), v_0 vanishes. Since both of the roots λ_{1,2} are strictly positive, fermion condensation is predicted to be a weak first-order phase transition. In deciding the competition between the FC and LB phases, it is instructive to focus on the case of small positive s_LB(κ), for which LB formation is still forbidden. The input parameters may be chosen so as to locate the minimum of dξ/dp not far from the Fermi surface, which implies a sufficiently small value of y_m. But at the point where s_LB = 0, we have v_0 = 9y_m²/4. Hence, if y_m is sufficiently small, both of the roots λ_1, λ_2 of equation (24) are real, and δ_tr E(λ) is negative in the interval between them. We then conclude that for s_LB → 0+, fermion condensation is allowed, while Lifshitz-bubble formation is forbidden.

(Figure 10 caption: Energies (per particle) of the trial state δ_tr E (solid line) and of the LB state δ_LB E (dashed line), calculated for the 3D electron gas with q_c = 2 p_F and κ_0 = 0.4820, 0.4810, 0.4794, and 0.4790 (ordered from top to bottom). The energies, measured in units of the Fermi energy ε_F^0, are plotted versus the order parameter η. Left panels: feedback off (κ_1 = 0). Right panels: feedback on (κ_1 = 0.1).)

Numerical illustration

The foregoing model analysis of the role of feedback in the competition between fermion condensation and Lifshitz-bubble formation can be illustrated by numerical calculation of the variation δE of the ground-state energy corresponding to chosen variations δn(p) of the quasiparticle distribution away from the Fermi distribution n_F(p). (The same exercise will serve to demonstrate that the parameter α appearing in the second-order energy variation (22) is indeed of order unity.) We compare the energy variation corresponding to the FC trial variation δ_tr n(p) with the energy variation δ_LB E associated with the LB phase. In figure 10, the energy shifts δ_tr E and δ_LB E, evaluated for the 3D electron gas with q_c = 2 p_F, are drawn as functions of the order parameter η, taken as the relative phase volume of the region in momentum space within which the quasiparticle distribution is rearranged. The left panels show the results obtained ignoring the suppression of the stiffness coefficient κ² due to formation of the FC. In this case, it is seen that both the FC trial state and the LB state give lower energy than the Landau state at κ < κ_b, but the LB state has the deeper minimum. The feedback of the quasiparticle rearrangement on the charge fluctuations strongly alters the competitive balance between the LB and trial FC states. For the trial FC state, the feedback effect is included in the same manner as detailed above. In particular, we assert a linear dependence on η of the term κ² in the denominator of the amplitude (5), in the form κ²(η) = κ_0² − η κ_1². To be definite, we set κ_1 = 0.1. For the LB state, feedback is unimportant, since the density of states ρ(ε) receives no dramatic enhancement in this rearrangement scenario. The right panels in the figure demonstrate the role of feedback in the competition between the three competing states.
We observe that the plots of δ_tr E(η) differ markedly from their counterparts in the left panels, which represent the feedback-off situation. In accordance with the analysis of the previous section, a negative minimum of the curve δ_tr E first appears at a value of κ below the critical value κ_c. Beyond κ_c (right bottom panel), this minimum is lower than that of the LB curve by two orders of magnitude. Therefore, the state possessing the true FC, whose energy is necessarily below that of the trial FC state, clearly wins the contest with the LB phase, and the transition from the Landau state to the FC state is of first order. Taking the feedback into account changes the phase diagram of neutron matter: the FC wins the contest with the LB states in the part of the non-Landau area of the (q_c, κ) phase diagram that lies between the two dashed lines in figure 9. We estimated these borders using the same parameter κ_1 = 0.1 as for the 3D electron gas.

Conclusions

Based on standard relations of the Landau theory of Fermi liquids, we have explored the properties of mechanisms that may force a rearrangement of the Fermi surface of a homogeneous system at zero temperature. It is found that in advance of a second-order phase transition to a state with long-range order induced by the softening of the spectrum of critical fluctuations, there arise additional, nontrivial roots of the equation ε(p) = µ, signaling an instability of the Landau state. The consequent metamorphosis of the quasiparticle spectrum has been traced to a divergence, at the second-order transition point, of the leading term in the quasiparticle amplitude, which is proportional to the pertinent static susceptibility. We have clarified the competitive status of two scenarios for alteration of the Landau state, Lifshitz-bubble formation and fermion condensation. In general, and in particular for the case of fermion condensation, it must be expected that the rearrangement of the quasiparticle momentum distribution will exert an influence on the implicated collective degree of freedom. This feedback effect has been taken into account through a simple model in which the stiffness coefficient depends linearly on the FC density. Without feedback, Lifshitz-bubble formation precedes fermion condensation. However, the introduction of feedback reverses this picture: as the density increases, the first-order phase transition to the state containing a fermion condensate takes place before bubble formation becomes the favored state.
2014-10-01T00:00:00.000Z
2004-02-18T00:00:00.000
{ "year": 2004, "sha1": "1ad8cc16231a66537a1c7f3ac009114b53c73af2", "oa_license": null, "oa_url": "http://arxiv.org/pdf/cond-mat/0402481", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "1ad8cc16231a66537a1c7f3ac009114b53c73af2", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
230570177
pes2o/s2orc
v3-fos-license
EFFECT THE UNDERSTANDING OF TAXATION, TAX SANCTIONS AND TAXPAYER AWARENESS OF TAXPAYER COMPLIANCE (RESEARCH ON TAXPAYERS OF INDIVIDUAL ENTREPRENEURS IN TANGERANG REGION) The purpose of this research are to examine how much influence the understanding of taxation, tax sanctions and taxpayer awareness of tax compliance. The population in this research are the taxpayers of individual entrepreneurs in the Tangerang Region, then the samples are drawn using a method simple random sampling. Using quantitative and analytic statistics method for the analytical method that are multiple linear regression analysis method. The results showed that taxpayer understanding and taxpayer awareness affected taxpayer compliance while tax sanctions did not affect the compliance of taxpayers of individual entrepreneurs in the Tangerang Region. The data analysis technique used in this research was SPSS v.23. INTRODUCTION Taxes are the main source of state revenue used for government's finance spending and national development. Listed in the Indonesia State Budget (APBN) where state revenue from the tax sector is the most income. The more the government used spend for national development, more demanded increasing in state revenues (Putut, 2012). Taxpayer compliance is considering an important aspect in Indonesian taxation system that adopts a self-assessment system which is absolutely gives the taxpayer the confidence to calculate in process, paying and reporting their obligations. Because of that, the correctness of tax payments depending on taxpayer compliance. So that tax compliance is the most important issue in Indonesia. If the taxpayer is not obedience, it can create a desire to avoid and neglect tax obligations. The phenomenon in this research are the lack of knowledge of taxpayers about the tax regulations that makes many taxpayers who have not fulfilled their obligations as taxpayers and understand the benefits of tax revenue. One of the factors influencing the low compliance of taxpayers is the level of understanding which is one of the potential factors for the government to fulfill the increasing compliance in tax obligations. The understanding of tax can also influence taxpayers to comply. Taxpayers may not be able to obey if the taxpayer does not have the understanding that related to compliance. Otherwise, taxpayers can comply if the taxpayer have the understanding for what must be done related to tax obligations. Another factor that can affect taxpayer compliance is tax sanctions that will be imposed on non-compliant taxpayers. Tax sanctions are guarantees that the provisions of tax legislation (tax norms) will be obeyed/followed (Mardiasmo, 2018). For the tax regulations to be obeyed, there must be tax penalties for offenders. Taxpayers will fulfill their tax obligations if they consider that taxation sanctions cause more loss from their profit. Another factor that can increase taxpayer compliance is by raising awareness in paying taxes. The formulation of the problem in this research are the understanding of taxation, tax sanctions, and the awareness of taxpayers that affects the tax compliance. RESULTS AND DISCUSSION 1) T-Test (Partial Test) Based on the results of the equation above, the obtained results are partially understanding taxation has a significant effect on taxpayer compliance and tax sanctions do not significantly influence compliance taxpayers while taxpayer awareness have a significant effect on taxpayer compliance. 
2) F-Test (Simultaneous Test) Based on the equation results above, the estimation results produce a Prob (F Statistic) of 0,000. Value Using α = 0.05 indicates the understanding taxation, tax sanctions and awareness of taxpayers simultaneously or together has a significant effect on individual taxpayer compliance (0,000 <0.05). Based on the proposed hypothesis, Ho rejected, which means statistically the understanding of taxation, tax sanctions and awareness of taxpayers simultaneously or together has a significant effect on individual taxpayer compliance (α = 0.05). 3) Determination Coefficient Test (R 2 ) Based on the equation above, the coefficient of determination is equal to 0.398 or equal to 39.80%. This means that the contributions of the understanding of taxation, tax sanctions and awareness of taxpayers towards personal taxpayer compliance is 39.80%. While the remaining 60.20% is contributed by other variables that not discussed in this research. Hypothesis testing that has been done, it can be put forward some discussion below: 1) The understanding of taxation has a significant effect on compliance of individual taxpayers The results of this research are matching with research conducted by Nerissa Arviana Soelistijo (2014) which has concluded that the understanding of taxpayers about tax regulations has a positive and significant effect taxpayer compliance. Shows the more the understanding of taxation for tax regulations, lead the increasing of the taxpayer compliance. 2) Tax sanctions do not significantly influence the compliance of individual taxpayers. The results of this research matching with research conducted by Handayani, Ucik, and Laily (2017) produce the same conclusion where tax penalties do not affect tax compliance. The results of this research matching with the research conducted by Tia (2016) revealed that taxpayer awareness has a significant effect on taxpayer compliance. CONCLUSIONS AND SUGGESTIONS Conclusions Based on the research's result with multiple regression analysis, the conclusions of this research are: a. The understanding of taxation has a significant effect on individual taxpayer compliance. The higher the level of understanding of tax provisions, the better the implementation of tax obligations as for increasing compliance. This is reasonable because often taxpayers do not carry out their tax obligations properly not because of a desire to disobey, but the complexity of taxation sometimes forces them to disobey (passive tax resistance). b. Tax sanctions have no significant effect on personal taxpayer compliance. This shows that the imposition of penalties on taxpayers who violate tax regulations in the form of tax sanctions does not provide a deterrent effect on the taxpayer. c. Awareness from taxpayers has a significant effect on taxpayer compliance. If the taxpayers has high awareness which come from the motivation to pay taxes, then the willingness to pay taxes will be more affected and will increase the state income from taxes. Suggestions In this research there are still some shortcomings, and I suggest: a. The needs for intense socialization of the tax authorities to individual taxpayers in the City of Tangerang related to the latest tax regulations, so that taxpayers always update the latest tax regulations and carry out their tax obligations in accordance with applicable tax laws. b. Future research are advised to use more than 4 independent variables to examine the effect on individual taxpayer compliance. c. 
Future research is recommended to use a larger number of respondents or taxpayers as research samples.
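The regression analysis reported above (per-variable t statistics, a simultaneous F test, and the coefficient of determination, estimated in SPSS v.23) can be reproduced in outline with any statistics package. The sketch below is a Python equivalent rather than the authors' SPSS workflow; the file name and column names are hypothetical placeholders, since the questionnaire data are not included here.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical survey data: one row per individual-entrepreneur taxpayer,
# with scale scores for each construct.
df = pd.read_csv("taxpayer_survey.csv")

X = sm.add_constant(df[["understanding", "sanctions", "awareness"]])
y = df["compliance"]

model = sm.OLS(y, X).fit()
print(model.summary())                     # coefficients with partial t-tests (T-Test)
print("F-test p-value:", model.f_pvalue)   # simultaneous significance (F-Test)
print("R-squared:", model.rsquared)        # coefficient of determination (R^2)
```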
2020-12-31T09:04:39.592Z
2020-12-10T00:00:00.000
{ "year": 2020, "sha1": "9e6a8af0bb1a201f828f35142e9c7226bd31ac6f", "oa_license": "CCBY", "oa_url": "https://dinastipub.org/DIJDBM/article/download/638/414", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "8431f1566b210918a35f8aec5ad9be58ce77aa7c", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [ "Business" ] }
10950038
pes2o/s2orc
v3-fos-license
Antimicrobial and cytotoxicity properties of the crude extracts and fractions of Premna resinosa (Hochst.) Schauer (Compositae): Kenyan traditional medicinal plant Background Premna resinosa (Hochst.) Schauer also called “mukarakara” in Mbeere community of Kenya is used in the management of respiratory illness. In this study we investigated antituberculous, antifungal, antibacterial activities including cytotoxicity and phytochemical constituents of this plant. Methods Antibacterial and antifungal activities were investigated by disc diffusion and micro dilution techniques. Antituberculous activity was investigated using BACTEC MGIT 960 system while cytotoxicity was analyzed by MTT assay on Vero cells (Methanolic crude extract) and HEp-2 cells (fractions). Finally, phytochemicals were profiled using standard procedures. Results P. resinosa had high antituberculous activity with a MIC of <6.25 μg/ml in ethyl acetate fraction. The antibacterial activity was high and broad spectrum, inhibiting both Gram positive and Gram negative bacteria. Dichloromethane fraction had the best antibacterial MIC of 31.25 μg/ml against Methicillin-resistant S. aureus while Ethyl acetate fraction had the highest zone of inhibition of 22.3 ± 0.3 against S. aureus. Its effects on tested fungi were moderate with petro ether fraction giving an inhibition of 10.3 ± 0.3 on C. albicans. The crude extract and two fractions (petro ether and methanol) were not within the acceptable toxicity limits, however dichloromethane and ethyl acetate fractions that exhibited higher activity were within the acceptable toxicity limit (CC50 < 90). The activity can to some extent be associated to alkaloids, flavonoids, terpenoids, anthraquinones and phenols detected in this plant extracts. Conclusion Our findings demonstrate that P. resinosa has high selective potential as a source of novel lead for antituberculous, antibacterial and antifungal drugs. Of particular relevance is high activity against MRSA, S. aureus, C. albicans and MTB which are great public health challenge due to drug resistance development and as major sources of community and hospital based infections. Background Herbal medicine, also known as practice of herbalism, or botanical medicine is the use of plants for their therapeutic and medicinal values. Historically, all medicines have an herbal origin with statistics showing that over 80 % of chemical drug compounds originated from natural material [1]. A survey by UNCTAD has shown that 33 % of drugs produced by industrialized countries are plant derived while 60 % have a natural origin [2]. Traditional herbal treatment remains the first option for patients from resource poor countries [3]. World health organization (WHO) estimates that 80 % of the world population presently use herbal medicine for some aspects of their primary health care [4]. This shift to herbal medication can be attributed to the following factors: 1. the low cost of herbal drugs endearing them with the poor mass of developing world; 2. the 'green' movement in the developed countries that advocates on the inherent safety and desirability of natural products; 3. the individualistic philosophy of western society that encourages self-medication, with many people preferring to treat themselves with phytomedicines [5,6]. In developing countries like Kenya, there is an increasing attempt to incorporate traditional medicine in health care systems [7]. 
This is after WHO resolution in 2003 (WHA56.31) recommended the inclusion of traditional healers in management of health care. This move was to help countries document traditional medicines and remedies in their countries and to ensure their safety and efficacies is established [8]. However, it is the duty of scientists to ensure improvement and scientific justification of herbal remedies so as to allow their incorporation into health care systems as an alternative to conventional medicine. Premna resinosa (Hochst.) Schauer, locally known as "Mukarakara" in Mbeere community of Embu County in Kenya, is a shrub, 1.5 -3 m tall comprising of whitish branches and stem. The twigs have anti resistance properties, while roots are used to make perfumes and as medicine for management of respiratory related illnesses [9,10]. However, to the best of our knowledge, there is no scientific report documented on the antituberculous properties as well as cytotoxicity properties of this plant. Lack of such information forms a major limitation in the consideration of the use of traditional herbal remedies mutually with or as an affordable alternative to conventional drugs [11]. In addition, knowledge of phytochemical constituent of plants is desirable because it can serve in identification of novel phytocompounds which can be used either in their unmodified form, as semi-synthetic compounds or as drug templates. In this study, we sought to interrogate the antimicrobial activity of water (aqueous) extract, methanol crude extract and various organic solvents fractions of root extract from P. resinosa as well as determine their safety by assaying for their toxicity levels. Plant material The plant for this study was identified through ethnobotanical approach. The information of its use and preparation in Mbeere community, Kenya was gleaned from local herbalist and confirmed from documentation by Relay and Brokensha [9] in The Mbeere in Kenya (ii), Botanical identity and use. The plant has been used for management of respiratory related illnesses. This plant is not an endangered species and it was collected in open community field and therefore no prior permission was required. The location for collection was around 0°46'27.0"S 37°40'54.9"E; −0.774156, 37.681908 of GPS co-ordinates. The identity was also confirmed by a Botanist at Egerton University where voucher specimen number NSN11 was deposited and the name checked as acceptable from [12]. Plant Extract preparation Root samples were chopped into small pieces of 2-3 cm and air-dried in dark at room temperature (23 ± 2°C) to a constant weight. Using a mechanical grinder, the dried root specimens were ground to powder. The powder (50 g) was cold extracted in water with intermittent shaking to mimic the traditional local method of extraction and later lyophilized to obtain a dry powder. Another 50 g was macerated in 200 ml of methanol for 48 h. The extract was then filtered using a filter paper (whatmann 1) and the residue obtained further reextracted using similar amount of methanol. The two volumes of filtrate were pooled together and thereafter concentrated in vacuo using a rotary evaporator. Afterwards, the product was allowed to air dry and the yields recorded. Fractionation of powdered root part of P. resinosa was done using different solvents of increasing polarity. The root powder (50 g) was macerated in 200 ml of Petro ether with intermittent shaking for 48 h after which they were filtered using Whatman no 1 filter paper. 
The residue was further re-extracted using fresh solvent for 24 h and thereafter the filtrates pooled together. The resulting residue was air dried and further extracted with Dichloromethane followed by Ethyl acetate and lastly methanol using the same procedure carried out for Petroleum ether. Using a rotary evaporator, the solvent was removed from each filtrate under conditions of reduced temperature and pressure. The resulting dry extract was weighed and stored in air tight sample bottles at −20°C until next use. Disc diffusion test The antibacterial activity was assayed by disc diffusion method according to CLSI [13] and Mbaveng et al., [14] with slight modifications. Fresh inoculum was prepared by suspending activated colonies in physiological saline water (0.85 % NaCl). Using 0.5 McFarland turbidity standard, the bacteria and fungi suspensions were adjusted to 1.5 × 10 6 CFU/ml after which they were inoculated aseptically by swapping the surfaces of the Muller Hinton (MHA) plates and sabouraud dextrose agar (SDA) plates. Whatmann filter paper (No.1) discs of 6 mm diameter were made by punching the paper, and the blank discs sterilized in the hot air oven at 160°C for one hour. They were then impregnated with 10 μl of various stock extract solution. The methanolic and water crude extracts stock solution was at (1.0 g/ml). For fractions; petro ether, dichloromethane, and methanol fractions stock solutions were made at 500 μg/ml while ethyl acetate at 250 μg/ml. This afforded disc extract concentration of 1.0 × 10 4 μg/ disc for water and methanol crude extracts, 5 μg/disc for petro ether, dichloromethane, and methanol fractions and 2.5 μg/disc for ethyl acetate. Three standard drugs were used as positive controls: Oxacillin 10 μg/disc (Oxoid Ltd, Tokyo-Japan) and Gentamycin 10 μg/disc (Oxoid Ltd, Tokyo-Japan) for Gram positive and Gram negative bacteria respectively. Nystatin 100 μg/disc (Oxoid Ltd, Tokyo-Japan) was used as the standard drug for all fungi while discs loaded with 10 μl of DMSO was used as negative controls. The impregnated dry discs were carefully placed on the agar plates at equidistance points using a sterile forceps. A positive control as well as a negative control was incorporated in each plate and the plates incubated at 4°C for 2 h so as to allow the extract to diffuse into the media after which they were incubated at 37°C for 18 h. Antimicrobial activity was determined by measuring the size of the inhibition zone to the nearest mm and the results recorded. Extracts fractions that gave an inhibition zone of more than 10 mm were considered to be active [13] and therefore their MIC (Minimum inhibitory concentration) and MBC (Minimum bactericidal concentration) determined [15]. Determination of MIC and MBC The MIC and MBC of the plant P. resinosa extracts was determined for all the organisms in triplicates using broth micro-dilution assay. The petro ether, dichloromethane, and methanol fractions stock solutions were made at 500 μg/ml while ethyl acetate at 250 μg/ml with DMSO. To 100 μl of nutrient broth agar in a sterile 96 well plate, 50 μl of varying plant concentration (petro ether, dichloromethane, and methanol fractions at 500 to 3.91 μg/ml while ethyl acetate at 250 to 1.95 μg/m) was added followed by 50 μl of test organisms previously diluted to equivalent of 0.5 McFarland standard. Addition of the test organisms was done in all the wells except for wells of column 11 which contained neat DMSO and broth, this served as control to check for purity. 
The adequacy of the media to support the growth of the test organism was evaluated by putting the broth and the test organism in wells of column 12. The plates were then covered with a sterile "cling-on" sealer and incubated for 24 h at 37°C. Bacterial growth was evaluated by addition of 40 μl of 0.2 mg/ml p-iodonitroterazolium chloride (INT, Sigma) to each well and incubated for 30 min. Growth of bacteria was detected by formation of a pink-red coloration while inhibition of growth was signaled by persistence of a clear coloration. The lowest concentration that exhibited color change was considered as the MIC. Minimum bactericidal concentration was determined by streaking a loopful of broth from wells that exhibited no color change onto sterile nutrient agar and sabouraud dextrose agar for bacteria and fungi respectively and thereafter incubated at 37°C for 24 h. The lowest concentration that exhibited no growth was considered as the MBC [16]. Antitubercular activity The test organism Mycobacterium tuberculosis H37Rv ATCC 27294 was sourced from the Kenya Medical Research Institute (KEMRI), Nairobi. Prior to its use, the Mycobacterium tuberculosis was revived on Lowenstein Jensen (LJ) slants for 14 days at 37°C following standard procedures [17,18]. The efficacy of the plant extracts against M. tuberculosis was carried out using the BACTEC MGIT 960 system (BD, New York-U.S.A). This is a fully automated, high volume, non-radiometric instrument that offers continuous monitoring of culture growth. The dry crude extract (water and methanolic) was first dissolved in DMSO to a final concentration of 1 g/ml for preliminary screening. Growth supplement (0.8 ml) containing a mixture of OADC-Oleic Acid, Bovine Albumen, Dextrose and Catalase was added to five 7 ml BBL™ MGIT™ tube labeled GC (growth control), STR (streptomycin), INH (isonaizid), RIF (rifampicin), EMB (ethambutol) to provide essential substrates for rapid growth of Mycobacteria. 100 μl of BBL™ MGIT™ SIRE (streptomycin, isonaizid, rifampicin, ethambutol) prepared aseptically according to the manufacturers' instruction was added to corresponding labeled BBL™ MGIT™ tube followed by addition of 0.5 ml of 1 % Mycobacterium suspension. Mycobacterium suspension was prepared by pipetting 0.1 ml Middlebrook 7H9 Broth containing Mycobacterium adjusted to 0.5 McFarland standard into 10 ml sterile saline aseptically. The BACTEC MGIT™ 960 system (BD, New York-U.S.A) was then loaded following the manufacturer's instructions and incubated at 37°C. Streptomycin at 1.0 μg/ml, isoniazid at 0.1 μg/ml, rifampicin at 1.0 μg/ml and ethambutol at 5.0 μg/ml served as the positive controls whereas DMSO was used as a negative control. The procedure was repeated using plant crude extracts at 1.0 g/ml in place of SIRE. The process was also repeated with petro ether, dichloromethane, ethyl acetate and methanol solvent fractions. The fractions were tested at concentrations ranging from 50 to 6.25 μg/ml (petro ether, dichloromethane and methanol) or 25 to 3.125 μg/ml (ethyl acetate) to determine the MIC. Cytotoxicity screening MTT [3-(4, 5-dimethylthiazol-2-yl)-2, 5-diphenyltetrazolium bromide] assay was used to determine the toxicity of the extracts obtained from the plant. This is a colorimetric assay hinged on the ability of mitochondrial enzyme (Succinate Dehydrogenase) to reduce yellow water soluble MTT to an insoluble colored substance, formazan, which is spectrophotometrically measurable. 
The level of formazan is directly proportional to the measure of cell viability because only metabolically active cells can reduce MTT. Test cell lines were Vero cells from African green Monkey Kidney cells (Cercopithecus aethiops epithelial cell line; ATCC CCL-81) against methanol crude extract and HEp-2 cells (human laryngeal carcinoma cell line ATCC CCL-23) against fractions that were used as test cells for this study. The test cells were grown in growth media comprising of 100 ml DMEM, 10 ml Fetal Bovine Serum (FBS), 1 ml Penstrep, 1 ml Amphotericin B, 1 ml L Glutamine. The test cells were incubated at 37°C in 5 % CO 2 until they attained confluency after which they were passaged by adding 2 ml of 0.25 % trypsin and further incubated at room temperature until the cell were detached. Growth media (6 ml) was introduced to the test cells to inactivate trypsin off action, while the cell crumps formed were broken gently by sucking and releasing the cell suspension using a pipette. 2 ml of the harvested cells were then transferred into a 50 ml vial and topped up to 50 ml mark using growth media. A cell suspension of 100 μl (1 × 10 5 cell/ml) was seeded into two rows of wells A-H in a 96-well micro-titer plate for one sample. The test cells were then incubated in 100 μl growth media at 37°C and 5 % CO 2 for 48 h to form a confluent monolayer. The growth medium was then aspirated off and replaced with 100 μl of maintenance medium comprising of 100 ml DMEM, 2 ml Fetal Bovine Serum (FBS), 1 ml Penstrep, 1 ml Amphotericin B, 1 ml L-Glutamate and 0.1 ml Gentamycin. Afterwards, cells were exposed to increasing concentrations of respective plant extracts (from 1.95 μg/ml to 500 μg/ml) and incubated at 37°C for 48 h. This was followed by a further incubation period of 4 h in 10 μl of 5 mg/ml MTT solution after aspirating off the plant extracts. This was followed by addition of 100 μl acidified isopropanol (0.04 N HCl in isopropanol). The well plate was gently shaken for 5 minutes to dissolve the formazan and then optical density measured using ELISA Scanning Multiwell Spectrophotometer (Multiskan Ex labssystems) at 562 nm and 690 nm as reference. Rows of cells containing medium without plant extracts were included to act as negative control. Cell viability (%) was calculated at each concentration as follows using the formula [19]. Phytochemical tests Phytochemical tests were done to determine the class of compounds present in the active fractions that could be responsible for activity and/or cytotoxicity. They were identified by characteristic colour changes based on standard procedures as described previously [8,[20][21][22][23]. The results were reported as (+) for presence, and (−) for absence. Alkaloids Six to eight drops of Dragendorf reagent was mixed with 2 ml of the extract. Formation of brownish-red precipitate indicated presence of alkaloids. The Dragendorf reagent was prepared by mixing two reagents: reagent 1 and reagent 2 in equal parts. Reagent 1 was made by dissolving 8.5 g of Bismuth subnitrate in a solution of 10 ml acetic acid and 40 ml of distilled water while as Reagent 2 was prepared by dissolving 8 g of potassium iodide in 20 ml of water [22,23]. Phenols Phenols were detected using ferric ferichloride which was prepared by dissolving 0.1 g of ferric ferichloride in 10 ml of water. Equal volumes (2 ml) of both ferric ferichoride and the plant extract were mixed. Formation of a violet-blue color or greenish color was evidence that phenols presences [22,23]. 
Formation of a blue-green ring or pink-purple coloration signified presences of terpenoids [22,23]. Flavonoids 5 ml of dilute aqueous ammonia solution was added to a portion of the aqueous filtrate of the plant extract, followed by concentrated sulphuric acid. A positive test result was confirmed by the formation of a yellow coloration that disappeared instantly [20,21]. Statistical analysis Ms Excel 2010 data sheets and Graphpad Prism version 6 were used to analyze the data. The data on cytotoxicity was expressed as a percentage of the untreated controls. CC 50 values, which is the concentration that kills 50 % of the test cells, was determined by Regression Analysis. A particular fraction's extract was considered cytotoxic if it had CC 50 of less than 90 μg/ml [24]. In addition, unpaired student's t-test was used to test for statistical significance in the differences between the treatments and the control in this study. A p value of less than 0.05 was considered to indicate statistical significance. Values were expressed as mean ± S.E.M. Results and discussion The information on use and preparation of P. resinosa plant in Mbeere community-Kenya was gleaned from tradipractitioners and herbalist and confirmed from documentation by Relay and Brokensha [9]. The plant has been used traditionally for management of respiratory related illnesses. Since the traditional preparation involved steeping the roots peels in water, alcoholic beverage, or chewing root peels, we first screened for antimicrobial activity using water and methanolic crude extract to mimic this traditional extraction mode. Although herbal practice in many places as well as in Mbeere community involves utilization of water as the main herbal extraction solvent, studies have shown that methanol organic solvent is much better and potent [5,25,26], a fact corroborated by antimicrobial results of the present study (Tables 1 and 2). This could be attributed to polarity of methanolic solvent that confers the ability to extract a variety of bioactive molecules ( Table 3). Polarity of the solvent also influences the qualitative and quantitative composition of the active compounds sequestered into herbal extract(s). This could partly explain the higher activity demonstrated by methanolic crude extract compared to water extract [5,[25][26][27]. The methanolic crude extract yield was 1 g (Table 4) and hence we began our antimicrobial screening with a high concentrations of 1.0 × 10 4 μg/disc. Both aqueous and methanol crude extracts had broad spectrum activity, inhibiting the growth of Gram positive, Gram negative bacteria and fungi at 1.0 × 10 4 μg/disc ( Table 1). The highest inhibition in water extract was 11.7 ± 2.0 for C. albicans while the highest inhibition in methanolic extract was 11.7 ± 0.3 for S. aureus. The activity against E. coli and C. albican in both extracts was not statistically different at p = 0.67 and p = 0.77 respectively. However methanolic crude extract had higher activity against S. aureus compared to water crude extract and the difference in activity between the two extracts was statistically significant (p = 0.02). Additionally, the positive control drugs in all cases had higher activity than the plant crude extract. We also did fractionation using organic solvents of increasing polarity. Each fraction was tested on a panel of microorganisms. The results varied with the extract fraction used for testing. This may suggest that the root part of P. 
resinosa contains several antibacterial and antifungal compounds of different polarities as supported by phytochemical studies (Table 5). Fractionation enhanced antibacterial activities in all tested cases compared to crude methanolic extracts (Table 6). For example, ethyl acetate fraction had a zone of inhibition of 22.3 ± 0.3 against S. aureus while the crude WT Water crude extract at 1 g/ml, MOH Methanol crude extract at 1 g/ml, SIRE Positive control of streptomycin at 1.0 μg/ml, isonaizid at 0.5 μg/ml, rifampicin at 1.0 μg/ml and ethambutol at 5.0 μg/ml, GC Growth control, NC Negative control of media treated with DMSO, R Resistant, S Sensitive methanolic extract had an inhibition zone of 11.7 ± 0.3 for the same organism. This may imply that, there is higher sequestration of active principle(s) at certain level of polarity explaining the high but varied antibacterial activity demonstrated by fractions from solvents of different polarities. It is also suggestive that, there is possibility of antagonism of various antibacterial active compounds when lumped together as in crude extracts [28], thus explaining for low antibacterial activity in crude extracts. This is therefore indicative that, fractions are the best candidates for the treatment of diseases associated with the tested microorganisms than crude extracts. Nevertheless, some extracts fails to have enhanced activity on fractionation as exhibited by antifungal activity in this study. This may imply either that, the antifungal principles act together in a synergistic manner and that is why crude methanolic extract had higher antifungal activity, or that, fractionation had dilution effect on the antifungal principle(s) thus explaining diminished antifungal activity with fractionation (Tables 1 and 6) [28]. The lowest MIC of 31.25 μg/ml was recorded in petro ether and methanolic fraction against S. aureus. The dichloromethane fraction had MIC of 31.25 μg/ml against methicillin resistant S. aureus (MRSA) and it was also cidal with MBC of 125 μg/ml. The latter case is more interesting considering that the MIC/MBC ratio is 4 suggesting the killing/cidal effect could be expected (Table 7). Additionally, out of 20 tested cases for MIC, 8 (40 %) had MIC of less than 100 μg/ml; the set threshold for plant extract [28]. Generally, activity against Gram positive bacteria was higher than Gram negative strains and fungi. This is in agreement with previous studies that plant extracts are more active against Gram positive bacteria than Gram negative bacteria. The higher sensitivity of Gram-positive bacteria could be attributed to their cell wall topology which has outer peptidoglycan layer which is not an effective permeability barrier as compared to the outer phospholipid membranes of Gram-negative bacteria [25,29,30]. Difference in sensibility was also evidence among tested strains in both crude and fraction extracts. This could be due to genetic differences between different strains. This proofs the necessity of antibiogram prior to prescription as a precautionary measure in mitigating drug resistance development [31]. We also investigated the antituberculous activity of crude extracts and fractions (Tables 2 and 8). Methanolic crude extract (at 1 g/ml) was highly active in inhibiting tubercle growth. We went further and investigated activity of organic solvent fractions with view of determining the MIC. Interestingly, all fractions had varying level of growth inhibition in a concentration dependent manner. 
The highest activity was by ethyl acetate fraction (MIC <6.25 μg/ml) and dichloromethane (MIC <12.5 μg/ml). The MIC for petro ether fraction was 25.0 μg/ml while that for methanolic fraction was 50.0 μg/ml. This depicts high potential of the extract fractions (especially ethyl acetate fraction considering the threshold MIC level of 100 μg/ml [28]) to be tapped for novel drug lead for management and/or treatment of TB. This is supported by our data showing that the level of inhibition by drug standards (SIRE) (GU-0) is same as that achieved by some of our tests samples (Table 8). But it is also important to stress that, our fractions were loaded with various bioactive compounds as shown in Table 5. It is possible that the active compound could be only a small proportion of the fractions' extract and maybe if purified, it could be even of lower amount than the standard drugs. Since in some instances people chew the roots of the plant P. resinosa for herbal treatment, we also sought to answer the question whether P. resinosa extracts are cytotoxic. The methanolic crude extract (CC 50 of 1.26 μg/ml), Petro ether fraction (CC 50 of 1.88 μg/ml) and methanolic fractions (CC 50 4.78 μg/ml) were all not within the acceptable toxicity limit (CC 50 > 90.00 μg/ml) ( Table 9). This can however be viewed as "a double edged sword". In one way, it is bad for a drug as it will be lethal to subjects. Nevertheless, structural modification can be undertaken with view of improving on their safety [19]. On the other hand, this can be good news since the fractions can be tapped as candidates for anti-cancerous drugs [5], especially now that their lethality was toward cancerous human cell lines. The observed bioactivity and/or cytotoxicity could be attributed to the array of phytochemicals that tested positive in the sample extracts. These included alkaloids, flavonoids, phenols, terpenoids and anthraquinones. Bioactive molecules are usually found accumulated as secondary metabolites in various parts of the plant and at different concentrations [32]. In this regard, the root is one of the major depository sites of such compounds making it a chief part for herbal bioprospecting. Flavonoids which tested positive in all samples except in petro ether fraction have general antibacterial activity. They have been shown to work by complexing and altering the conformation(s) of microbial proteins thus inactivating microbial enzymes and interfering with bacterial cell wall adhesins. In a similar fashion as terpenoids (also present in all samples except petro ether fraction), they also mediate their antibacterial activity through microbial membrane [32]. Other studies have associated flavonoids with antituberculous activity and it is believed that their mode of action is by inhibiting various pathways in Mycobacteria including de novo biosynthesis of fatty acid, inhibiting mycolic acid biosynthesis, proteosome inhibition, topoisomerase inhibition, inhibition of phosphatidylinositol 3-kinase, induction of cell cycle arrests, accumulation of p53 or enhanced expression of c-fos and c-myc genes [18,33]. In addition, recent studies done by [34] indicated that phenolic compounds have antimycobacterial properties although their mode of action is not well known. Similarly, studies have established that alkaloids have both antibacterial and antifungal activities, especially against S. aureus and C. albicans [35]. The results of this study provide to some extent scientific rationalization of the possible therapeutic use of P. 
resinosa plant in traditional medicine, and also confirms the impact of ethnopharmacological approach when investigating plants for their bioactivity [31]. Conclusion A major outcome of the current study is the identification of the ethyl acetate and dichloromethane fractions which yielded the best antituberculous activity (MIC of <12.5 and <6.25 μg/ml respectively) as well as the highest antibacterial activity (with zones of inhibition of 19.3 ± 0.3 and 22.3 ± 0.3 respectively and the lowest antibacterial MIC of 31.25 μg/ml), all within the acceptable toxicity limit (CC50 > 90 μg/ml). Our findings also demonstrates for the first time and to the best of our knowledge that, P. resinosa plant has very high selective potential as a source of novel lead for antituberculous, antibacterial and antifungal drugs. Of particular importance is its high activity against MRSA, S. aureus, C. albicans and MTB which are currently posing great public health challenge due to drug resistance development and as major sources of community and hospital based infections. Indeed, more work is needed to identify the specific active ingredients, with a view of deciphering the mode(s) of action of the plant compounds. Additionally, the information of cytotoxicity opens a new frontier to pursue in search of novel antineoplastic compounds.
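Two computational steps of the cytotoxicity assay described in the Methods are cited but not written out: the percentage-viability formula (reference [19]) and the regression analysis used to obtain CC50. The sketch below assumes the commonly used ratio of background-corrected optical densities for viability and a four-parameter logistic fit as a stand-in for that regression; the OD readings are invented for illustration and are not data from this study.

```python
import numpy as np
from scipy.optimize import curve_fit

def viability(od_treated, od_control, od_blank=0.0):
    """Assumed form: percentage of metabolically active cells relative to the
    untreated controls, from background-corrected optical densities."""
    return (od_treated - od_blank) / (od_control - od_blank) * 100.0

def logistic4(conc, top, bottom, cc50, hill):
    """Four-parameter logistic dose-response curve; cc50 is the concentration at
    which viability lies halfway between the top and bottom plateaus."""
    return bottom + (top - bottom) / (1.0 + (conc / cc50) ** hill)

conc = np.array([1.95, 3.91, 7.81, 15.63, 31.25, 62.5, 125.0, 250.0, 500.0])  # ug/ml
od_treated = np.array([0.82, 0.80, 0.74, 0.66, 0.52, 0.38, 0.22, 0.12, 0.08]) # illustrative
od_control = 0.85

v = viability(od_treated, od_control)
popt, _ = curve_fit(logistic4, conc, v, p0=[100.0, 5.0, 50.0, 1.0],
                    bounds=([0.0, 0.0, 1.0, 0.1], [120.0, 50.0, 1000.0, 5.0]))
print(f"estimated CC50 ~ {popt[2]:.1f} ug/ml")
```

An extract would then be classed as cytotoxic when the fitted CC50 falls below the 90 µg/ml threshold adopted in the Statistical analysis section.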
2018-04-03T05:58:44.700Z
2015-08-25T00:00:00.000
{ "year": 2015, "sha1": "30f79fdf2e401b82e554a070cc1f917f1760463f", "oa_license": "CCBY", "oa_url": "https://bmccomplementalternmed.biomedcentral.com/track/pdf/10.1186/s12906-015-0811-4", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "30f79fdf2e401b82e554a070cc1f917f1760463f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
218931538
pes2o/s2orc
v3-fos-license
Observations on the oviposition of Blythia reticulata (Blyth, 1854) with new distributional records from Mizoram State, NE India The poorly known semi-fossorial snake Blythia reticulata is a small, oviparous, worm-eating species found in northeastern India and neighboring countries. Here we report on multiple new distribution localities that extend the known geographic range of the species. In addition, we provide new information on the reproductive biology of the species based on egg-laying behavior data from a captive gravid B. reticulata from Mizoram. The simultaneous presence of a second clutch of eight eggs in the oviduct of the female indicates the capacity of the species to exhibit multiple matings and egg clutches during a single reproductive season. Introduction Blythia Theobald 1868 is a colubrid snake genus comprising two extant species Blythia reticulata (Blyth, 1854) and Blythia hmuifang (Vogel, Lalremsanga & Vanlalhrima, 2017). B. reticulata, commonly known as Blyth's Reticulate Snake (Uetz 2019) or Iridescent Snake (Whitaker and Captain 2008; Das and Das 2017), is a small semi-fossorial snake, inhabiting evergreen forests at an elevation up to 1,040 m asl. (Whittaker and Captain 2008;Das 2012). The known distribution range of the species includes parts of Bangladesh, China, Myanmar and India (the states of Mizoram, Assam, Arunachal Pradesh, and Manipur (Das 2008;Purkayastha 2013;Das and Das 2017;Vogel et al. 2017). Little has been published about its natural history and reproduction, except that this oviparous snake lays clutches up to 6 eggs (Whitaker and Captain 2008). Little is also known about the ecology, life history, and genetics of B. reticulata. Given the paucity of information about this species, its conservation status is currently listed as 'Data Deficient' in the IUCN Red List (Wogan and Vogel 2012). Elucidating the reproductive biology of a species is particularly important both for understanding its general life history patterns and also for informing conservation management actions (Siegel and Ford 1987;Holycross and Goldberg 2001). Here, we contribute new details on the reproductive biology of wildcaught B. reticulata, as well as new distribution localities for the species from Mizoram State (NE India). homestead flower garden at Venghlui, Saitual town, Saitual District, Mizoram (23.674578N, 92.962397E;1,130 m asl.;8 Jul. 2019). The animal was subsequently transported to the facilities of the Developmental Biology and Herpetology Laboratory, at the Dept. of Zoology, Mizoram Univ. Aizawl. Environmental conditions were monitored with the help of a HTC-1 LCD Digital Hygrometer Thermometer with a temperature accuracy of ± 1 °C, and a humidity accuracy of ± 5%. Eggs laid on 10 Jul. 2019, at ca. 8:30 hrs were weighed using an electronic balance to the nearest 0.001 g (Gem20 High Precision Digital Milligram Scale, Smart Weigh). We measured the snout-vent length (SVL) and tail length (TL) to the nearest 1 mm using a flexible measuring tape. Scales were counted following the methodology of Dowling (1951) used for taxonomic confirmations. Both laid and oviductal egg sizes were measured using dial callipers (Mitutoyo, to the nearest 0.1 mm. The animal was provisioned with both earthworms and insects but refused to feed, although it was observed drinking some water. For the incubation of eggs, an approx. 
20 mm thick layer of vermiculite (mixed with water in a 2:1 ratio) was provided for bedding in a 150 mm × 150 mm × 80 mm polypropylene container covered with a perforated lid. Temperatures in the incubation box were maintained between 26.5 °C-28 °C with humidity between 85%-90%. Fresh leaves provided were occasionally sprayed with water for maintaining a proper humidity level. Photographs were taken with a digital camera (Canon PowerShot SX430 IS). Distributional records. To prepare a distributional map of B. reticulata we collected geographical coordinates of specimen collection sites from Mizoram (India) using a portable GPS unit (Garmin Montana 650-GPS navigator). Field survey methodologies largely followed Doan (2003) and Manley et al. (2004) and specimens were collected using a Visual Encounter Survey (VES) approach by checking ground, bushes, leaf litter, underneath tree-bark, logs, rocks, around water bodies (e.g. streams, canals and tanks), as well as crevices of rocks and boulders, and also by digging into soil. Individuals were collected with the help of tongs or by hand. Collected specimens were kept in snake bags and later catalogued and deposited at the Departmental Museum of Zoology, Mizoram University (MZMU). (Fig. 1). Results Reproductive observations. On 10 Jul. 2019, the captive B. reticulata (MZMU 1424) began the oviposition of the first egg at ca. 8:30 hrs (room temperature 24.1 °C-25.5 °C; humidity: 84-89%); second egg at ca. 10:00 hrs; third egg at 10:58 hrs and completed ca. 13 minutes later ( Fig. 2A-C); and then the fourth egg laid at 12:55. Oviposition resumed the next day with the fifth and sixth eggs laid at ca. 8:00 hrs-11:00 hrs, and finally the seventh egg laid at ca. 17:50 hrs (Fig. 2D). Eggs were whitish, soft, with a leathery texture and oblong shape. All of the seven eggs appeared fully viable at the initial stages and were incubated for several days at 26.5 °C-28 °C temperature and 85%-90%, humidity. Unfortunately eggs never hatched, either because of fungal infection or due to suboptimal temperature and humidity conditions. Thus, neither precise information on incubation temperature and humidity requirements, nor duration of incubation, time of hatching or neonate biometric data are available at this time. On 11 Jul. 2019, the female was anaesthetized using 250mg/kg of 0.7% sodium bicarbonate buffered MS-222 (Tricaine Methanesulfonate) solution by intracoelomic injection, and then euthanized using a second intracoelomic injection of 0.1ml unbuffered 50% (v/v) MS-222 solution (see Conroy et al. 2009). The animal was dissected prior to preservation. Notably, we observed a second clutch of eggs (N = 8) in the oviduct (Fig. 3). The specimen was then fixed in 10% formalin, preserved in 70% ethanol, and catalogued as a voucher specimen in the Departmental Museum of Zoology, Mizoram University (MZMU 1424). A second specimen (MZMU 941) was also dissected and contained a seven egg clutch (Table 1 for the detailed egg measurements). Discussion The present work provides new distributional records for Blythia reticulata from the NE Indian state of Mizoram (Saitual, Khawzawl, Lungdai, Tanhril, and Tlungvel) in addition to the previously recorded sites i.e. Hmuifang, Sawleng, Sihphir, Durtlang, Sihhmui, and Aizawl (Vogel et al. 2017), and also expands the known elevational range of the species from the 949-1,040 m asl. zone up to 1280 m asl. (see Whitaker and Captain 2008;Vogel et al. 2017). 
The specimens in this study were either collected from the side of the road, or were excavated from the ground. All individuals were collected from the microhabitats in the proximity of surface water, including streams, ponds and puddles. Because the species was encountered either in the morning or in the evening, we suggest it has likely a crepuscular pattern of natural activity. The climate pattern of Mizoram is moist tropical to moist sub-tropical with temperatures ranging between 18 °C-29 °C in summer, whereas in winter temperatures vary between 11 °C-24 °C; average annual rainfall in the region is about 2,540 mm (Geological Survey of India, 2011). The specimens were encountered between the onset and the end of the monsoon season (late February to October); gravid specimens were encountered during the wettest part of the monsoon season (May to July). Consequently, we argue that reproductive and breeding activities in B. reticulata coincide with the rainy season in this region. Recent herpetological insights signified that the reproductive cycles of almost all snake species can be considered to some extent seasonal, with a pronounced absence of truly continuous patterns of reproduction in snakes (Almeida-Santos et al. 2006;Mathies 2011). The simultaneous presence of a second clutch of eggs at such a short time after oviposition, suggests that B. reticulata is also capable of multiple matings and/or multiple clutch- es during a single reproductive period. This phenomenon appears to be rare in snakes, with only a handful of observations in some Brazilian snake species in the family Xenodontinae (Pinto and Fernandes 2004) and especially, Philodryas nattereri (Mesquita et al. 2011), and Philodryas olfersii (Mesquita et al. 2013). The present study represents the first-ever documentation of oviposition in B. reticulata, with a maximum fecundity of 8 eggs vs. the 6 eggs previously reported by (Whitaker and Captain 2008). Although the female laid a total of 7 eggs, we considered maximum fecundity based on the number of eggs found in the oviduct by following Mesquita et al. (2013). According to Mathies (2011), data from specimens with eggs in the oviduct is the best metric for analysing reproductive cycle. Thus, the present publication serves as a novel contribution for this species, which because of its rarity and the secretive lifestyle, remains poorly known (Bassi et al. 2019). Further reproductive studies are needed to improve understanding and delimit the reproductive cycle of the snake species B. reticulata.
Survival time following resection of intracranial metastases from NSCLC-development and validation of a novel nomogram Background Brain metastases (BM) from non-small cell lung cancer (NSCLC) are the most frequent intracranial tumors. To identify patients who might benefit from intracranial surgery, we compared the six existing prognostic indexes(PIs) and built a nomogram to predict the survival for NSCLC with BM before they intended to receive total intracranial resection in China. Methods First, clinical data of NSCLC presenting with BM were retrospectively reviewed. All of the patients had received total intracranial resection and were randomly distributed to developing cohort and validation cohort by 2:1. Second, we stratified the cohort using a recursive partitioning analysis(RPA), a score index for radiosurgery (SIR), a basic score for BM (BS-BM), a Golden Grading System (GGS), a disease-specific graded prognostic assessment (DS-GPA) and by NSCLC-RADES. The predictive power of the six PIs was assessed using the Kaplan–Meier method and the log-rank test. Third, univariate and multivariate analysis were explored, and the nomogram predicting survival of BMs from NSCLC was constructed using R 3.2.3 software. The concordance index (C-index) was calculated to evaluate the discriminatory power of the nomogram in the developing cohort and validation cohort. Results BS-BM could better predict survival of patients before intracranial surgery compared with other PIs. In the final multivariate analysis, KPS at diagnosis of BM, metachronous or synchronous BM and the histology of lung cancer appeared to be the independent prognostic predictors for survival. The C-index in the developing cohort and validation cohort were 0.75 and 0.71 respectively, which was better than the C-index of the other six PIs. Conclusions The new nomogram is a promising tool in further choosing the candidates for intracranial surgery among NSCLC with BM and in helping physicians tailor suitable treatment options before operation in clinical practice. Background Brain metastases (BM) are the most frequent intracranial tumors, resulting in significant morbidity and mortality. Among these patients, non-small cell lung cancer (NSCLC) ranks as a leading cause. As a result of prolonged overall survival(OS) in NSCLC patients and better detection of subclinical lesions, incidences of BM are increasing [1]. The risk of developing BM in advanced NSCLC (stage III-IV) is approximately 30%-50%. Even in resected early stage patients (stage I-II), the risk of developing BM at 5 years is 10% [2]. Until recently the median survival time (MST) for patients with BM was still not good [3]. BM is a highly heterogeneous disease, and prognosis and treatment options should be determined depending on the patient's performance status, the number, size and location of BM, the pathologic type, and the control of the primary tumor and extracranial disease. Some candidates decided to receive surgery if intracranial lesions could be totally resected. In clinical practice, only a portion of those candidates could benefit from the intensive treatment. There have been few studies on how to further identify those candidates who might benefit from surgery, and the individuals should avoid overtreatment before they decided to receive intracranial surgery. Many prognostic indexes (PIs) for predicting the prognosis of BM have been developed based on retrospective studies [4]. 
In 1997, the Radiation Therapy Oncology Group established the first prognostic score called the recursive partitioning analysis (RPA) [5]. Then, the Score Index for Radiosurgery (SIR) [6], the basic score for BM (BSBM) [7], the Golden Grading System (GGS) [8], the disease-specific graded prognostic assessment (DS-GPA) [9] and the NSCLC-RADES [10] emerged (the details of the six PIs are shown in Table 1). The published PIs have been used to help physicians tailor suitable treatment options based on the prognosis prediction. However, they were mostly designed for BM patients who were treated with radiotherapy. Whether patients who received intracranial surgery as first line treatment can be stratified by the PIs is not known. A nomogram is a graphical prediction model widely used to predict cancer prognosis. It combines several prognostic factors on the basis of the Cox proportional hazards model and reduces statistical predictive models into a single numerical estimate of the probability of an event, such as death or recurrence [11]. As a result, an individual prediction of a specific outcome can be provided for each patient. In this study, we analyzed a cohort of patients retrospectively, compared the prediction ability of six PIs, and developed a new nomogram to identify the NSCLC patients presenting with BM who might benefit from intracranial surgery more precisely and help physicians tailor more suitable treatment options. Patients We collected the data of 335 NSCLC patients presenting with BM between 01/2003 and 12/2009. All of the patients were diagnosed and treated at Huashan Hospital, Fudan University, Shanghai, China. They were randomly distributed to developing cohort and validation cohort by 2:1. The inclusion criteria was histologically confirmed BM from NSCLC, and BM lesions not exceeding three to ensure that they received total intracranial resection. Exclusion criteria were patients with leptomeningeal metastases (meningeal enhancement on MRI or tumor cells found in cerebral spinal fluid), and either histological or clinical evidence of other malignant tumors except NSCLC. Data collection and follow-up The data from the medical records included: age, gender, the KPS at the time of BM diagnosis, the time of the primary and metastatic tumor diagnosis, the pathology type of the tumor, the presence of extracranial metastases, the control of primary tumor, and brain involvement characteristics. Synchronous BM was defined as lesions in the brain that were detected within three months of NSCLC diagnosis. Metachronous BM was defined as there have been no evidence of BM within three months of the NSCLC diagnosis. The follow-up was by phone-call or letter. All patients were followed until death or up to May 1, 2015. The information included: 1) follow-up treatments; 2) survival data; and 3) the date of death. Statistical analysis The primary end-point was OS, defined as the interval from the date of BM diagnosis to the date of death or failure of follow-up. 
Patients alive without Table 1 Six prognostic indexes for patients with non-small cell lung cancer with brain metastases RPA recursive partitioning analysis, SIR Score Index for Radiosurgery, BS-BM basic score for BM, GGS Golden Grading System, DS-GPA disease-specific graded prognostic assessment, CPT control of primary tumor, ECM extracranial metastases, BM brain metastases, Y yes, N no, M male, F female, KPS Karnofsky performance status, CR complete response, PR partial response, PD progressive disease events were censored at the end of the follow-up. The diagnosis of BM needed to be confirmed by at least two experienced pathologists. Two hundred and twenty-three patients were distributed to the developing cohort randomly and the other one hundred and twelve patients were distributed to the validation cohort. The developing cohort was stratified by RPA, SIR, BS-BM, GGS, DS-GPA, and NSCLC-RADES. The OS curves were drawn by subgroups of the six PIs. OS was estimated by the Kaplan-Meier method, and the MST of each subgroup was compared among subgroups using the log-rank test. Harrell's concordance Index (C-index) was used to assess the discriminating ability of the six PIs. The value of C-index ranges between 0.5 and 1. 0.5 represents completely inconsistent with the practical situation, indicating that the nomogram has no predictive effect; 1 means the predictive result of the nomogram is exactly the same with the practical situation. Prognostic factors found to be p < 0.1 on univariate analysis were further explored in a multivariate analysis used with the Cox proportional hazards model. The significant variables (p < 0.05 in the multivariable Cox model) were seen as prognostic factors in the final nomogram. The new nomogram predicting the prognosis of NSCLC presenting with BM was also measured by C-index in the developing cohort and validation cohort. we used the bootstrap-corrected C-index to measure discriminative ability of the nomogram. The statistical analyses were calculated with SPSS Statistics23.0 (IBM, SPSS Inc. Chicago, IL, US) and R 3.2.3 software (https://www.r-project.org/). The developing cohort patients' characteristics In the developing cohort, a total of 223 patients were qualified for the retrospective study. By May 1, 2015, all enrolled patients arrived at the end point, apart from the 25 individuals lost during the follow-ups and the 7 patients still alive. One hundred and sixty patients received only a gross total resection, and the others were treated in combination with whole brain radiation therapy (WBRT) or stereotactic radiation (SRS). The differences of MST between the only operative group and the postoperative radiation therapy group showed no statistical significance (p = 0.260). Most patients were male and the median age was 58 years (range 22-85 years). In the metachronous entity, the intervals from NSCLC diagnosis to the confirmation of BM ranged from 3 to 68 months. Detailed characteristics of patients are listed in Table 2. Survival analysis and PIs comparison The MST of the developing cohort was 15 months (95% confidence interval, 13.01-16.99 months), and survival rates at 6-months, 1-, 2-, 3-and 5-years were 80.2%, 61.0%, 30.0%, 11.7% and 4.5% respectively. Population repartition and the MST in each subgroup are listed in Table 3. Survival curves were demonstrated in Fig. 1. 
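To make the survival-analysis workflow described in the statistical-analysis section concrete (Kaplan–Meier estimation, log-rank comparison of subgroups, a multivariable Cox model, and Harrell's C-index), a minimal sketch is given below. The authors used SPSS and R; this is a hypothetical Python equivalent using the lifelines package, and the input file and column names (os_months, death, kps, synchronous_bm, adenocarcinoma) are illustrative assumptions, not the study data set.

```python
# Hypothetical sketch of the analysis pipeline; column names and the CSV file
# are illustrative only. Requires: pip install lifelines pandas
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test
from lifelines.utils import concordance_index

df = pd.read_csv("developing_cohort.csv")     # assumed: one row per patient

# Kaplan-Meier estimate and median survival time for the whole cohort
km = KaplanMeierFitter()
km.fit(df["os_months"], event_observed=df["death"])
print("MST (months):", km.median_survival_time_)

# Log-rank test between two prognostic classes (e.g. KPS >= 80 vs. KPS < 80)
good, poor = df[df["kps"] >= 80], df[df["kps"] < 80]
lr = logrank_test(good["os_months"], poor["os_months"],
                  event_observed_A=good["death"], event_observed_B=poor["death"])
print("log-rank p-value:", lr.p_value)

# Multivariable Cox model on covariates retained after univariate screening
cph = CoxPHFitter()
cph.fit(df[["os_months", "death", "kps", "synchronous_bm", "adenocarcinoma"]],
        duration_col="os_months", event_col="death")

# Harrell's C-index: higher predicted risk should correspond to shorter survival,
# hence the negated partial hazard is used as the concordance score.
c_index = concordance_index(df["os_months"],
                            -cph.predict_partial_hazard(df),
                            df["death"])
print("C-index:", round(c_index, 2))
```

The C-index computed this way corresponds to the discriminative ability reported for the developing and validation cohorts (0.75 and 0.71); negating the partial hazard ensures that higher predicted risk is scored as shorter expected survival.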
All classes were represented by at least 10% of the patients, with the exception of class Univariate and multivariate analysis In the univariate analysis of the possible prognostic factors, we considered the nine variables listed in Table 2, and the following five factors, female (p = 0.013), KPS ≥80 (p < 0.001), metachronous (p = 0.044), absence of ECM (p = 0.009), and histology of lung adenocarcinoma (p < 0.001) were associated with prolonged OS. The final multivariate analysis is shown in Table 4. Independent prognostic predictors for better survival were KPS ≥80 at diagnosis of BM, metachronous BM and the histology of lung adenocarcinoma. Establishment and validation of the nomogram Following the multivariable Cox model, the three independent variables, KPS at the diagnosis of BM, metachronous/synchronous BM, and the pathologic type of NSCLC were selected in the final nomogram to predict the survival time of NSCLC presenting with BM before they decided to receive complete surgical resection. The nomogram was shown in Fig. 2. One hundred twelve patients were included in the validation cohort, whose characteristics were similar to the counterpart in the developing cohort. They were also followed until May 1, 2015. All enrolled patients arrived at the end point, apart from the 5 individuals lost during the follow-ups and the 2 patients still alive. The median OS of the validating cohort was 15 months (95% confidence interval, 9.70-16.30 months), and the survival rates at 6-months,1-, 2-, 3-and 5-years were 77.7%, 51.0%, 27.4%, 13.2% and 5.7% respectively. Most patients were male and the median age was 58 years (ranging 38-80 years). Table 2 shows the detailed characteristics of the validation patients. The C-index for the developing cohort and the validation cohort were 0.75 and 0.71 respectively. Discussion Brain metastases are becoming an increasingly common challenge for the clinician. The role of complete surgical resection in brain metastatic patients is still controversial [12]. Traditionally, the treatment for BM generally relied on radiotherapy and chemotherapy. Even if intracranial lesions could be totally resected, the survival time would not be extended [13]. Meanwhile, the operations themselves might result in higher mortality rates. However, with the advances in surgical techniques, patients with BM might benefit from intracranial operations, as confirmed by some studies. Since the 1980s, more studies have emphasized the importance of surgery in treatment for BM. They compared intracranial operations with other treatments, like WBRT or SRS [14]. Although the results were not always consistent, it could be concluded that some patients benefit from intracranial operation [15][16][17]. Moreover, surgery allows a relief of intracranial hypertension, seizures and focal neurological deficits, and is the most useful way to get a clear pathologic diagnosis. Surgery has become an important total resection [19]. We enrolled 335 eligible patients in this study. Completely surgical resection of intracranial lesions was used as the first line treatment option. We eliminated the possibilities that different treatments may affect the survival outcome, and explored the relationship between baseline situations and the prognosis. RPA [5] is commonly used in the prognosis prediction. It was developed in patients who were treated with WBRT. Agboola [20], once applied in a cohort of surgical resected BM patients, showed the predictive value of RPA. 
However, the 1200 enrolled patients came from three different trials, and the criteria and the dose of WBRT were not same. SIR [6] resulted in BM-related variables: the numbers and sizes of BM. Some studies found that patients benefitted from surgical treatment for BM. BSBM [7] has been advocated as a convenient, easy to use PI, which was proposed on the basis of RPA and SIR. It was further evaluated in patients receiving WBRT with surgery and WBRT with or without SRS [21]. GGS [8] was constructed specifically for NSCLC patients. However, it failed to distinguish a good prognosis from a poor prognosis in our study. DS-GPA [9] was proposed in a large sample multi-center retrospective study. With the enrolled patients spanning from 1985 to 2007, it could not eliminate the influence of treatments, and different criteria, treatment measures, and selection bias were unavoidable. The newly proposed NSCLC-RADES [10] needs to be further validated in more studies. With the six PIs targeting different populations, we could not demonstrate that one prognostic classification was superior to the rest [22]. In our research, SIR, BSBM, NSCLC-RADES, especially BSBM better predicted the survival of BM from NSCLC who were treated with intracranial surgery in China. However, some patients were still misclassified to "good prognosis" and "poor prognosis" in BSBM. So the existing PIs are still not the ideal prognostic tool to help identify those patients who might benefit from intensive treatment like surgery, and the individuals should avoid overtreatments. The PIs need to be further optimized. In our univariate and multivariate analyses, independent prognostic predictors for better survival were KPS at diagnosis of BM, metachronous BM and the histology of lung adenocarcinoma. KPS at the BM diagnosis, which was also evaluated in the six studied PIs, was a significant prognostic factor in the study. Neurological symptoms, like headaches, motor impairment, dysphasia, seizures, and even coma, are always induced by intracranial lesions. Some discomfort, like coughing, sputum, and chest congestion are related to systematic cancer. All of these symptoms influence the KPS score and affect the prognosis. As a result, use of the KPS has been criticized because of its subjective nature, variability in scoring between observers, and the tendency for the score to be influenced by acute but self-limited events [23]. When we evaluate the variable, we should notice that and try to make KPS reliable. . The pathological types of NSCLC were found to be a significant factor for prognosis, which was not involved in the six PIs. Lung adenocarcinoma (ADC) and squamous cell carcinoma (SCC) accounted for 80% of NSCLC. Our research showed significantly better OS for ADC. This result is in accordance with many other published studies [24]. There may be some reasons behind this phenomenon. First, the natural biological behaviors are not the same. The nextgeneration sequencing of the SCC subgroup identified entirely different genes [25]. Second, due to higher incidences of mutant genes (EGFR, ALK, ROS1, etc.) in ADC [26], the use of new targeted agents will enhance the response rates and prolong OS. We did not investigate the other rare types of NSCLC. In 2012, our institution conducted a study to compare synchronous BM with metachronous BM. We found that the clinical characteristics, diagnoses, and treatment methods for synchronous BM and metachronous BM were different [24]. In our cohort, 73.1% of the patients were synchronous BM. 
As analyzed above, the MST in metachronous BM was longer than in the synchronous BM. The possible reasons for this are as follows: 1) control of primary tumor; 2) presence of ECM; 3) sizes of BMs; and 4) even dissimilitude driver genes of the two subgroups. Further research is needed to better understand these findings. A nomogram is widely used for cancer prognosis, primarily because of its ability to integrate different variables on the basis of multivariate analysis to more accurately predict the survival of individuals. Kaizu [27] et al. established a nomogram to evaluate the risk of bone-metastasis in postoperative prostate cancer patients. Bevilacqua [28] developed a nomogram to predict the sentinel lymph node metastasis in early breast cancer and the survival of patients with breast cancer. Graesslin [29] even set up a nomogram to predict the incidence of brain metastasis in breast cancer. However, a nomogram for predicting the survival time of NSCLC patients with brain metastasis before they decided to receive complete surgical resection has not been previously investigated. Our new nomogram is a predictive tool, which creates a simple graphical representation of a statistical predictive model to predict the survival time of individual NSCLC patient with brain metastasis for intracranial surgery. Through quantifying the risk of death with a variety of factors, the nomogram can help clinicians tailor treatment modalities and avoid good prognostic patients from giving up effective treatment and prevent the poor prognostic patients from receiving overtreatment. The C-index of the nomogram showed its superior ability to predict prognosis. In conclusion, before clinicians and NSCLC patients consider to have an intracranial resection surgery, our nomogram could be used as an effective tool to predict the survival of the patients and optimize treatment modalities in clinical practice. Despite some findings of the present study, there are still several limitations. First, with the advent of targeted therapy, mutation testing has been standard practice with a NSCLC diagnosis. However, the gene expression patterns of our enrolled patients were unknown. As a result, we could not account for the molecular subtype. Although the efficacy of surgery may not be influenced by this factor, the patient's gene status should be as clear as possible in further studies. Second, as a single institution retrospective study, treatment protocols, patient selection, and follow-ups can bias the results. For all of the patients in our cohort who received intracranial surgery, the factors of KPS, age, ECM, and number of BMs were better than the average. Third, future multicenter studies are needed to confirm our developed nomogram. Conclusions In conclusion, we found that BS-BM could better predict survival of the BM patients after comparing the six existing PIs. In the final multivariate analysis, KPS ≥80 at diagnosis of BM, metachronous BM and the histology of lung adenocarcinoma appeared to be the independent prognostic predictors for better survival. Additionally, the new nomogram we built in the study is a predictive tool in further choosing the candidates for intracranial surgery among eligible NSCLC with BM. As a result, it helps to optimize NSCLC with BM patients' treatment modalities in clinical practice.
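To round off the discussion of how a nomogram quantifies risk, the sketch below shows the usual way a fitted Cox model is turned into a 0–100 points scale: each covariate's contribution to the linear predictor is scaled relative to the largest single effect, and the point total is then read off against calibrated survival probabilities. The coefficients used here are hypothetical placeholders and do not reproduce the model published in this study.

```python
# Illustrative nomogram-style point assignment; the coefficients below are
# hypothetical placeholders and do NOT reproduce the published model.
def nomogram_points(coefs, levels, patient):
    """Scale each covariate's Cox contribution to 0-100 points.

    coefs:   dict covariate -> Cox coefficient (log hazard ratio)
    levels:  dict covariate -> (min, max) of the covariate values
    patient: dict covariate -> observed value for one patient
    """
    # the largest possible single-covariate effect defines 100 points
    max_effect = max(abs(b) * (hi - lo) for b, (lo, hi) in
                     ((coefs[k], levels[k]) for k in coefs))
    points = {}
    for k, b in coefs.items():
        lo, hi = levels[k]
        ref = lo if b > 0 else hi          # reference (lowest-risk) level gets 0 points
        points[k] = abs(b * (patient[k] - ref)) / max_effect * 100
    return points, sum(points.values())

coefs = {"kps_below_80": 0.9, "synchronous_bm": 0.5, "non_adenocarcinoma": 0.7}
levels = {k: (0, 1) for k in coefs}        # all three treated as binary indicators
pts, total = nomogram_points(coefs, levels,
                             {"kps_below_80": 1, "synchronous_bm": 0,
                              "non_adenocarcinoma": 1})
print(pts, "total points:", round(total, 1))
```

In practice the total points would then be mapped to predicted survival probabilities (as in Fig. 2 of the paper); the sketch only illustrates the point-assignment step.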
A case of hypereosinophilic syndrome presenting with intractable gastric ulcers We report a rare case of hypereosinophilic syndrome (HES) presenting with intractable gastric ulcers. A 71-year-old man was admitted with epigastric pain. Initial endoscopic findings revealed multiple, active gastric ulcers in the gastric antrum. He underwent Helicobacter pylori (H pylori) eradication therapy followed by proton pump inhibitor (PPI) therapy. However, follow-up endoscopy at 4, 6, 10 and 14 mo revealed persistent multiple gastric ulcers without significant improvement. The proportion of his eosinophil count increased to 43% (total count: 7903/mm³). Abdominal-pelvic and chest computed tomography scans showed multiple small nodules in the liver and both lungs. The endoscopic biopsy specimen taken from the gastric antrum revealed prominent eosinophilic infiltration, and the liver biopsy specimen also showed eosinophilic infiltration in the portal tract and sinusoid. A bone marrow biopsy disclosed eosinophilic hyperplasia as well as increased cellularity of 70%. The patient was finally diagnosed with HES involving the stomach, liver, lung, and bone marrow. When gastric ulcers do not improve despite H pylori eradication and prolonged PPI therapy, infiltrative gastric disorders such as HES should be considered. INTRODUCTION Hypereosinophilic syndrome (HES) is a rare disorder characterized by the overproduction of eosinophils in the bone marrow with persistent peripheral eosinophilia, tissue infiltration, and end-organ damage by eosinophil infiltration and the secretion of mediators [1]. The diagnosis of HES is based on marked eosinophilia exceeding 1500/mm³, a chronic course longer than 6 consecutive months, exclusion of parasitic infestations, allergic diseases and other etiologies for eosinophilia, and signs and symptoms of eosinophil-mediated tissue injury [1,2]. While HES can involve multiple organ systems, including bone marrow, heart, lung, liver, lymph node, muscle, and nerve tissue [1], gastrointestinal tract involvement is rare [1-3]. To date, only a handful of cases of HES presenting with gastritis or enteritis have been reported worldwide [4-9], and HES presenting with intractable gastric ulcers has not been reported.
We report our case of a 71-year-old male patient with HES presenting with multiple intractable gastric ulcers with a review of the literature. CASE REPORT A 71-year-old man presented with epigastric pain. He underwent cholecystectomy 20 years previously due to acute cholecystitis with gallstones, and has intermittently taken nonsteroidal anti-inflammatory drugs (NSAID) and corticosteroids on account of degenerative arthritis for 15 years. Other symptoms, as well as his past medical and family history, were otherwise unremarkable. The initial physical examination showed a flat, soft abdomen with normoactive bowel sounds with no sign of direct or rebound tenderness and no hepatosplenomegaly. Thoracic auscultation revealed no remarkable results. Routine complete blood count reported a leukocyte count of 7790/mm 3 with 5.3% eosinophils, hemoglobin level of 12.1 g/dL, and a platelet count of 19 8000/μL. There were no noteworthy findings on simple chest and abdominal radiography. No specific cardiac abnormalities on standard 12-lead electrocardiogram (ECG) or Doppler echocardiogram were detected. ECG revealed normal sinus rhythm and the echocardiogram showed normal global left ventricular systolic function (estimated ejection fraction 70%). Esophagogastroduodenoscopy (EGD) findings revealed several active gastric ulcers in the antrum of the stomach ( Figure 1A). Biopsy findings showed an ulcer with Helicobacter pylori (H pylori). He underwent H pylori eradication therapy (lansoprazole 30 mg twice a day, clarithromycin 500 mg twice a day and amoxicillin 1000 mg twice a day for 7 d) followed by a proton pump inhibitor (PPI) and gastroprotective agent therapy for 2 mo. Follow-up EGD and biopsy performed after 2 mo showed that H pylori was eradicated, whereas multiple gastric ulcers were still noticeable with only slight improvement ( Figure 1B). Follow-up endoscopy at 4, 6, and 10 mo showed persistent multiple gastric ulcers in the antrum despite continuous PPI treatment. Therefore, he was readmitted after 14 mo for etiological evaluation of the intractable gastric ulcers. In the EGD findings, multiple gastric ulcers were still found in the antrum of stomach ( Figure 1C). The endoscopic biopsy specimen revealed prominent eosinophilic infiltrations of > 20 cells/HPF ( Figure 1D). A retrospective review of the previous endoscopic biopsy specimens disclosed eosinophilic infiltration at the antrum which was overlooked at the initial evaluation. The chest computed tomography (CT) scan showed very tiny nodules in both lungs and approximately 15-mmsized nodular lesions in the posterior basal segment of the right lower lobe (Figure 2A and B). In the abdominalpelvic CT scan, multiple, small, and ill-defined low density lesions were found in both lobes of the liver ( Figure 3A and B). The liver biopsy showed eosinophilic infiltration in the portal tract and sinusoid ( Figure 3C and D). The peripheral blood smear report showed that there were no immature or dysplastic cells or morphologically abnormal eosinophils. The bone marrow aspiration smear showed an M:E ratio of 3.8:1 and an elevated eosinophil count of 22.2% ( Figure 4A). Bone marrow biopsy findings also indicated eosinophilic hyperplasia, with increased cellularity of 70% and normal distribution of erythroid, myeloid, and megakaryocytic cell lineages ( Figure 4B). The Fip1-like 1-platelet-derived growth factor receptor A Figure 1 Esophagogastroduodenoscopy (EGD) and biopsy findings. 
A: Initial EGD findings revealed several active gastric ulcers in the antrum of the stomach; B: In the EGD findings after 2 mo, multiple gastric ulcers were still noticeable with only slight improvement; C: In the EGD findings after 14 mo, multiple gastric ulcers were still found in the antrum; D: Biopsy findings revealed prominent eosinophilic infiltrations > 20 cells/HPF (arrows) (HE stain, × 400). A B C D fusion gene (FIP1L1-PDGFRA) rearrangement was not detected and there were no cytogenetic abnormalities. This patient was finally diagnosed with HES involving the stomach, liver, lung, and bone marrow. He was treated with oral prednisolone 60 mg/d and PPI. After two weeks of therapy, clinical manifestations rapidly improved and peripheral blood eosinophilia had subsided. DISCUSSION HES is a rare disease characterized by unexplained persistent eosinophilia associated with multiple organ dysfunction [1,2] . In 1968, Hardy and Anderson [10] reported three patients with hypereosinophilia, hepatosplenomegaly, and cardiopulmonary symptoms, and first suggested that they had a nonmalignant disorder that belonged within the spectrum of disease termed hypereosinophilic syndrome. In HES, the degree of end-organ damage is heterogeneous, and there is often no correlation between the level or duration of eosinophilia and the severity of organ damage [1,3] . Also, the clinical manifestations are variable from one patient to another, depending on targetorgan infiltration by eosinophils [11] . Virtually any tissue or organ can be affected, but cardiac involvement is the major cause of the morbidity and mortality associated with HES [1,9,12] . We did not find cardiac involvement in our patient. Since Chusid et al [13] reported the analysis of fourteen cases of HES in 1975, some cases of HES involving the gastrointestinal (GI) tract have been reported. Ichikawa et al [4] reported a case of probable HES with a gastric lesion, López Navidad et al [5] reported a case of HES presenting as a form of epithelioid leiomyosarcoma of gastric origin, and Levesque et al [6] reported two cases of HES with predominant digestive manifestations. In Korea, Jung et al [8] reported a case of HES presenting as colitis and You et al [9] reported a case of HES presenting with various GI symptoms. However, HES presenting with intractable gastric ulcers has not been reported. Our patient suffered from HES presenting with multiple intractable gastric ulcers as well as liver, lung, and bone marrow involvement. The exact mechanism of A B Figure 2 A chest CT scan showed very tiny nodules in both lungs (A) and approximately 15-mm-sized nodular lesions in the posterior basal segment of right lower lobe (B) (arrows). Park eosinophil-related tissue damage, including gastric ulcer, is not known [3] , but the accumulation of eosinophils can have direct cytotoxicity through the local release of toxic substances, including cationic proteins, enzymes, reactive oxygen species, pro-inflammatory cytokines, and arachidonic acid-derived factors [14] . The differential diagnosis of HES includes the disparate diseases associated with eosinophilia. Peripheral blood eosinophilia can be associated with allergic disorders, parasite infections, malignancies, and organ diseases, including eosinophilic gastroenteritis (EG) or eosinophilic pneumonitis due to eosinophilic infiltration [15] . In our patients, the bronchodilator response was negative and there was no symptom or sign of allergic disease. 
Even if allergic disease is present, the severe peripheral eosinophilia noted in our patient is unusual [15] . In addition, FIP1L1-PDGFRA gene rearrangement was not detected in bone marrow and there were no cytogenetic abnormalities. Therefore, we could rule out primary clonal eosinophilia such as eosinophilic leukemia. HES may be confused with EG. The diagnosis of EG is based on the following three criteria: (1) the presence of gastrointestinal symptoms, (2) biopsies showing eosinophilic infiltration of one or more areas of the GI tract, or characteristic radiologic findings with peripheral eosinophilia, and (3) no evidence of parasitic or extraintestinal disease [16] . Because EG is also of unknown etiology, the distinction from HES must be made on clinical and pathologic bases [17,18] . Eosinophilic gastroenteritis characteristically does not extend beyond the target organ [1,18] . Hence, EG lacks the multiplicity of organ involvement often found in HES and does not have the predilection to develop secondary eosinophilmediated cardiac damage [1,18] . Thus, EG can usually be distinguished from HES, although individual patients may on occasion present with overlapping features that confound classification [1,18] . In our patient, multiple organ involvement was demonstrated, and there was no other possible cause of severe eosinophilia. In patients with eosinophilia who lack evidence of organ involvement, specific therapy is not needed [1] . Such patients can have prolonged courses without the need for therapeutic intervention [1] . However, patients with vital organ involvement require treatment [1] . The goals in the management of HES are as follows: (1) reduction of peripheral blood and tissue levels of eosinophils; (2) prevention of end-organ damage; and (3) prevention of thromboembolic events [1][2][3] . Corticosteroids have been used for decades in the treatment of HES and, with the exception of PDGFRA-associated HES, remain the first-line treatment for most patients [17] . Typically highdose prednisone (1 mg/kg per day or 60 mg/d in adults) can be initiated [1,3] . A good response to corticosteroid therapy is associated with a better prognosis [1] . If patients are refractory or intolerant to corticosteroids, alternative therapies must be considered. Cytotoxic agents, including hydroxyurea, can be considered as second-line therapy [1,3] . Immunomodulatory agents including IFN-α, cyclosporine, and alemtuzumab can also be used [17] . In patients with FIP1L1-PDGFRA-positive HES, imatinib mesylate (Gleevec ® ), which selectively inhibits a series of protein tyrosine kinases, is considered first-line therapy [17] . In conclusion, we report a case of HES presenting with intractable gastric ulcers. The final diagnosis in this patient was HES involving the stomach, liver, lung, and bone marrow. Clinicians should bear in mind that gastric ulcers can develop in association with infiltrative disorders including HES. When gastric ulcers do not improve despite H pylori eradication and prolonged PPI therapy, an infiltrative gastric disorder, such as HES, should be considered.
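As a small aside on the diagnostic criterion cited above (sustained eosinophilia exceeding 1500/mm³), the absolute eosinophil count follows directly from the routine blood count. The helper below uses the values reported at the first admission (leukocytes 7790/mm³ with 5.3% eosinophils) and is only an illustrative calculation, not a clinical decision tool.

```python
# Illustrative check of the HES eosinophilia criterion (> 1500 cells/mm^3).
# Not a clinical decision tool.
def absolute_eosinophil_count(wbc_per_mm3, eosinophil_fraction):
    """Absolute eosinophil count = total leukocytes x eosinophil proportion."""
    return wbc_per_mm3 * eosinophil_fraction

initial = absolute_eosinophil_count(7790, 0.053)   # values at first admission
print(f"initial AEC: {initial:.0f}/mm^3 ->",
      "meets" if initial > 1500 else "below", "the 1500/mm^3 HES threshold")
```

At presentation this gives roughly 413/mm³, well below the HES threshold, which is consistent with the marked eosinophilia only becoming apparent on later follow-up.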
On the continuous dependence on the coefficients of evolutionary equations In an abstract Hilbert space setting, we discuss many linear phenomena of mathematical physics. The functional analytic framework presented is used to address continuous dependence of the solution operators $\mathcal{S}(\mathcal{M})$ of certain (linear partial differential) equations on the coefficients $\mathcal{M}$. For this, we introduce a particular class of coefficients $\mathcal{M}$ and study the (nonlinear) mapping $\mathcal{M}\mapsto \mathcal{S}(\mathcal{M})$. We provide criteria that guarantee the continuity of $\mathcal{S}(\cdot)$ under the norm, the strong, and the weak operator topology. We exemplify our findings in non-autonomous electro-magnetic theory, thermodynamics and acoustics. Preface When mathematically modeling physical phenomena, the mathematical models almost always include certain unknown parameters. These parameters are usually to be determined via experiments. The experiments are subject to variability and chance. Having modeled physical phenomena mathematically with appropriate parameters determined from the experiments, one tries to predict the outcome of certain physical processes. For this prediction a computer may be used. Since machine precision is of limited accuracy, the results of these computations are subject to chance and variability due to rounding errors. The aim of this thesis is to provide a mathematical framework for addressing the latter question. Moreover, discussing several notions of "smallness", we provide results of the type: "small variations in the parameters lead to small variations of the solution." The focus is on evolutionary equations, that is, ordinary or partial differential equations involving the time derivative. Introduction In 2009 Rainer Picard [Pic09] developed a breakthrough functional analytic approach, taking advantage of a common structural property of many linear partial differential equations modeling dynamical processes of mathematical physics. It is well-known that many equations can be written in the form of a certain 'balance law' which relates the time derivative of an unknown quantity V and an operator A comprising the spatial derivatives of another unknown U to a given external source term F. Equation (0.1) is usually complemented by a 'material law' relating U and V, which in the present thesis we assume to be a linear operator M, as The resulting equation, where we substituted the material law into (0.1), reads In this introduction, we shall refer to equations of the form (0.2) as evolutionary equations and to M as the material law. Various aspects of equations of the form (0.2) have been studied extensively from the mathematical perspective using a large variety of techniques. The pioneering technique of [Pic09] is based on a new structural observation about equations (0.2) and an associated functional analytic framework in a special Hilbert space setting. It allows one to prove existence and uniqueness of solutions U as well as their continuous dependence on F for a wide class of M and A, which was not possible within the previously existing approaches. More precisely, Picard provided a space-time Hilbert space in [Pic09] such that the operator S(M) := (∂ t M + A) −1 is well-defined and continuous. Moreover, in the particular setting in [Pic09], S(M) is also shown to be causal, that is, S(M)F vanishes up to time t if F does. 
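To fix ideas, the heat equation is a convenient concrete instance of the form (0.2): writing heat conduction as a first-order system in the temperature θ and the heat flux q yields the block operator form below. This display is a standard illustration of the reformulation (compare the references on the framework cited in this introduction) rather than a verbatim excerpt from them.

```latex
% Heat conduction: \partial_t\theta + \operatorname{div} q = f together with
% Fourier's law q = -k\,\operatorname{grad}\theta, written as (\partial_t M_0 + M_1 + A)U = F.
\begin{equation*}
  \left( \partial_t
    \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}
    +
    \begin{pmatrix} 0 & 0 \\ 0 & k^{-1} \end{pmatrix}
    +
    \begin{pmatrix} 0 & \operatorname{div} \\ \operatorname{grad} & 0 \end{pmatrix}
  \right)
  \begin{pmatrix} \theta \\ q \end{pmatrix}
  =
  \begin{pmatrix} f \\ 0 \end{pmatrix},
  \qquad
  \mathcal{M} = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}
              + \partial_t^{-1} \begin{pmatrix} 0 & 0 \\ 0 & k^{-1} \end{pmatrix}.
\end{equation*}
```

The first row is the balance law ∂_t θ + div q = f, the second row encodes Fourier's law q = −k grad θ, and eliminating q recovers the usual parabolic heat equation; the material law indicated on the right splits into ∂_t M_0 + M_1 when inserted into (0.2).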
The Hilbert space setting developed in combination with the possibly provocative philosophy that any "reasonable" linear time-dependent problem in mathematical physics should be an evolutionary equation, that is, an equation as described in [Pic09], leads to important structural insights and well-posedness theorems for problems that emerge in diverse applied fields. The standard evolution equations in mathematical physics, that is, Maxwell's equations, the wave equation, the elasticity equations, and the heat equation fit into the aforementioned class ([PM11, Section 6]). Possibly degenerate cases like the eddy-current approximation for Maxwell's equations or problems with change of type ranging from elliptic to parabolic to hyperbolic on different space-time regions are evolutionary as well, see [PTW15c, Examples 2.7 and Examples 2.43]. The framework has found to be useful for problems in control theory with unbounded control Introduction and observation operators [PTW14c,PTW14b,PTW13]. As it was already pointed out in [Pic09,Section 3.5], the structural perspective developed is particularly useful, when discussing coupled phenomena in the light of so-called 'multi-physics' problems, see also [Pic09,Sections 4.3 and 4.4] for a brief account on thermoelasticity and piezoelectro-magnetism (cf. also [MPTW16]). The coupling of several equations of elastic type leads to well-posedness results in problems with micromorphic media, [PTW15b,Section 3]. A combination of the heat equation and the elastic equations form the description of an elastic material, that changes its elastic properties upon thermal excitation. A class of models of thermoelasticity is discussed in [MPTW14,MPTW15]; several equations of Maxwell type lead to a description of linearized versions of Maxwell-Dirac systems or the equations of gravito-electro-magnetism, [PTW14d]. The framework of evolutionary equations has also natural applications to problems with memory involving integral expressions in M. Typical examples are fractional derivatives in time ( [PTW15a,Wau14b]) or integro-differential equations involving (other) convolution-in-time type operators ( [Tro15b]). Once the unique existence of U in (0.2) is established, it is natural to address questions like energy conservation or (exponential) stability. For the former we refer to [PTW13] and for the latter we refer to [Tro14b,Theorem 3.2], see also [Tro13b,Tro15a] where criteria for the respective properties are given. Of course, in applications, one is interested in actually computing the solution to a given problem numerically. In order to reduce computational costs when treating heterogeneous media there is a need for simplification of the constitutive relations. In particular, if the coefficients describing the material properties are highly oscillatory, computational costs for computing the respective solutions might even exceed the capabilities of modern computers. In the late 1960's, Spagnolo [Spa67,Spa68] mathematically approached this problem. After that many other researchers devoted a great deal of effort into the development of the newly founded mathematical theory of homogenization. We refer to some standard references [CD10,BLP78,Tar09,Zhi83,ZKON79] and the references given there for a more detailed account on homogenization theory. 
With the structural perspective of evolutionary equations as in (0.2) for the particular case of time-shift invariant operators in mind, the author of the present manuscript considered homogenization problems for mathematical physics in [Wau11]. Further development was achieved for ordinary (delay) differential equations in [Wau12,Wau14a]. A contribution for partial differential equations can be found in [Wau16a,Wau14b,Wau13]. In the studies mentioned, homogenization theory is viewed as a certain property of the solution operator S(M) associated to (0. The present thesis addresses the continuous dependence of M → S(M) under various topologies for a particular class of material laws. We focus on three possible topologies the set of material laws may be endowed with: The norm topology as well as the strong and weak operator topologies. We comment on the set of material laws as follows. In non-autonomous problems, the material law M does not satisfy time-shift invariance. In [PTWW13] with minor extensions in [Wau14c] we developed an adapted solution theory for problems as in (0.2), which are not time-shift invariant. The class of admissible M that lead to a solution theory for (0.2) has been introduced in [Wau14a,Wau14c]: The class of evolutionary mappings. This class will be the central object to study in the bulk of this manuscript. Inspired by a similar notion introduced in [PM11, Definition 3.1.14], we develop a theory of evolutionary mappings. To the best of the author knowledge this has not been done before. Chapter 1 is devoted to introducing the time derivative in exponentially weighted L 2 spaces. Moreover, we will present a well-known representation theorem, which characterizes operator-valued functions of the time derivative introduced. This representation theorem will render the relationship of time-shift invariant operators to evolutionary mappings. In Chapter 2 we introduce the central concept of this exposition, the notion of evolutionary mappings. This chapter also contains some preliminary results particularly useful in the forthcoming chapters. We provide a solution theory for linear ordinary differential equations (in infinite-dimensional state spaces) and abstract partial differential equations of the form (0.2) in Chapter 3. The method of proof for the solution theory just mentioned is similar to the one used in [PTWW13] or [Wau14c]. However, as the focus is on (causal) evolutionary mappings, we will be able to derive more properties of the solution operator to the extent that we show that the solution operator S is causal and evolutionary itself. We introduce the topologies on evolutionary mappings in Chapter 4. In a first step towards applications, we provide continuous dependence results of M → S(M) = (∂ t M + A) −1 under the topologies introduced for A = 0. A natural example is the Drude-Born-Fedorov model in the theory of electro-magnetism, which we shall treat as an application of the continuous dependence results developed in Chapter 4. The corresponding results for partial differential equations are provided in Chapter 5. We will apply the respective results to the eddy-current approximation in electro-magnetic theory, where a hyperbolic equation is approximated by a parabolic one. Further, we provide an application to non-autonomous thermodynamics. The last application concerns homogenization theory and relates our findings to a homogenization problem for the equations modeling acoustic wave propagation. 
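Since continuity of M ↦ S(M) is investigated with respect to three different topologies, it may help to recall what convergence of a sequence of bounded operators (M_n)_n to M on a Hilbert space H means in each of them; the summary below is standard functional analysis and is only included for orientation.

```latex
% Convergence of bounded operators M_n -> M on a Hilbert space H:
\begin{align*}
  \text{norm topology:}            &\quad \|M_n - M\|_{L(H)} \to 0,\\
  \text{strong operator topology:} &\quad \|M_n x - M x\|_H \to 0 \quad \text{for every } x \in H,\\
  \text{weak operator topology:}   &\quad \langle y, (M_n - M)x\rangle_H \to 0 \quad \text{for all } x, y \in H.
\end{align*}
```

Norm convergence implies strong convergence, which in turn implies weak convergence, while the converse implications fail in general; this is why the continuity of S(·) has to be discussed separately under each topology, and highly oscillatory coefficient sequences of the kind arising in homogenization typically converge only in the weak operator topology.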
Introduction There is some research for specific equations under particular topologies, which complement the result of the thesis. We refer to the concluding section of Chapter 5 for a more detailed account of available results. Some Remarks on the Notation When writing this manuscript, we made an effort in using the most reasonable notation in the whole of the present thesis. However, we are aware of the fact that some readers (if not all) may find the notation used counterintuitive at some point. We mention some of the idiosyncrasies as follows. The time derivative will be denoted by or, to avoid unnecessarily cluttered notation, just by f again. The operator of multiplication by 1 (−∞,t] , the characteristic function of the real interval (−∞, t], will be written as Furthermore, we write We denote the identity operator just by 1 or id . (identity) If we consider a mapping acting as x → x, we also write ֒→ (canonical embedding) and if this mapping happens to be compact, we may use ֒→֒→ . (compact embedding) The sets L ev,ν (X, Y), and L sev (X, Y) ((standard) evolutionary mappings) of evolutionary and standard evolutionary mappings are introduced in Definition 2.1.3 and 2.1.6, respectively. The set C ev,ν (X) (closable evolutionary mappings) of closable evolutionary mappings is defined in Definition 2.3.10. Unless expressed explicitly otherwise, all vector spaces discussed in this manuscript have the complex numbers as underlying scalar field. Particularly important for the representation theorem to be proved in Section 1.2 is that all scalar products are antilinear in the first and linear in the second factor. The domain, kernel and range of a mapping T are respectively denoted by dom(T), ker(T), and ran(T). (domain, kernel, range) The image of a set D under T is written as A linear operator T with domain dom(T) mapping from a Hilbert space X to a Hilbert space Y is also written as We will occasionally employ the custom of identifying T with its graph if the graphs of T and S coincide or if the graph of S is a superset of T. We write for the restriction of T to a set D and say T = S on D, if T| D = S| D . The sum T + S and composition ST of two operators T and S are defined on the respective natural domains, that is, dom(T) ∩ dom(S) and {x ∈ dom(T); Tx ∈ dom(S)}. We will also use The spectrum and the resolvent set of T are respectively denoted by σ(T) and ρ(T). (spectrum and resolvent) The dual of a Hilbert space X is denoted by The index set of a net (x ι ) ι will generically be a directed set I; similarly, a sequence (x n ) n is thought of as a mapping N ∋ n → x n . Limits are denoted by We also write x ι ι → lim ι x ι or (x ι ) ι → lim ι x ι . We will leave it to the context to stress the particular topology. Unless explicitly expressed otherwise d is a positive integer. If there is a risk of ambiguity, we will add a subscript to the scalar products ·, · and norms · to identify the particular Hilbert or Banach space the computations are carried out. The closed unit ball of a Banach space Z is written as (unit ball) The support of f will be denoted by spt f . (support) We will also use the standard notation to denote Sobolev spaces, Lebesgue spaces, spaces of continuous functions etc. We shall also use self-explanatory notation as in R >0 , R 0 , C Re>0 , etc. A definition will be ended by △, a remark by ⋄, the end-of-proof symbol is . Time-shift Invariant Operators In this chapter, we shall present the basic results needed for the remaining parts. 
In particular, we introduce the (time) derivative operator on weighted vector-valued L 2type spaces. We provide an explicit spectral representation for the time derivative via the Fourier-Laplace transformation. Later on, we present a well-known representation result for time-shift invariant, causal operators. In the setting presented in this exposition, time-shift invariant, causal operators are equivalently described by bounded operator-valued, analytic functions of the time derivative. As such, time-shift invariant causal operators form an important class of so-called evolutionary mappings to be introduced in the next chapter. The Time Derivative time derivative · Young's inequality · Fourier transformation · Fourier-Laplace transformation · Theorem 1.1.6 · Theorem 1.1. 11 We start out with the definition of the time derivative. For this, we recall the L 2 -variant of the Morgenstern norm known from the classical proof of the Picard-Lindelöf theorem (see [Mor52]): Let ν ∈ R. We define L 2 ν (R) := { f ∈ L 2 loc (R); t → e −νt f (t) ∈ L 2 (R)}. Hence, L 2 ν (R) is a Hilbert space. Henceforth, we will encounter Hilbert space valued L 2 ν -functions. That is, for a Hilbert space X, we let L 2 ν (X) := L 2 ν (R; X) := L 2 (λ ν ; X) Note that L 2 ν (R; X) and L 2 0 (R; X) = L 2 (R; X) are unitarily equivalent. Indeed, the mapping of multiplication by t → e −νt , denoted by m ν , is the desired unitary operator from the weighted to the unweighted space: Take f ∈ L 2 ν (R; X) and compute Time-shift Invariant Operators Realizing that m ν is a bijection on compactly supported measurable functions, we infer that the range of m ν is dense in L 2 (R; X). Thus, m ν is unitary. We recall the Sobolev space H 1 (R; X) of Hilbert space X-valued L 2 -functions with distributional derivative representable as L 2 -function. The exponentially weighted version of this Sobolev space can be defined in two ways: either as the unitary image of H 1 or as the space of L 2 ν -functions with distributional derivative lying in L 2 ν : Proposition 1.1.1 Let ν ∈ R, X Hilbert space. Then . Proof We compute Remark 1.1. 5 We shall put Theorem 1.1.4 into an operator-theoretic perspective. The inequality asserted is the same as saying that the operator of convolution with h is a continuous mapping in L 2 ν (R; X) with operator norm bounded by h L 1 (λ ν ) . Later on, we employ this fact, when we recall the convolution theorem in the context of the Fourier transformation. Next, we provide the result of bounded invertibility of the time derivative, which may be dated back to [Pic89]. Proof To prove that the integrals in (1.1) exist and that the right-hand side of (1.1) defines an element in L 2 ν (R; X), we use Theorem 1.1.4. In fact, for ν > 0, it is sufficient to observe that h : Once proved that h * = ∂ −1 t,ν , the estimate for the operator norm follows from Theorem 1.1.4. Next, by Fubini's Theorem the distributional derivative of g := t → t −∞ f (s) ds equals f , so ∂ t,ν • (h * ) = id L 2 ν . Another application of Fubini's Theorem yields that for any f ∈ dom(∂ t,ν ) we have The main observation needed for the proof of Theorem 1.1.6 is (1.2) Exploiting equation (1.2) a bit further, we will move on proving an explicit spectral theorem for ∂ −1 t,ν , that is, we will show that ∂ −1 t,ν is unitarily equivalent to a multiplication operator. The unitary operator yielding the spectral representation involves the Fourier transformation, which will be defined next. Definition 1.1.7 Let X Hilbert space. 
We define the Fourier transformation as the continuous extension of the operator given by for all integrable measurable functions φ : R → X. △ Remark 1.1.8 (a) We recall that Plancherel's theorem (for Hilbert space valued functions) states that F is not only continuous but unitary. (b) For f ∈ L 1 (R) and g ∈ L 2 (R; X) the so-called convolution theorem holds, that is, In fact, the equality is an application of Fubini's theorem for f ∈ L 1 (R) and g ∈ L 1 (R; X) ∩ L 2 (R; X). In particular, for f ∈ L 1 (R) and all We read off that the mapping is continuous. Moreover, the latter mapping coincides with the continuous map all f ∈ L 1 (R) and g ∈ L 2 (R; X). ⋄ Being mainly interested in the case of exponentially weighted L 2 -spaces, we introduce the Fourier-Laplace transformation L ν as an operator from L 2 ν (R; X) to L 2 (R; X), X Hilbert space, ν ∈ R, as follows As a composition of unitary operators, L ν is unitary itself. In the proof of Theorem 1.1.6, we realized that ∂ −1 t,ν can be written as a convolution. From Remark 1.1.8(b) we get that convolutions are multiplication operators on the Fourier transformed side. Putting these two facts together, we get the desired spectral representation for ∂ t,ν as multiplication operator with a certain function. For stating the result, we introduce T h the multiplication operator of multiplying by some measurable function h : R → C in L 2 (R; X), X a Hilbert space. Corollary 1.1.9 Let X Hilbert space, ν ∈ R \ {0}. Then the equality where Tĥ ν is the operator of multiplication by the functionĥ ν : R ∋ ξ → 1/(iξ + ν). Time-shift Invariant Operators Proof Recall from equation (1.2), the equality ∂ −1 t,ν g = h * g for g ∈ L 2 ν (R; X) with h = 1 [0,∞) for ν > 0 or h = −1 (−∞,0] for ν < 0. In either case we have √ 2πFm ν h =ĥ ν . Hence, with m −ν F * = (Fm ν ) * = L * ν and using the convolution theorem (Remark 1.1.8(b)) we get for g ∈ L 2 ν (R; X): Remark 1.1.10 Note that Corollary 1.1.9 also contains the spectral representation for ∂ t,ν . Indeed, the operator L ν being unitary it suffices to observe T −1 A consequence of the latter formula is ⋄ With the help of Corollary 1.1.9 we can define a functional calculus for the operator ∂ −1 t,ν . Before properly defining the functional calculus, we analyze the spectrum of ∂ −1 t,ν first. We restrict ourselves to the case ν > 0. Moreover, note that it suffices to state the scalar case only, as the respective operator defined on X-valued L 2 ν -functions has the same spectrum as in the scalar-valued case as long as X is at least one-dimensional. We denote by B(a, δ) the open ball in C centered at a ∈ C with radius δ > 0. The boundary of an open set Ω in some underlying topological space is denoted by ∂Ω. The spectrum of ∂ −1 t,ν , ν > 0, is as follows. A Representation Theorem operator-valued functions of time derivative · causal continuous operators · translation-invariance · time-shift invariance · Paley-Wiener Theorem · Hardy-Lebesgue space · Corollary 1.2.5 In order to put the applications to be discussed later on into a broader perspective, we introduce operator-valued functions of ∂ −1 t,ν . For this, in principle, it would be sufficient to consider functions with domain ∂B(r, r), r = 1/(2ν). We will, however, also address causality in our solution theory, which necessitates the consideration of a somewhat different class of functions. We shall provide a proper definition of the notion of causality next. 
Afterwards, with the help of a well-established representation theorem, we motivate this specific class of (operator-valued) functions of ∂ −1 t,ν . We highlight an important property of this class (Remark 1.2.4), which will serve as the basis for the definition of evolutionary mappings. . Indeed, the necessity being trivial, we show the sufficiency next. Let t ∈ R, and let f ∈ L 2 ν (R; X). Then Thus, Mτ t P t f ∈ L 2 ν (0, ∞; Y) and, hence, So, MP t f = P t MP t f yielding the assertion. ⋄ Next, we give a well-known representation theorem for translation-invariant, causal mappings. We adopt the strategy given in [Wei91] for the proof. . Then there is a unique, bounded and analytic function M : C Re>0 → L(X, Y) with the following property: For any u ∈ L 2 (0, ∞; X) we have (1.5) Remark 1.2.4 In the situation of Theorem 1.2.3 define for ν > 0 the function M ν : R → L(X, Y) by M ν (ξ) := M(iξ + ν), ξ ∈ R. Then, using our convention for multiplication operators, we realize that equation (1.5) can be written as (1.6) Equivalently, equation (1.6) may be written as Next, as the left-hand side is translation-invariant, so is the right-hand side, which yields equality for all u ∈ L 2 c (R; X), that is, for compactly supported L 2 -functions. Furthermore, by the boundedness of M, the expression L * ν T M ν L ν defines a bounded linear operator from L 2 ν (X) to L 2 ν (Y). Thus, M defined on L 2 c (R; X) admits a unique continuous extension as an operator on L 2 ν (R; X) and as such, it coincides with L * ν T M ν L ν . Since for any u ∈ L 2 ν (X) ∩ L 2 µ (X) with µ, ν ∈ R we can choose a sequence (u n ) n∈N in L 2 c (R; X) converging in both the spaces L 2 ν (R; X) and L 2 µ (R; X) to u (e.g. take φ = 1 [−1,1] and set 1.2 A Representation Theorem u n := φ(·/n)u, n ∈ N), the respective extensions of M to L 2 ν (R; X) and L 2 µ (R; X) coincide on L 2 ν (R; X) ∩ L 2 µ (R; X), ν, µ > 0. Write M ν for the closure of M as an operator in L 2 ν . Thus, equation (1.6) finally implies We will discuss the property for operators of being continuously extendable to the spaces L 2 ν for all sufficiently large ν > 0 in the context of evolutionary mappings later on. For a proof of the uniqueness statement, let M 1 : C Re>ν → L(X) be analytic, bounded and such that (1.8) holds with M replaced by M 1 . Then, the mapping N 1 (iξ + η) := M 1 (iξ + η + ν) for all η > 0 satisfies (1.9) with N replaced by N 1 . By the uniqueness statement of Theorem 1.2.3 it follows that N 1 = N and, hence, M 1 = M. The norm estimate follows from the unitarity of the Fourier-Laplace transformation. endowed with the obvious norm. Then H 2 (X) is a Hilbert space and the mapping A converse to Corollary 1.2.5 reads as follows. Theorem 1.2.7 Let X, Y Hilbert spaces, ν ∈ R. Let G : C Re>ν → L(X, Y) be bounded and analytic. Then, for any µ > ν the operator given by defines a causal, translation-invariant, bounded, linear operator on L 2 µ (R; X). Proof Addressing translation-invariance first, we realize that for any h ∈ R, we have Next, we prove causality. By Remark 1.2.2, it suffices to show that L * µ T G µ L µ leaves functions supported on [0, ∞) invariant. For this, take f ∈ L 2 µ (0, ∞; X). Then, by definition, m µ f ∈ L 2 (0, ∞; X). Hence, using the Paley-Wiener Theorem 1.2.6, we infer that belongs to the Hardy-Lebesgue space H 2 (X). 
The boundedness and analyticity of G implies that T G µ maps H 2 (X) into H 2 (Y) in the sense that if g ∈ H 2 (X) then A Representation Theorem Again referring to Theorem 1.2.6, we get F * (T G µ (F(m µ f ))) ∈ L 2 (0, ∞; Y), or, equivalently, which yields the desired invariance property for the operator under consideration. The boundedness of L * µ T G µ L µ is obvious from the boundedness of G. Remark 1.2.8 Let ν > 0. By Lemma 1.1.12, the sets C Re>ν and B(r, r) with r = 1/(2ν) are biholomorphically mapped to one another by z → 1/z. Hence, the conclusion in Corollary 1.2.5 might also be written as that (apart from continuous extendability of M) there exists a unique bounded analytic M : B(r, r) → L(X) satisfying for all u ∈ L 2 µ (R; X), or, with Remark 1.2.4, ⋄ Next, we prove Theorem 1.2.3 along the lines of [Wei91]. The first preparatory result is the following lemma. For s ∈ C we put Lemma 1.2.9 ([Wei91, Remark 1.1]) Let s ∈ C Re>0 , z ∈ L 2 (0, ∞). Assume that for all h 0, we get τ h z = e −sh z on [0, ∞). Then there is a unique a ∈ C such that z = ae −s . Proof At first note that z ∈ L 1 (0, ∞). Indeed, we compute With the help of the dominated convergence theorem, we realize that The author is indebted to Sascha Trostorff for providing the latter short proof. which yields the assertion. Assume that for all h 0, we get τ h z = e −sh z on [0, ∞). Then there exists a unique v ∈ X such that z = e −s v. . The Cauchy-Schwarz inequality implies the boundedness of the sesquilinear mapping Φ with bound 1. Furthermore, note that for h ∈ R 0 , we have Hence, by Lemma 1.2.9, for any w ∈ X there exists a w ∈ C such that Φ(w, z) = a w e −s . By Lemma 1.2.10 applied to Φ(·, z) in the position of Φ and a (·) in place of v (·) , we infer that v : X ∋ w → a w defines a bounded, anti-linear functional. We identify v ∈ X with its Riesz image. It now remains to show z = e −s v. For this, we observe for all w ∈ X the equality for a.e. t ∈ R. Next, we observe for α ∈ L 2 (0, ∞), w ∈ X, y ∈ L 2 (0, ∞; X) In particular, putting α = 1 E for some E ⊆ [0, ∞) bounded, measurable, and y = z, we get for w ∈ X with the help of equation (1.13) As the set {1 E w; E ⊆ [0, ∞) bounded, measurable, w ∈ X} is separating for the space L 2 (0, ∞; X), the assertion follows. We are now in the position to prove Theorem 1.2.3. Proof (of Theorem 1.2.3) We note that as an operator in L 2 (R; X) we have that τ * −h = τ h for all h ∈ R. Thus, as M is translation-invariant, so is M * . In particular, for w ∈ Y and h ∈ R 0 denoting by P t the multiplication by 1 [t,∞) , we get Thus, Hence, by Lemma 1.2.11 applied to z = P 0 M * e −s w, for s ∈ C Re>0 and for all w ∈ Y there exists a unique v w ∈ X with P 0 M * e −s w = e −s v w . Applying Lemma 1.2.10 to Φ(w) = P 0 M * e −s w, we get that G(s) : w → v w is continuous for all s ∈ C Re>0 . In particular, note that we get for all w ∈ Y, s ∈ C Re>0 , which implies the boundedness of C Re>0 ∋ s → G(s) ∈ L(Y, X). Time-shift Invariant Operators Next, let u ∈ L 2 (0, ∞; X). We observe for s = iξ + ν ∈ C Re>0 and w ∈ Y by the first part of the proof that (1.14) Setting M(iξ + ν) := G((iξ + ν) * ) * , we infer the equality asserted in the theorem to be proved. For analyticity, we realize by putting u = e −1 v for some v ∈ X into (1.14) and using L ν e −1 (ξ) = 1 Since u is supported on [0, ∞), thus, so is Mu by hypothesis. Hence, the mapping C Re>0 ∋ s → e −s * w, Mu L 2 (Y) is analytic, by Lebesgue's dominated convergence theorem. Thus, the right-hand side of (1.15) is analytic in s. 
As this argument applies to all w ∈ Y, v ∈ X, the mapping M is analytic on the L(X, Y)-norming set Comments As already mentioned the idea of discussing the time derivative in weighted spaces dates back to (at least as early as) the work of Morgenstern ([Mor52]), where an exponentially weighted norm on the space of continuous functions was used to deduce existence and uniqueness of solutions of ordinary differential equations. Since then many people studied differential equations in weighted spaces. However, the core idea of discussing the derivative operator with this particular weight has been less prominent. In the late 1980's Rainer Picard ([Pic89]) studied integral transforms and their relation to explicit spectral theorems for differential operators. In these studies it then turned out that the Fourier-Laplace transformation (see (1.3)) is the unitary transformation realizing the spectral representation as a multiplication operator for the derivative on the weighted L 2 -type spaces L 2 ν (R; X), ν ∈ R. We will demonstrate that the freedom in the choice of the parameter ν together with the estimate ∂ −1 t,ν 1/|ν| yields an easily accessible way of discussing ordinary differential equations in a L 2 -type setting. We sketch a solution theory for possibly nonlinear ordinary differential equations as follows. Let F : C d → C d be Lipschitz continuous, 1.3 Comments with the property F(0) = 0. Then for all ν ∈ R, it is easy to see that the Nemitskiioperator N F induced by F, that is, is Lipschitz continuous as well. Next, let f ∈ L 2 (0, ∞; C d ) ⊆ ν>0 L 2 ν (R; C d ), we want to find u : R → C d , which is locally weakly differentiable such that (1.16) The latter equation interpreted in L 2 ν (R; C d ) is the same as Thus, multiplying by ∂ −1 t,ν , we get which is a fixed point problem for u ∈ L 2 ν (R; C d ). Once we show that there exists ν > 0 such that Φ ν is a strict contraction, the existence of a weakly differentiable function u for (1.16) is warranted. But, if |F| Lip is a Lipschitz constant for F, then |F| Lip is also a Lipschitz constant for N F . Hence, for u, v ∈ L 2 ν (R; C d ), we obtain Therefore, choosing ν large enough yields that Φ ν is a strict contraction and (1.16) can be solved with the help of the contraction mapping theorem. The application presented is a first of many others. In fact, it is possible to extend these ideas in such a way that it leads to a unified solution theory for delay differential equations, that include discrete and continuous delays as well as neutral equations, see [KPS + 13]. This line of ideas has further applications. Indeed, starting out from the observation that ∂ −1 t,ν is the convolution with the Heaviside function (see Theorem 1.1.6), it is possible to extend this unified way of looking at problems of the ordinary delay differential type to a Banach space setting, see [PTW14a]. Another class tractable with this approach is the class of so-called (ordinary) integrodifferential equations, that is, equations where the right-hand side of (1.16) is replaced by some integral expression involving u. A prominent example are convolutions with respect to the time variable. These convolutions may be written as a multiplication by some function in Fourier space by the convolution theorem, see Remark 1.1.8. Moreover, they can be represented as certain functions of ∂ −1 t,ν as in (1.12) in combination with Corollary 1.1.9, that is, in the form 1 Time-shift Invariant Operators for some appropriate M. 
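The contraction argument just sketched can also be traced numerically. The following is a minimal sketch of our own (not taken from the text): we iterate the fixed-point map Φ_ν(u) = ∂_{t,ν}^{-1}(N_F(u) + f) for F = sin on a grid and observe that successive differences contract in the L²_ν-norm by a factor of at most |F|_Lip/ν = 1/ν. The choice F = sin, the right-hand side, the value ν = 4 and the discretization of ∂_{t,ν}^{-1} by a cumulative Riemann sum are assumptions of the sketch.

```python
import numpy as np

# Numerical sketch of the fixed-point iteration u -> Phi_nu(u), where Phi_nu(u)
# is the causal integral of F(u) + f; F = sin has Lipschitz constant 1 and
# F(0) = 0, so successive differences contract in the L^2_nu norm by a factor
# of at most 1/nu.

nu = 4.0                                   # weight parameter (our choice)
t = np.linspace(-2.0, 20.0, 2**13)
dt = t[1] - t[0]
w = np.exp(-2*nu*t)                        # weight of the L^2_nu norm

def wnorm(v):
    return np.sqrt(np.sum(w*np.abs(v)**2)*dt)

f = np.where(t >= 0, np.cos(t), 0.0)       # right-hand side supported on [0, oo)

u, prev = np.zeros_like(t), None
for k in range(8):
    u_new = np.cumsum(np.sin(u) + f)*dt    # Phi_nu(u): causal integration
    diff = wnorm(u_new - u)
    if prev:                               # prev is None or a positive float
        print(f"step {k}: contraction factor {diff/prev:.3f}  (1/nu = {1/nu})")
    u, prev = u_new, diff
```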
We refer to [Tro15b,PTW15a,Wau14b] for a thorough treatment of this type of functions of ∂ −1 t,ν and applications to partial differential equations. A specific integral operator is the fractional time derivative ( [PTW15a,Wau14b]), ∂ α t,ν , which also falls into the class of operators that admit a representation as in (1.17) for some bounded, analytic function M. Indeed, for α ∈ (−∞, 0] the operator ∂ α t,ν is translation-invariant and causal by Theorem 1.2.7. A detailed look at ∂ α t,ν shows that, in fact, if applied to functions being supported on (0, ∞) only, the resulting operator is the Riemann-Liouville fractional derivative, see also [PTW15a,p 3143] or [Tsa14]. It should be noted that this fractional derivative has also been discussed with regards to numerical analysis, see [Rub15]. The representation theorem proved in this chapter, Theorem 1.2.3, has major applications in control and system theory. It is used in the representation theory for shiftinvariant causal systems and is related to the description of the input-output relation of control systems by means of their so-called transfer function. We refer to the references in [Wei91] for a more detailed account of that. Evolutionary Mappings and Causality In this chapter we provide the notion of evolutionary mappings. With regards to applications discussed later on, it is of interest whether the operators considered in certain L 2 ν -type spaces are actually "independent" of ν: Given an operator S, which is densely defined and continuous in both the spaces L 2 ν (R) and L 2 µ (R) for µ = ν, it is a priori unclear, whether the respective continuous extensions, denoted by S ν and S µ coincide on the intersection of their respective domains, that is, on L 2 ν (R) ∩ L 2 µ (R). The most prominent examples of such S are solution operators to certain (abstract) partial differential equations. Defining evolutionary mappings in Section 2.1, we will obtain the desired independence result upon assuming one additional property for S. The additional property is (forward) causality, which is defined in Section 2.3 for evolutionary mappings. Evolutionary Mappings standard causal domains D(X), D ν (X) · standard evolutionary mappings In Remark 1.2.4, we pointed out an important property of causal, translation-invariant mappings, namely the property of being extendable to continuous operators on L 2 µ for all µ > ν. Though lacking the property of translation-invariance, multiplication operators are still causal and enjoy the same property of being extendable to the L 2 νscale for all ν ∈ R: Example 2.1.1 Let X, Y Hilbert spaces, ν ∈ R, and let M : R → L(X, Y) be strongly measurable and bounded. Then Assume, in addition, that M is strongly differentiable, with bounded derivative, that is, for u ∈ X the mapping t → M(t)u is differentiable and t → (M(·)u) ′ (t) is bounded for every u. As a pointwise limit of measurable mappings, the Moreover, by the mean-value inequality, this mapping is closed and continuous by the closed graph theorem. Thus, the mapping Note that the operator norm of the multiplication operator introduced in Example 2.1.1 is independent of the chosen ν ∈ R. We recall that causal, translation-invariant mappings have an operator norm being decreasing in ν. In fact, this is the upshot of Corollary 1.2.5. Given an operator acting on the whole scale (L 2 ν (R)) ν∈R , the operator norm need not be decreasing in general, as the following example shows. 
⋄ We raise the idea of extendability to an operator acting continuously on L 2 ν (R; X) for all ν large enough with operator norm "fairly independent" of ν to the main definition of this section: Definition 2.1.3 (evolutionary mappings) Let X, Y Hilbert spaces, ν ∈ R. We call a linear mapping for all µ ν and is such that The continuous extension of S to some L 2 µ will be denoted by S µ , and, if there is no risk of confusion, we will re-use the notation S. We set L ev,ν (X, Y) := {S; S is as in (2.1) and is evolutionary at ν}; L ev,ν (X) := L ev,ν (X, X). △ Note that L ev,ν (X, Y) ⊆ L ev,µ (X, Y) for all µ ν. Evolutionary Mappings Evolutionary mappings will play the central role throughout this exposition. In applications, constitutive relations or material laws can be realized as evolutionary mappings. Moreover, we will show that for given (ordinary/partial) differential equations modeling physical processes the respective solution operators will be evolutionary mappings itself. So, it is of interest to study sum, product and inverses of evolutionary mappings and to address the question, whether the resulting operators are evolutionary again. However, we want to stress two subtleties in this context. (a) If we are given an evolutionary mapping S, its adjoint S * is, in general, not evolutionary again: Indeed, acting on different Hilbert space, it is a priori unclear what operator an adjoint of S would be. So, in fact, the adjoint of S computed in L 2 ν (R; X) and L 2 µ (R; X) for both µ, ν large enough, µ = ν, might differ from one another. An example is the time-shift again, see Example 2.1. The sum of two evolutionary mappings need not be densely defined any more. Indeed, take the time-shift τ h for h < 0. Consider τ (1) h and τ (2) h as the operator acting as τ h but with Then, by the density of (linear combinations of) Hermite functions in L 2 (R), it is easy to see that dom(τ Also note that only the zero element in dom(τ (2) h ) is compactly supported: For this let g ∈ dom(τ (2) h ) have compact support. Then there are polynomials p 1 , . . . , p n : R → C and real numbers −∞ < δ 1 < · · · < δ n < ∞ with For t ∈ R large enough, setting q j := e −δ 2 j /2 p j , j ∈ {1, . . . , n}, we get Hence, as t → ∞ we have ∑ n j=2 q j (t)e −(δ j −δ 1 )t → 0 and, thus, q 1 (t) = 0. Continuing in this manner, we infer q 2 = · · · = q n = 0. Thus, g = 0. But, all functions in L 2 c (R) are compactly supported, and so dom(τ In order to circumvent the last problem, we will seek a possibility to endow an evolutionary mapping with a standard domain. It turns out that this can be done, if we assume causality for the evolutionary mapping under consideration. However, note that in Definition 1.2.1, we have defined causality for closed continuous mappings only. The definition of an adapted version of causality for closable operators is postponed to Section 2.2. But, a first link of evolutionarity and causality can be given right away. Remark 2.1.5 Let X, Y Hilbert spaces, ν ∈ R, S ∈ L ev,ν (X, Y) and assume that for all t ∈ R the set dom(SQ t ) ∩ dom(S) is dense in L 2 µ (R; X 0 ) for µ ν 1 , where Q t is the operator T 1 (−∞,t] of multiplication by 1 (−∞,t] . Then S µ is causal for all µ ν: Since L ev,ν ⊆ L ev,µ , it suffices to prove that S ν is causal. For this, let f ∈ L 2 ν (R; X), t ∈ R and assume that In particular, the latter implies that g n := P t f n approximates f in L 2 µ (R; X) for all µ ν. Now, we follow the idea of [KPS + 13, Proof of Theorem 4.5]. 
For this let φ ∈ L 2 c (R; Y) with support bounded above by t. For µ ν we get that ⋄ The key observation of the latter remark is that certain conditions on the domain of evolutionary mappings result in the causality of the closure of these mappings. A prototype of such a domain is given next. Definition 2.1.6 Let X, Y Hilbert spaces, ν ∈ R. Then the set is called standard causal domain (at ν); we set D(X) := ν∈R D ν (X). We call a map S ∈ L ev,ν (X, Y) standard evolutionary (at ν), if dom(S) is the standard causal domain (at ν). We define the set of all standard evolutionary mappings L sev,ν (X, Y) := {S; S standard evolutionary at ν}, Standard evolutionary mappings are closed under vector space operations and composition: Proposition 2.1.7 Let X, Y, Z Hilbert spaces, ν ∈ R, S, T ∈ L sev,ν (X, Y), U ∈ L sev,ν (Y, Z), α ∈ C. Then Proof The statement in (a) is easy. For (b), we observe that if f ∈ D ν (X) = dom(S), then, by the evolutionarity of S, S f ∈ µ ν L 2 ν (Y) = D ν (Y) = dom(U). The remaining norm estimate follows from the submultiplicativity of the operator norm. We will elaborate more on evolutionary mappings once we discussed causality for closable mappings in the next section. Causality for Closable Mappings resolution space · causal · characterization of densely defined operators with causal closure · strongly causal · Theorem 2.2.9 In order to motivate the upcoming notion of causality, we recall that for a Hilbert space X, and some ν ∈ R, we say a mapping S ∈ L(L 2 ν (R; X)) is causal (see Definition 1.2.1), if where P t is the operator of multiplication by 1 (t,∞) . If now S is defined on a proper domain in L 2 ν (R; X) the way of defining causality just mentioned has the drawback that dom(SP t ) may only consist of the 0 function, see (2.3) for a possible domain. Hence, every continuous mapping endowed with such a domain would be causal, if (2.4) characterized causality also for closable mappings. Anticipating the latter, we seek a different notion of causality, which for closed, continuous maps yields the same. For this, we discuss (2.4) in more detail: Introducing Q t := 1 − P t , we get that, equivalently to (2.4), for all t ∈ R the equality or, for all f ∈ L 2 ν (R; X), the implication Yet another reformulation of (2.6) or (2.5) is that for all φ ∈ B L 2 ν (X) , B L 2 ν (X) the unit ball of L 2 ν (X), there exists C 0 such that for all f ∈ L 2 ν (R; X) we have (2.7) Indeed, the latter estimate follows from (2.5) and implies (2.6), which, in turn, implies (2.5). The continuity estimate in (2.7), however, is the starting point for defining causality for closable mappings. Beforehand, we introduce the concept of a resolution space. Definition 2.2.1 ( [Sae70]) Let X be a Hilbert space, let (Q t ) t∈R in L(X) be a resolution of the identity, that is, for all t ∈ R the operator Q t is an orthogonal projection, ran(Q t ) ⊆ ran(Q s ) if and only if t s and Q t converges in the strong operator topology to 0 and 1 if t → −∞ and t → ∞, respectively. The pair (X, In what follows we provide the notion of causality for closable mappings. We stick to the linear case here. For a possible way to define the respective concept for non-linear mappings as well, we refer to [Wau15]. is Lipschitz continuous for all φ ∈ D, r > 0. Theorem 2.2.4 also admits a generalization to the Banach space case. In this exposition, however, it is sufficient to consider the Hilbert space case only. 
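A finite-dimensional sketch (ours) records a standard formulation of the causality test consistent with the reformulations (2.4)–(2.7) above: with Q a projection onto the components of a grid function up to some cut-off time, a matrix S is causal precisely when QSQ = QS for every such cut-off. A convolution with a kernel supported on [0, ∞) passes the test; its time-reversed counterpart does not.

```python
import numpy as np

# Finite-dimensional sketch of the causality test: Q projects a grid function
# onto its components below a cut-off index; a matrix S is causal iff
# Q S Q = Q S for every cut-off.  A convolution with a kernel supported on
# [0, infinity) is causal, the time-reversed kernel is not.

n = 200
t = np.linspace(0.0, 10.0, n)
dt = t[1] - t[0]

S_causal = np.tril(np.exp(-(t[:, None] - t[None, :])))*dt   # kernel on [0, oo)
S_anti = np.triu(np.exp(-(t[None, :] - t[:, None])))*dt     # kernel on (-oo, 0]

def causality_defect(S):
    defect = 0.0
    for k in range(1, n, 20):
        Q = np.diag((np.arange(n) < k).astype(float))       # cut-off projection
        defect = max(defect, np.linalg.norm(Q @ S @ Q - Q @ S))
    return defect

print("defect, causal operator    :", causality_defect(S_causal))   # 0
print("defect, anticausal operator:", causality_defect(S_anti))     # > 0
```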
The somewhat more involved version of Theorem 2.2.4 (including an adapted version of causality) for the general Banach space case can be found in [Wau15]. For the proof of Theorem 2.2.4, we need some prerequisites. Note that the next lemma has already been proven in the first few lines of this section for the case of continuous S with dom(S) = X: Lemma 2.2.5 ([Wau15, Lemma 1.9]) Let (X, (Q t ) t ) and (Y, (R t ) t ) be resolution spaces. Let S : dom(S) ⊆ X → Y linear and closed. Then the following assertions are equivalent: with Q t f = 0 for some t ∈ R. By hypothesis, for all t ∈ R and φ ∈ D, we find C 0 such that For the sufficiency of (i) for (ii), we show that S violates the condition stated in (i) provided S is not causal. For this, let r > 0, t ∈ R, φ ∈ X and ε > 0 such that for all By boundedness of ( f n ) n and (S f n ) n , there exists a subsequence (n k ) k of (n) n , such that ( f n k ) k , and (S f n k ) k weakly converge. By linearity and closedness of S, S is weakly closed. Hence, we deduce that f := w-lim k→∞ f n k ∈ dom(S) and w-lim k→∞ S f n k = S f . By (weak) continuity of Q t we get we read off that S does not satisfy (i). In particular, we have |Q t ( f − g)| Q t ε. Assuming the validity of (i), we see that is Lipschitz continuous on the dense subset B S (0, r) for all φ ∈ D. This implies (ii), see also Remark 2.2.3. The converse is trivial. Proof (of Theorem 2.2.4) By Lemma 2.2.5, condition (i) is equivalent to causality of S and to causality of S on some dense set, the latter two properties are, in turn, equivalent to causality of S (condition (ii)) and causality of S on some dense set D (condition (iii)), respectively, by Lemma 2.2.6. For applications discussed later on, we give an instant of an example for closable causal mappings. for some c > 0 and all f ∈ dom(S). Then S −1 defines a linear mapping and is causal. Furthermore, S −1 satisfies the inequality Proof First of all, we verify that S is one-to-one. For this, we have to verify that Letting t → ∞, we infer f 2 = 0. We are left with showing (2.8), which is also sufficient for causality of S −1 . For this, let t ∈ R, φ ∈ X, g ∈ ran(S). We compute, using the hypothesis, Hence, In view of the applications to follow, the inequality that has been shown for S −1 is often satisfied. That is why, we introduce a concept slightly stronger than causality. The next theorem is the reason, why the notion just introduced is so important for applications. Namely, for continuous mappings, causality and strong causality are the same. Theorem 2.2.9 Let (X, (Q t ) t ), (Y, (R t ) t ) resolution spaces, S : dom(S) ⊆ X → Y densely defined, linear, continuous. Then the following conditions are equivalent: Proof Using that dom(S) = X as S is continuous and densely defined, the equivalence of (i), (ii), and (iii) has been established in Theorem 2.2.4. The equivalence of (v) and (vi) is an easy computation. Moreover, linearity of S implies that (i) is necessary for Next, clearly, (iv) is sufficient for (ii) and, assuming (v), we get for all f ∈ X which implies (iv). Causality for Evolutionary Mappings causal evolutionary mappings are standard evolutionary · closures of standard evolutionary mappings are independent of ν · standard evolutionary mappings form a vector space · closable evolutionary · criterion for inverses being standard evolutionary · bounded sets of evolutionary mappings · Theorem 2.3.12 In this section, we combine the results from Sections 2.1 and 2.2. 
An important result of this section is Theorem 2.3.4 together with Proposition 2.3.7, that is, roughly speaking, • causal evolutionary mappings are essentially the same as standard evolutionary mappings, (Theorem 2.3.4) • the closure of a causal, evolutionary mapping is widely independent of the exponential weight (Proposition 2.3.7). To begin with we define causality for evolutionary mappings. Proof As S is evolutionary, we are in the position to apply Theorem 2.2.9 to S considered as a mapping from L 2 Thus, using that multiplication by the exponential function is a bijection on L 2 c (R; Y), we infer from (2.9) for all t ∈ R, φ ∈ L 2 c (R; Y), there exists C 0 such that Recall that the condition of being standard evolutionary, that is, being evolutionary with the standard causal domain D ν (X) = µ ν L 2 µ (X) as underlying domain of definition, results in causality: Proof By Remark 2.1.5, S ν (the closure of S in L 2 ν ) is causal for all ν large enough. Hence, the assertion follows from Proposition 2.3.2 and Theorem 2.2.9. Next, we will seek to prove the following converse of the latter proposition. Moreover, for all µ ν: We remark here that Theorem 2.3.4 implicitly asserts that T defines a right-unique relation, that is, a mapping. Theorem 2.3.4 also serves as a justification for treating standard evolutionary mappings later on, only. Furthermore, in view of Theorem 2.3.4, we shall even employ the custom to consider causal evolutionary mappings and standard evolutionary mappings as synonymous. The first step for the proof of Theorem 2.3.4 is to show that T is right-unique. As both the left-hand and the right-hand side of the latter equality converge in L 2 loc (X), their respective limits coincide. So, Since the rationale presented applies to all t ∈ R, the claim is proved. The respective continuous extensions S ν and S µ are causal, by Proposition 2.3.2 and Theorem 2.2.9. S ν and S µ coincide on dom(S) and, thus, on the intersection of the respective domains, by Lemma 2.3.6. Proof (of Theorem 2.3.4) Proposition 2.3.7 implies that T is well-defined. Note that this also settles evolutionarity of T. In particular, we get that T is standard evolutionary at ν. The last assertion, T µ = S µ , µ ν, can be seen as follows. By definition, for all µ ν Basically, Proposition 2.3.7 asserts that the closures of evolutionary (and causal) mappings do not depend on the particular realization in some L 2 ν , that is, on the exponential weight parametrized by ν. Theorem 2.3.4 contains the prototype of domains causal evolutionary mappings may be endowed with. Another consequence of Theorem 2.3.4 is the inclusion L sev,ν ⊆ L sev,µ , µ ν, in the following sense: is the standard realization of S considered as evolutionary at µ. In the sense of Corollary 2.3.8, any standard evolutionary mapping at ν may be considered as standard evolutionary at µ ν, we shall do so in the following. This custom enables us to ease the formulations of several statements. For instance, a reformulation of Proposition 2.1.7 reads as follows. In order to develop a solution theory for certain differential equations, apart from the Hadamard requirements of unique existence of solutions that depend continuously on the data, we ask for causality of the solution operator. Moreover, the solution operator should be widely independent of the exponential weight, which results in the requirement of evolutionarity for the solution operator. 
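The weight independence asserted in Proposition 2.3.7 can be observed numerically. The following sketch is ours; the concrete operator (1 + ∂_t)^{-1}, the right-hand side and the two weights are assumptions. The same causal, translation-invariant solution operator is realized through the Fourier-Laplace multiplier at two different weights, and the results are compared on a bounded window, where round-off is not amplified by the inverse weight.

```python
import numpy as np

# Numerical sketch: the causal solution operator (1 + d/dt)^{-1}, realized via
# the Fourier-Laplace multiplier 1/(1 + i xi + weight), is computed for two
# different exponential weights; for a right-hand side lying in both weighted
# spaces the results agree up to discretization error.

T, n = 40.0, 2**14
t = np.linspace(-T/2, T/2, n, endpoint=False)
dt = t[1] - t[0]
xi = 2*np.pi*np.fft.fftfreq(n, d=dt)

f = np.where(t >= 0, np.exp(-t)*np.sin(2*t), 0.0)    # in L^2_nu for every nu >= 0

def solve(weight):
    g = np.exp(-weight*t)*f
    v = np.fft.ifft(np.fft.fft(g)/(1.0 + 1j*xi + weight))
    return np.exp(weight*t)*np.real(v)               # undo the weight

u_a, u_b = solve(0.5), solve(2.0)
window = np.abs(t) <= 8                               # avoid amplifying round-off
print("max difference of the two realizations:",
      np.max(np.abs(u_a - u_b)[window]))
```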
For (abstract) ordinary differential equations with potentially infinite-dimensional state space X, the solution operator can be computed explicitly as a composition (of inverses) of causal evolutionary mappings, see Chapter 3. The next theorem should be viewed in the context of Proposition 2.3.9. Indeed, with additional regards to Theorem 2.3.4, causal evolutionary mappings are closed under composition and addition. The next theorem complements these statements, by giving a criterion for an inverse being causal and evolutionary. Before giving the precise statement, we need to introduce a slightly more general concept than that of evolutionary mappings. Definition 2.3.10 Let X, Y Hilbert spaces, ν ∈ R. We call ) for functions f with bounded support and assuming values in dom(A) is closable as an operator in L 2 µ , µ ∈ R. Denoting by A µ the respective closure, we see thať is closable evolutionary at ν. We will also just write A forǍ. is closable evolutionary at ν as well. Then S := B −1 is evolutionary at ν and causal. Moreover, for all µ 1 , µ 2 ν, we have that Proof Existence and causality of B −1 as a mapping in L 2 µ (R; X) follow from Proposition 2.2.7. In fact, for all µ ν the inequality holds true. Hence, by letting t → ∞ and computing the supremum over φ with norm 1, we arrive at This, together with the density of dom(S) = ran(B) in L 2 µ (R; X) establishes evolutionarity and causality of S. Let µ 1 µ 2 ν. Then, by Proposition 2.3.2, S ∈ L ev,µ 2 (X) is causal. Hence, S µ 1 coincides with S µ 2 on L 2 µ 1 (X) ∩ L 2 µ 2 (X), by Proposition 2.3.7. The condition on the density of the range of B can be dropped, if B is assumed to be continuous: where Q t is multiplication by 1 (−∞,t) . 43 For the proof of Corollary 2.3.13, it is sufficient to observe that the inequality assumed implies (as t → ∞): Thus, the needed density result for the application of Theorem 2.3.12 follows from the following observation: Proposition 2.3.14 Let X be a Hilbert space, B a densely defined, linear operator in X satisfying for all φ ∈ dom(B) and some c > 0. Then B is closable. Proof We address closability first. The Banach space version of the closability result can be found in [Bey07, Theorem 4.2.5]. Assume B not to be closable. Then we find (φ n ) n in dom(B) converging to 0 ∈ X with the property that (Bφ n ) n converges to some non-zero ψ ∈ X. Without restriction, ψ = 1. By the density of dom(B), there exists ζ ∈ dom(B) with ψ − ζ < 1/2. Hence, ζ > 1/2. Next, for all β > 0 and n ∈ N, we obtain due to (2.10) Letting n → ∞ and afterwards β → 0, we obtain a contradiction, yielding closability. Next, the inequality (2.10) for B implies the same for B (for all φ ∈ dom(B)). Hence, B is one-to-one and the, thus, existing inverse has an operator-norm bounded by 1/c. Moreover, the same inequality implies the closedness of the range of B. Hence, for showing 0 ∈ ρ(B) we are left with showing that B is, in fact, onto. For this, we recall the orthogonal decomposition X = ran(B) ⊕ ker(B * ). So, the inequality assumed for B * implies that B * is one-to-one, and, hence, ker(B * ) is trivial, implying The latter settles both the density of ran(B) in X as well as 0 ∈ ρ(B). The author is indebted to Sascha Trostorff for spotting a flaw and stating a simpler argument compared to an earlier version of the latter proposition. Sebastian Mildner eventually found the source [Bey07], which settled an issue concerning the closability statement. 
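A finite-dimensional analogue of Proposition 2.3.14 is easily checked by machine: if the real part of ⟨x, Bx⟩ is bounded below by c|x|², for B and for its adjoint, then B is invertible with ‖B^{-1}‖ ≤ 1/c. The following sketch (ours; the random skew-Hermitian perturbation is an arbitrary choice) verifies this for a sample matrix.

```python
import numpy as np

# Finite-dimensional sketch of the invertibility estimate: if Re <x, Bx> >= c|x|^2
# for B and for its adjoint, then B is invertible with ||B^{-1}|| <= 1/c.  In C^n
# this means the Hermitian part of B is bounded below by c.

rng = np.random.default_rng(0)
n, c = 6, 1.0
K = rng.standard_normal((n, n)) + 1j*rng.standard_normal((n, n))
B = c*np.eye(n) + (K - K.conj().T)/2          # = c plus a skew-Hermitian part

print("smallest eigenvalue of the Hermitian part:",
      np.linalg.eigvalsh((B + B.conj().T)/2).min())                 # = c
print("||B^{-1}|| =", np.linalg.norm(np.linalg.inv(B), 2), "<= 1/c =", 1/c)
```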
In order to apply the following concept right away at the beginning of Chapter 3, we conclude with some convergence aspects of evolutionary mappings. Later on, we will deal with these issues in more detail. We introduce the notion of boundedness and convergence of standard evolutionary mappings.

Comments

The notion of evolutionary mappings is inspired by the term 'evolutionary' introduced in [PM11]: it is rooted in determining the 'time-like' directions in a given partial differential expression with constant coefficients. For the sake of presentation, we think of a polynomial p in d variables into which the partial derivatives ∂_1, …, ∂_d are formally inserted, leading to

p(∂_1, …, ∂_d) = ∑_α c_α ∂^α,

where we employed multi-index notation and assume that all but finitely many c_α ∈ C are 0. Next, for given f consider the problem of finding u such that

p(∂_1, …, ∂_d)u = f. (2.12)

We might try setting up a solution theory for (2.12). Similar to the ordinary differential equations case in the previous comments section, we seek a solution u in an exponentially weighted space for every variable, that is, in a tensor product space ⊗_{j=1}^{d} L²_{w_j}(R) for w = (w_1, …, w_d) ∈ R^d. So, applying the Fourier-Laplace transformation in each variable and using Remark 1.1.10, we get that (2.12) reads

p(iξ_1 + w_1, …, iξ_d + w_d)û = f̂ (2.13)

for appropriate û and f̂. Therefore, solving for u in (2.12) leads to inverting p(iξ_1 + w_1, …, iξ_d + w_d) for all ξ_1, …, ξ_d ∈ R. To make this procedure well-defined, we ask the symbol to have no zeros. Following [PM11, Definition 3.1.14], we let w ∈ R^d, |w| = 1, and call p(∂_1, …, ∂_d) evolutionary in direction w, if there exists ν ∈ R such that for all µ ≥ ν the polynomial R^d ∋ ξ → p(iξ + µw) ∈ C has no zeros. When discussing so-called 'canonical forms' of differential expressions, the direction of evolutionarity is singled out as the direction of time, see [PM11, Section 3.1.7]. We shall also refer to [PM11, Section 3.1.6], where evolutionarity is discussed in view of the classical classification of partial differential equations into elliptic, parabolic and hyperbolic. In the framework presented in this exposition, the direction of time is already given and modeled by the direction of the real line in the first variable of the space L²_ν(R; X). Thus, similar to [PM11], the remaining variables are thought of as being contained in X, the Hilbert space describing 'spatial coordinates'. Following the introduction of evolutionarity in [PM11], causality of evolutionary partial differential expressions has been discussed as well. The definition of causality is similar to the one in Definition 1.2.1 but formulated for mappings from the space of distributions to the space of distributions (see [PM11], [Pic09]). It rests on the usage of the Fourier-Laplace transformation. For non-autonomous problems this strategy, however, may no longer be applicable. Hence, we developed a framework which enables us to discuss and prove causality without employing the Fourier-Laplace transformation in the general setting of evolutionary mappings discussed here. The question of whether a closable mapping admits a causal closure has been addressed in the time translation-invariant case in the community of control and systems theory, see [JP00]. The method of choice for answering this question is the (Fourier-)Laplace transformation or, for discrete-time settings, the z-transformation. In [Wau15], we gave a possible characterization of operators admitting a causal closure without asking for time-shift invariance. We also developed a Banach space analogue for this characterization.
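Returning to the notion of a polynomial expression being evolutionary in a direction w introduced above, a small grid check (ours; the heat operator and the grid are choices made for the sketch, and a finite grid can only indicate, not prove, the absence of zeros) illustrates the definition for p(∂_1, ∂_2) = ∂_1 − ∂_2² and w = (1, 0):

```python
import numpy as np

# Grid check of evolutionarity in a direction w for the heat operator
# p(d_1, d_2) = d_1 - d_2^2 and w = (1, 0): the shifted symbol
# p(i xi_1 + mu, i xi_2) = mu + xi_2^2 + i xi_1 stays away from zero for mu > 0.

def p(z1, z2):
    return z1 - z2**2

xi1, xi2 = np.meshgrid(np.linspace(-50, 50, 401), np.linspace(-50, 50, 401))
for mu in (0.5, 1.0, 5.0):
    values = p(1j*xi1 + mu, 1j*xi2)        # direction w = (1, 0)
    print(f"mu = {mu}: min |p(i xi + mu w)| on the grid = {np.abs(values).min()}")
```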
The independence of the exponential weight for certain solution operators of certain partial differential equations has been addressed in [PM11, Theorem 6.1.4] and [Tro13b, Lemma 3.6] for the time-shift invariant case. Hence, naturally, the arguments in [PM11, Tro13b] employ the Fourier-Laplace transformation. In [KPS+13, Theorem 4.6] the same question is discussed for possibly non-linear ordinary delay differential equations. In [Wau14c, Section 4] the independence of the exponential weight has been shown for a specific class of linear non-autonomous partial differential equations. The line of ideas in [Wau14c] together with [Wau15] has eventually led to the treatment developed here. There is also a huge theory of causal differential equations, with a focus on ordinary differential equations in a Banach space setting, developed in [LLDM09] and the references given there. We shall also refer to the references given in [Wau15] for the treatment of causal mappings in other settings.

Solution Theory for Evolutionary Equations

The aim of this chapter is to prove two well-posedness statements for evolutionary equations. In very abstract terms, we will consider operator equations of the type

Bu = f, (3.1)

with B defined in some L²_ν(R; X), X a Hilbert space. So, we address conditions for the continuous invertibility of B. Assuming conditions on the structure of B, we will consider both ordinary and partial differential equations. More precisely, by assuming that B is a sum of products of certain operators, we will provide conditions on the constituents of this composition yielding a solution theory for (3.1). Here, in a solution theory, we gather the three Hadamard requirements, that is, existence and uniqueness of solutions as well as continuous dependence on the data. Furthermore, we want the solution operator S = B^{-1}, once it exists, to be causal. Moreover, being realized in certain weighted L²_ν-spaces, we want S to be fairly independent of ν, that is, if S exists in L²_ν and L²_µ, then S should be a well-defined mapping on L²_ν ∪ L²_µ. The latter fact is properly restated as follows: the realizations S_ν and S_µ of S on L²_ν and L²_µ, respectively, should coincide on L²_ν ∩ L²_µ. Hence, asking for a solution theory of (3.1) amounts to the question of when B^{-1} = S is evolutionary at some ν and causal or, equivalently (cf. Proposition 2.3.3 and Theorem 2.3.4), is standard evolutionary. In consequence, the treatments of ordinary and of partial differential (evolutionary) equations discussed here are similar to one another: in a preparatory step, we will show continuous invertibility of a certain operator in some L²_ν(X)-space. This settles the three Hadamard requirements of existence and uniqueness as well as the continuous dependence on the data. The concluding step will be to apply the results derived in the preparatory step to (standard) evolutionary mappings and to show that the corresponding solution operator is standard evolutionary itself.

Ordinary Differential Equations

solution theory for ordinary differential equations · Theorem 3.1.4

In order to further illustrate the notion of evolutionary mappings and some of the main ideas of the solution theory to be developed for non-autonomous evolutionary equations, we stick to a specific class of evolutionary equations first. The class to be discussed in this section is the one of abstract ordinary differential equations with possibly infinite-dimensional state space.
Further assume the validity of the estimate Then the operator Remark 3.1.2 The strategy of the proof of Theorem 3.1.1 will be based on an explicit computation of B −1 . In fact, we will show that where we set T = −(∂ t,ν M) −1 R as well as R = N 00 − N 01 N −1 11 N 10 and the series being convergent in operator norm. ⋄ In the next statement, we will treat the case Y = {0} in a slightly more general setting. If, in addition, the estimate holds, then B := DM − N is continuously invertible in X, Ordinary Differential Equations Proof By hypothesis both D and M are continuously invertible. Moreover, the esti- which is a consequence of (3.4), see Proposition 2.3.14. From (3.5) it follows that (DM) −1 N c −1 d c −1 m N =: θ < 1. Hence, with the help of the Neumann series we get continuous invertibility of (1 − (DM) −1 N). As a composition of continuously invertible operators, we infer continuous invertibility of B = (DM)(1 − (DM) −1 N). Moreover, we compute In order to prove the estimate asserted in the lemma, we observe (1.4)) and the needed estimate in Lemma 3.1.3 is warranted by hypothesis. One immediately verifies the computation Hence, B is a composition of continuously invertible operators and, thus, B is continuously invertible. Next, we compute B −1 . For this, we employ again Lemma 3.1.3 to getB Hence, we obtain In the series expression for B −1 ∂ t,ν 0 0 1 just derived, the summand for k = 0 reads as The norm of the second summand in (3.7) is bounded above by Next, for k 1, we compute using Thus, with the help of estimate (3.6), we obtain Therefore, for any k ∈ N 1 an estimate for the operator norm of the matrix given in (3.8) reads So, for the expression We conclude this section with the solution theory for linear abstract ordinary differential equations in the context of evolutionary mappings: hold for all t ∈ R and eventually all ν large enough, where Q t is multiplication by 1 (−∞,t) . Then the operator is one-to-one in L 2 ν (X × Y) for all ν satisfying (3.9) and Proof By Corollary 2.3.13, both M −1 and N −1 11 are evolutionary and causal. Thus, by Theorem 2.3.4, (the standard realization of) M −1 and N −1 11 are standard evolutionary. In order to apply Theorem 3.1.1 in the present context, we need to warrant inequality (3.3), that is, we need to show that for eventually all ν large enough. But, note that the left-hand side of the latter inequality remains bounded as ν → ∞ since N is evolutionary. The right-hand side, however, blows up as ν → ∞. Hence, for eventually all ν large enough the inequality corresponding to (3.3), that is, (3.11), is satisfied. Next, by Remark 3.1.2, B −1 can be represented as an (infinite) sum of compositions of standard evolutionary mappings. Observing that the partial sums are standard evolutionary by Proposition 2.3.9 and bounded, we infer together with the convergence in operator norm (see Remark 3.1.2) for every ν large enough that the series converges to a standard evolutionary mapping. The last assertion of the lemma, that is, (3.10), follows from the estimate in Theorem 3.1.1. Remark 3.1.5 Appealing to Theorem 3.1.1, we find the following estimate to hold true for a suitable evolutionary mapping R. In the abstract settings discussed here, the main difference of ordinary differential equations and partial differential equations is the occurrence of another unbounded linear operator apart from the time derivative. 
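Before passing to partial differential equations, here is a finite-dimensional sketch (ours; the matrices and the scaling are arbitrary choices) of the Neumann series mechanism behind Lemma 3.1.3 and Remark 3.1.2: for invertible D, M and ‖(DM)^{-1}N‖ < 1, the inverse of B = DM − N is the sum ∑_k ((DM)^{-1}N)^k (DM)^{-1}.

```python
import numpy as np

# Finite-dimensional sketch of the Neumann series: for invertible D, M and
# ||(DM)^{-1} N|| < 1 one has (DM - N)^{-1} = sum_k ((DM)^{-1} N)^k (DM)^{-1}.

rng = np.random.default_rng(1)
n = 5
D = np.diag(rng.uniform(1.0, 2.0, n))                 # invertible diagonal part
M = np.eye(n) + 0.05*rng.standard_normal((n, n))      # small perturbation of 1
N = 0.05*rng.standard_normal((n, n))

DM_inv = np.linalg.inv(D @ M)
theta = np.linalg.norm(DM_inv @ N, 2)
assert theta < 1, "contraction condition violated for this sample"

B_inv_series, term = np.zeros((n, n)), DM_inv.copy()
for _ in range(200):
    B_inv_series += term
    term = DM_inv @ N @ term
print("theta =", theta)
print("||series - inv(DM - N)|| =",
      np.linalg.norm(B_inv_series - np.linalg.inv(D @ M - N), 2))
```

With this, we return to the additional unbounded spatial operator mentioned above.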
Indeed, in order to cope with many commonly known linear evolutionary equations from mathematical physics, one has to take into account spatial derivatives, as well. So, the general equation Bu = f from (3.1) to be studied in the following admits the more precise form Partial Differential Equations -Preliminaries In this preliminary section, we will introduce some of the assumptions on the operators M, N (the 'material law') and A (the 'unbounded spatial operator') and some of its consequences. Moreover, we will have the occasion to provide some results of a more general nature to be used later on. The operator A (see Hypothesis 3.2.1) is thought of containing the spatial derivatives. In manifold applications A is an (unbounded) skew-selfadjoint operator (in the underlying spatial Hilbert space). For incorporating more involved evolutionary problems, we will, however, relax this condition. The main assumption is roughly rephrased by both the numerical range of A and of its adjoint lying in a right half plane of the complex numbers. Further, anticipating the fact that in applications A contains the spatial derivatives, we will assume a compatibility condition for A with ∂ t,ν . The assumptions on the coefficient M of ∂ t,ν -in comparison to the ODE-case -have to be strengthened in the way that they should boundedly commute with time-differentiation (see Hypothesis 3.2.10), which reflects the fact that if treating multiplication operators with operators depending explicitly on time these operators should be Lipschitz continuous, see also Example 2.1.1. Next, we introduce the hypothesis on A: Hypothesis 3.2.1 (on the unbounded spatial operator) Let X be a Hilbert space, ν > 0. For the proof of Proposition 3.2.2 some preparations are in order. (b) Similar to the convergence result in (3.13), we observe the following for all φ ∈ L 2 ν (R; X). In fact, from (3.14) we read off that Hence, in view of (a), the left-hand side converges strongly to 0 as ε → 0, thus, so does the right-hand side. Proof For all z ∈ ∂B(r, r), we compute where the series converges uniformly in z. By the functional calculus induced by the unitary equivalence stated in Corollary 1.1.9 for ∂ −1 t,ν (see also Theorem 1.1.11) we deduce that with convergence in operator norm, yielding the assertion. Remark 3.2.5 Note that (1 + ε∂ t,ν ) −1 is translation-invariant and causal. In fact, this is a straightforward consequence of Theorem 1.2.7. For the proof of Theorem 1.2.7, we used the Paley-Wiener theorem, which we stated without proof. For having a selfcontained proof of translation-invariance and causality of (1 + ε∂ t,ν ) −1 in this exposition, we argue as follows. Indeed, the claim is a consequence of the explicit formula for ∂ −1 t,ν in Lemma 1.1 (recall ν > 0) and the representation in Lemma 3.2.4: The latter representation immediately yields translation-invariance (since ∂ −1 t,ν is translation-invariant) and that (1 + ε∂ t,ν ) −1 leaves functions supported on [0, ∞) invariant as well (since so does ∂ −1 t,ν by Lemma 1.1). Hence, (1 + ε∂ t,ν ) −1 is causal, by Remark 1.2.2. ⋄ Remark 3.2.6 (a) Aiming for a proof of Proposition 3.2.2, we will employ Lemma 3.2.4. For this observe the following elementary fact. In a Hilbert space X, let A be a closed linear operator and assume that there exists a sequence (T n ) n of bounded linear operators in X being strongly convergent to some T ∈ L(X). If, for all n ∈ N, T n A ⊆ AT n , then TA ⊆ AT. Indeed, let φ ∈ dom(A). 
Then, by hypothesis, T n φ ∈ dom(A), n ∈ N, and T n φ → Tφ as well as AT n φ = T n Aφ → TAφ as n → ∞. By the closedness of A, we infer Tφ ∈ dom(A) and ATφ = TAφ, which is the claim. (b) Another observation being used in the following is in order. Assume A to be only closable in X, T, T ′ ∈ L(X). Then, for all n ∈ N, we have Hence, by the closedness of A and continuity of T and T ′ , we infer Tφ ∈ dom(A) and The latter inclusion, however, is a straightforward consequence of for all λ ∈ R, where we used Hypothesis 3.2.1. Proof Let ε > 0, f ∈ dom(A). By Proposition 3.2.2, we have (1 we conclude that dom(∂ t,ν ) ∩ dom(A) is dense in dom(A) endowed with the graph norm of A. The density of dom(A) in L 2 ν (R; X) yields the second assertion. With the techniques just employed, we can also show that A is actually translationinvariant: To begin with, we represent the time translation as a function of the (inverse) time derivative. Proof (a) Let ε > 0. The convergence of the series can be seen by Fourier-Laplace transformation. Indeed, the series converges uniformly for all z ∈ ∂B(r, r). (b) For ε > 0 using the formula for the exponential function and Neumann's series, we get for z ∈ ∂B(r, r) From (dominated) pointwise convergence of t h,ε to t h : z → e h/z on ∂B(r, r), we get, using Lebegue's dominated convergence theorem, that the multiplication operators associated with t h,ε converge in the strong operator topology of L(L 2 (R; X)) to the respective multiplication operator associated with t h as ε → 0. Via Fourier-Laplace transformation, (the multiplication operator associated with) t h is unitarily equivalent to τ h . Thus, the assertion follows. (c) This is a combination of the absolute convergence asserted in part (a) and the strong operator convergence of part (b). Proof (of Proposition 3.2.8) By Lemma 3.2.9(c), the operator τ h of time translation can be approximated by a sequence of polynomials (p n ) n applied to ∂ −1 t,ν with respect to the strong operator topology. An application of Remark 3.2.6 with T n = p n (∂ −1 t,ν ) and A = A yields the assertion. We conclude this preliminary section, with the hypotheses on M and N in (3.12), and a small consequence thereof: Hypothesis 3.2.10 (on the material law) Let X Hilbert space, M, N ∈ L(L 2 ν (R; X)). Assume that there exists M ′ ∈ L(L 2 ν (R; X)) such that Moreover, in L 2 ν (R; X), that is, the closure of the commutator converges to 0 in the strong operator topology τ s . By Remark 3.2.3 (a) and (b) together with the formula just derived, we get the desired convergence result in (a). Partial Differential Equations -Invertibility Computation of [(1 + ε∂ t,ν ) −1 , B] · for u ∈ dom(B) we have (1 + ε∂ t,ν ) −1 u ∈ dom(B) · the adjoint of B · Theorem 3.3.2 Next, we come to the announced result concerning the continuous invertibility of the operator sum B = ∂ t,ν M + N + A, that is, the well-posedness of the abstract partial differential equation as in (3.12). In the next section, strengthening the positive definiteness requirement stated in the forthcoming hypothesis, we will address both causality and evolutionarity. The continuous invertibility result will be formulated in the following situation. 3.2.10 and 3.2.1, that is, M, N Furthermore, assume there exists c > 0 such that the positivity conditions and Proof First of all note that the closability follows from Proposition 2.3.14 by inequality (3.16) and Corollary 3.2.7 in order that dom(∂ t,ν ) ∩ dom(A) ⊆ dom(B) is dense in L 2 ν (R; X). 
We recall Lemma 3.2.11 for the formulas of the commutators of (1 + ε∂ t,ν ) −1 with the operators ∂ t,ν M and N, and Proposition 3.2.2 for the respective one with A. For the proof of the convergence result, it suffices to recall equation (3.18) and Lemma 3.2.11 as well as that (1 + ε∂ t,ν ) τ s → 1 as ε → 0, by Remark 3.2.3. Remark 3.3.6 A more detailed look at the computations in the latter proof reveals that we have the more precise estimate ⋄ We recall that we want to apply Proposition 2.3.14 for proving Theorem 3.3.2. For this, we need to compute the adjoint of B. We note that dom , which yields that B * is a well-defined linear operator. Proof With the help of Lemma 3.3.3, for u ∈ dom(B) we have Since dom(∂ t,ν ) ∩ dom(A) is dense in dom(A) with respect to the graph norm of A (see Corollary 3.2.7), we infer (3.21). We come to the proof of Theorem 3.3.2: Proof (of Theorem 3.3.2) We apply Proposition 2.3.14 to the operator B and the space L 2 ν (R; X) as underlying Hilbert space. For this, we note that (2.10) is guaranteed by (3.16). Next, since C from Corollary 3.3.8 satisfies the analogous positivity estimate (3.17), by the closability of C, the inequality is valid for C replaced by C. By Corollary 3.3.8, however, C = B * , which yields (2.11). Thus, the assertion indeed follows from Proposition 2.3.14. Partial Differential Equations -Causality and the Independence of ν a solution theory of partial differential equations · Hypothesis 3.4.4 · Theorem 3.4.6 This section is devoted to a proof of an adapted version of Theorem 3.3.2 including causality. Moreover, we will prove the independence of the solution operator of the parameter ν in the solution theory. So, as in the case of ordinary differential equations, the aim is to show that the solution operator S = B −1 associated with (3.12) is standard evolutionary. Beforehand, we will state a sufficient condition warranting both the inequalities (3.17) and (3.16). 22) Further, assume for all φ ∈ D and some c > 0. △ . Thus, we are left with showing that (3.23) carries over to all φ ∈ dom(∂ t,ν M). This, however, follows from ∂ t,ν M = ∂ t,ν M| D , which we show in the next proposition. Proposition 3.4.3 Assume Hypothesis 3.2.10. Let D ⊆ dom(∂ t,ν ) be a core for ∂ t,ν . Then Proof Endowed with the respective graph norms, we observe that the canonical embedding dom(∂ t,ν ) ֒→ dom(∂ t,ν M) is continuous by Hypothesis 3.2.10. Hence, it suffices to show that dom(∂ t,ν ) is a core for ∂ t,ν M, that is, the mentioned embedding is dense. So, take u ∈ dom(∂ t,ν M). Then, we compute for ε > 0 Hence, letting ε → 0 and recalling Lemma 3.2.11, we read off that One of the reasons of having introduced Hypothesis 3.4.1 is as follows. Theorem 3.3.2 has a natural analogue in the context of evolutionary mappings with a variant of Hypothesis 3.4.1 as the set of assumptions as we shall see next. This version of Hypothesis 3.4.1 reads as follows. Hypothesis 3.4.4 Let X Hilbert space, ν > 0, M, M ′ , N ∈ L sev,ν (X), A ∈ C ev,ν (X), c > 0. Assume for all µ ν: as well as where Q t denotes multiplication by 1 (−∞,t) and D ⊆ η ν dom(∂ t,η ) is a core for ∂ t,µ . 1/c. In particular, the solution operator does not depend on the exponential weight, that is, for all µ 1 µ 2 ν the operators S µ 1 and S µ 2 coincide on the intersection of the respective domains. With the results of Chapter 2 in mind, apart from the norm estimate, the assertion in Theorem 3.4.6 may be expressed as S ∈ L sev,ν (X). The proof of Theorem 3.4.6 relies on the Theorems 2.3.12 and 3.3.2. 
We need two preparatory results. Proposition 3.4.7 Let X Hilbert space, ν > 0, denote Q t as multiplication by 1 (−∞,t) , A ∈ C ev,ν (X). Assume for all µ ν Then Proof By Proposition 3.2.8 in combination with Remark 3.2.6(b), we get for all µ ν where we recall τ t f = f (· + t). Hence, for all t ∈ R, φ ∈ dom(A µ ), we have τ t φ ∈ dom(A µ ) and Proof For all η ν, dom(A) is dense in L 2 η (R; X) by hypothesis. Next, the operator (1 + ε∂ t,ν ) −1 is translation-invariant and causal by Remark 3.2.5. So, we infer that (1 + ε∂ t,ν ) −1 leaves L 2 η (R; X) invariant for all η ν, by Remark 1.2.4. By Remark 3.2.6(b), we have ∂ −1 t,η A η ⊆ A η ∂ −1 t,η . Thus, by Proposition 3.2.2, we get for η ν and ε > 0 Hence, We read off that if φ ∈ dom(A), then for ε > 0, we get Thus, Using Remark 3.2.3 and the density of dom(A), we realize that the left-hand side is dense in L 2 η (X), η ν, hence, so is the right-hand side. Proof (of Theorem 3.4.6) We will apply Theorem 2.3.12 to B. For this, we establish the positivity estimate required in Theorem 2.3.12 first. Let µ ν. By Proposition 3.4.3, D is a core for ∂ t,µ M µ . Hence, for all t ∈ R, we have Moreover, by Proposition 3.4.7, we get Thus, for all φ ∈ dom(B), t ∈ R. Therefore, the estimate required in Theorem 2.3.12 is shown. Next, we show that B ∈ C ev,ν (X). For this, we realize that B is densely defined by Lemma 3.4.8, so only closability is the issue here. But, if we let t → ∞ in (3.24), we get B is closable by Proposition 2.3.14. To conclude, we are left with showing that ran(B) is dense in L 2 µ (X) for all µ ν. This, however, follows from Proposition 3.4.2 and Theorem 3.3.2. Indeed, for µ ν, the estimates and Re φ, Aφ L 2 µ 0 (φ ∈ dom(A)) follow either from Hypothesis 3.4.4 or Proposition 3.4.7 by letting t → ∞ in order that Q t → 1 strongly. Comments As it has been mentioned already, for ordinary differential equations, there is a wider class of problems, that may be studied in this L 2 -type setting: Delay differential equations covering a class of functional differential equations, equations of neutral type, integro-differential equations or differential-algebraic equations. But note that, M and N in Theorem 3.1.4 are linear operators in space-time. Assuming time translationinvariance for M and N these are operators of convolution type. Hence, the set of equations treated in Theorem 3.1.4 may already be summarized by "integro-differentialalgebraic". Theorem 3.4.6 has its roots in [Pic09, Solution Theory]. In [Pic09], the problem of solving for some bounded and analytic function M of ∂ −1 t,ν with values in L(X) and a skewselfadjoint operator A in X has been addressed, where A is the lift of A to L 2 ν (R; X), as in Example 2.3.11(b). As noted in [Wau14c, Section 3.1], Theorem 3.4.6 covers the class discussed in [Pic09] by putting M = M(∂ −1 t,ν ) and N = 0. There are plenty of equations already covered by this class: A treatment of electro-seismic waves is included in [MP11] (one needs to involve fractional time derivatives in M(∂ −1 t,ν )), a general class of fractional partial differential equations [PTW15a] (e.g. fractional Fokker-Planck equations, or super and subdiffusion problems ([Wau13, Theorem 4.5, Remark 4.6])). For more examples, we refer to the list given in the introduction. We further remark here that the assumptions on A in Theorem 3.4.6 allow for spatial operators with certain differential equations as boundary conditions. A prominent example are boundary conditions of impedance type. 
We refer to [Pic12] and [PSTW16]: In both these references the assumptions on A being asked for in Hypothesis 3.4.4 have been established for impedance type boundary conditions for the wave equation and for boundary conditions of Leontovitch type in the area of Maxwell's equation, respectively. A structural point of view treating possible boundary condition in a slightly more abstract setting can be found in [Tro14a]. We will treat some (standard) problems of mathematical physics in the Sections 5.3, 5.4 and 5.5. The method of proof of the [Pic09, Solution Theory] relied on the spectral representation of ∂ t,ν . Later on, still using the explicit spectral theorem for ∂ t,ν , in [Pic12], the method has been generalized to include operators A that commute with the inverse of the time derivative, as in Hypothesis 3.2.1. The latter comes in handy, when discussing the aforementioned problems with impedance type boundary conditions. For nonautonomous problems, techniques as the Fourier-Laplace transformation have limited applicability. Hence, the regularization technique presented in this exposition might be the method of choice. This technique was used in [PTWW13]. As shown in [Wau14c], this technique applies to a broader class. The strategy of proof of Theorem 3.4.6 developed here has its roots in the methods from [PTWW13,Wau14c]. We note here that problems with changing type, that is, problems that are hyperbolic, parabolic and elliptic on different space-time regions may be addressed as well, see [PTWW13, pp 765], [PTW14c, Remark 6.2], or [Wau16b]. Another possible way to deduce a solution theory for problems of the type (3.12) is the usage of the theory of maximal monotone relations. In this line of reasoning one treats (3.12) as a sum of the two maximal monotone relations ∂ t,ν M and A. This strategy has been successfully applied to time translation-invariant M and non-linear A in [Tro11, Tro12, Tro13a, Tro15b] eventually yielding a solution theory for partial differential inclusions. For time dependent operators M, an adapted form can be found in [TW14b]. For a possible extension to a Banach space setting, we refer to [Weh15]. Going back to linear problems, one may address the minimal ν ∈ R the solution operator S in Theorem 3.4.6 is evolutionary at. In fact, if this ν was negative, it is possible to address the question of exponential stability in this framework as well, see [Tro13b,Tro15a]. Some remarks on the comparison to other strategies of finding solutions to this type of partial differential equations are in order. The overall strategy may be thought of as a particular instance of discussing sums of unbounded operators similar to the seminal paper [dPG75]. We refer to [dPG75, Section 5], where a hyperbolic type case is treated. The strategies developed in [dPG75] are thorough and deep and do also cover the Banach space case. Note that, however, restricting ourselves to a Hilbert space setting and employing the particular role of the time derivative yields a particularly accessible way of discussing continuous invertibility of evolutionary partial differential equations. Indeed, the method solely relies on emphasizing the special role of the time derivative and the well-known observation that strict positive definiteness eventually leads to continuous invertibility as demonstrated in Proposition 2.3.14. 
Putting M = 1 and N = 0 in (3.12) with A being quasi-m-accretive, we realize that the corresponding equation (A is given as in Example 2.3.11) (∂ t,ν + A)u = f may well be treatable with C 0 -semi-group theory. We refer to [EN99, ABHN11, Paz83] for a thorough treatment of semi-groups with regards to the solution theory of differential equations. Being genuinely developed in a Banach space setting, C 0 -semigroups may give more insight on particular properties of the corresponding solution, that is, for instance, boundedness, p-integrability, positivity, or stability. We refer to [EN99, ABHN11, Paz83] again for an account on that. Leading to a continuous-in-timesolution, semi-group theory may also be viewed as a regularity theory of evolutionary equations as in (3.12). The idea of introducing semi-groups or its second order analogue cosine families as a solution concept for partial differential equations has its roots in the finite-dimensional case: Given A ∈ R d×d the fundamental solution to ∂ t u = −Au or ∂ 2 t u = −Au may respectively be written as t → e −tA or t → cos(−tA). Keeping this idea in mind, nonautonomous equations are solved by finding a corresponding solution family associated to ∂ t u = −A(t)u, say. Solution families generalize the concept of the fundamental matrix for non-autonomous ordinary differential equations to the infinite-dimensional setting [Kat53,Paz83,Tan79,Soh78]. In particular, when A(t) is an unbounded operator for every t, one needs to cope with varying domains of A(t). Again, we view the concept of evolution families as a certain regularity theory. As a particular example, we mention the non-autonomous heat equation, formally given by As it will be demonstrated in Section 5.4, we reformulate this equation into a problem of first order. This reformulation enables us to apply the solution theory given in Theorem 3.4.6 without the need of coping with subtleties of possibly changing domains. We refer to [AMP15] for a deep and thorough treatment of these kind of problems in a Banach space type setting. Semi-groups, cosine families and evolution families basically provide solutions for initial value problems. Non-homogeneous problems are then solved using some sort of variation of constants type formulas. The solution concept developed here treats the non-homogeneous problem first. We sketch an adapted treatment of initial value problems as follows. We refer to [PM11, Section 6.2.5] for more details. A corresponding theory for initial value problems can be obtained by formally putting where δ 0 denotes the Dirac-δ-distribution at 0 and u 0 ∈ dom(A) ⊆ X with A being the lift of a quasi-m-accretive A to L 2 ν (X), ν > 0. Recalling that ∂ t,ν 1 [0,∞) = δ 0 , we obtain from (3.25) the equation (3.26) Hence, solving equation (3.26) for v = u − 1 [0,∞) u 0 , we obtain a solution to (3.25) by putting u = v + 1 [0,∞) u 0 . Indeed, causality of (∂ t,ν + A) −1 yields that v is supported on [0, ∞) only. If, in addition, v is weakly differentiable, then v is continuous on R, by Sobolev's embedding theorem (see [KPS + 13, Lemma 5.2]). Hence, Thus, u attains its initial value and satisfies (3.25) on (0, ∞). For a more detailed treatment of initial value problems, we also refer to [KPS + 13, Theorem 5.4] and [PTW14a, Example 2.18] for the ordinary differential equations case. We summarize that semi-groups, cosine families and evolution families are the fundamental solutions or abstract Green's functions to certain (partial) differential equations. 
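As a concrete illustration of the reformulation of initial value problems sketched above, consider the scalar case A = a > 0 with vanishing source term f = 0. The snippet below is a toy computation added for illustration only (the implicit Euler discretization and all parameter values are our own choices): it solves the shifted equation for v = u − 1_[0,∞)u_0 with vanishing past, and recovers the classical solution of u′ + au = 0, u(0) = u_0.

```python
import numpy as np

# scalar toy model: A = a > 0, no source term, initial value u0
a, u0 = 2.0, 1.5
t = np.linspace(0.0, 3.0, 3001)
dt = t[1] - t[0]

# solve (d/dt + a) v = -1_[0,inf) a u0 for v with vanishing past;
# causality forces v = 0 on (-inf, 0), hence v(0) = 0
v = np.zeros_like(t)
for k in range(len(t) - 1):                 # implicit Euler step
    v[k + 1] = (v[k] - dt * a * u0) / (1 + dt * a)

u = v + u0                                  # u = v + 1_[0,inf) u0 on [0, inf)
u_exact = u0 * np.exp(-a * t)               # solution of u' + a u = 0, u(0) = u0

print("u(0+) =", u[0], " (the initial value is attained)")
print("max error against u0 * exp(-a t):", np.abs(u - u_exact).max())
```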
The solution theory developed here complements these treatments to partial differential equations as the existence of (a sufficiently regular) fundamental solution is a priori not needed. However, in the present approach -roughly speaking -L 2 righthand sides are mapped to L 2 -solutions only. The strategy of solving partial differential equations by means of semi-groups, cosine families or evolution families leads to more regular solutions. So, take a partial differential equation, where it is possible to apply both the approach discussed in the previous section as well as one of the three approaches semi-groups, cosine or evolution families. Then either of the latter three solution strategy may be viewed as a regularity theory for the approach advanced in the present exposition. Convergence of Evolutionary Mappings and an Application to Ordinary Differential Equations In this chapter we will address convergence issues of evolutionary mappings. We have occasion to discuss the norm, the strong and the weak operator topology in the light of evolutionary mappings. As a first application of these concepts, we will consider abstract ordinary differential equations as discussed in Section 3.1. Topologies on Evolutionary Mappings norm topology · strong operator topology · weak operator topology · characterization of the topologies on bounded sets · compactness result for the weak operator topology · metrizability for bounded sets under the weak operator topology · continuity of multiplication and inversion under the norm and strong operator topologies We start out with the definition of the topologies we are interested in on standard evolutionary mappings at some ν, see Definition 2.1.6. Definition 4.1.1 Let X, Y be Hilbert spaces, ν ∈ R. The norm topology τ n,ν on L sev,ν (X, Y) is defined as the initial topology induced by for all µ ν. The strong operator topology τ s,ν (weak operator topology τ w,ν ) on the space L sev,ν (X, Y) is defined as the initial topology induced by for all µ ν. Denote L n sev,ν (X, Y) := L sev,ν (X, Y), τ n,ν , L s sev,ν (X, Y) := L sev,ν (X, Y), τ s,ν and L w sev,ν (X, Y) := L sev,ν (X, Y), τ w,ν the corresponding topological spaces. Moreover, the norm, strong operator, and weak operator topology on L sev (X, Y) are defined as the final topologies induced by for all µ ∈ R, respectively. Denote the respective topological spaces by L n sev (X, Y), L s sev (X, Y) and L w sev (X, Y). △ Note that as an immediate consequence of the latter definition, for µ ν, we have the continuous (canonical) embeddings as well as the continuous embeddings In particular, the embeddings are continuous. is not defined as the linear or locally convex final topology on L sev (X, Y). More precisely, for t ∈ {n, s, w} the topology τ t on L t sev (X, Y) is given by is the canonical embedding, ν ∈ R. In particular, we do not view any of the spaces L n sev (X, Y), L s sev (X, Y) or L w sev (X, Y) as topological vector spaces. (b) Unless specified otherwise, for product spaces of the form L t sev (X, Y) × L t sev (X, Y) we will use the final topology induced by instead of the product topology, t ∈ {n, s, w} (see also Theorem 4.1.13 below). In particular, the mapping Then, by standard density arguments, we find (one might also consult Proposition 4.1.4 below): (a) If (S ι ) ι converges in (L(X, Y), · L(X,Y) ), then (S (ν) ι ) ι converges in L n sev,ν (X, Y). 
⋄ Following the general philosophy of this exposition to formulate the results in νindependent type as much as possible the following small observation is in order. . Proof For the proof of (a), note that S ∈ L(X, Y) implies the finiteness of the supremum. The equality follows from the density of D. On the other hand, the supremum being finite together with the linearity of S imply the Lipschitz continuity of S. Hence, S is Lipschitz continuous as well, yielding S ∈ L(X, Y). To prove (b) and (c), note that both the mappings j s : (B, τ s ) → (B, τ D ), x → x and j w : (B, τ w ) → (B, τ E,D ), x → x are continuous. Hence, it remains to prove continuity of j −1 s and j −1 w . The arguments will conceptually be the same for both these inverses. We will only show continuity of j −1 w . Denote κ := sup S∈B S , and let (S ι ) ι be a net in (B, τ E,D ) convergent to some T ∈ B. There exists ι 0 such that for all ι ι 0 we have With the latter proposition at hand, we can formulate a description of the strong and weak operator topology on bounded subsets of L sev . We recall from Definition 2.3.15 that B ⊆ L sev is bounded, if there is ν ∈ R with B ⊆ L sev,ν and sup µ ν sup S∈B S µ < ∞. For a subset B ⊆ L sev , we denote the relative topology of L n sev , L s sev and L w sev by B n , B s and B w , respectively. 71 Theorem 4.1.5 Let X, Y Hilbert spaces, B ⊆ L sev (X, Y) bounded. Recall the space D(X) := ν∈R L 2 ν (R; X). Then the following holds. A net (S ι ) ι in B s is convergent to some T ∈ B s if and only if there exists ν ∈ R such that for all φ ∈ D(X) and µ ν , that is, in the strong operator topology of L(L 2 µ (X), L 2 µ (Y)). But, the latter convergence implies that S ι ι → T in (L(L 2 µ (X), L 2 µ (Y)), τ D(X) ), where we adopted the notation from Proposition 4.1.4(b) for the topology τ D(X) . Hence, (S ι ) ι converges to T as in (4.3). coincides with τ f [D] induced by Let τ 0 be the initial topology on B induced by and τ µ be the initial topology on B induced by Then τ 0 = τ µ . Another application of the almost trivial observation in Lemma 4.1.8 can be found in the proof of the next lemma, which will lead us to the proof of a compactness property for the weak operator topology of standard evolutionary mappings. Lemma 4.1.10 Let X, Y Hilbert spaces, ν ∈ R. Define the topological space endowed with the product topology. Then R is compact. Proof We use Tikhonov's Theorem to deduce that is compact. Moreover, it is easy to see that is closed. Hence, R µ,ses is compact, and, consequently, so is R = ∏ µ ν R µ,ses by Tikhonov's Theorem again. Theorem 4.1.11 Let X, Y Hilbert spaces, B ⊆ L sev (X, Y) bounded. Then B w ⊆ L w sev (X, Y) is relatively compact. In order that B w is compact, we show that B w can be identified with a closed subspace of R. Recalling the Riesz-Frechet representation theorem, we observe that, for Hilbert spaces W, Z, any contraction T ∈ L(W, Z) is a sesquilinear mapping s T on Z × W with bound 1 and vice versa. The isomorphism is induced by is a well-defined one-to-one mapping. By Lemma 4.1.9, B w carries the topology of L w sev,ν . Hence, by definition of the topology on L w sev,ν , the mapping j : B w → R is continuous. For proving that j is a homeomorphism and that j[B w ] ⊆ R is closed, we are left with showing that for any closed A ⊆ B w the set j[A] ⊆ R is closed as well. For this, let A ⊆ B w closed, and (T ι ) ι be a net in A such that (j(T ι )) ι converges in R to some (s µ ) µ ν . 
We have to show that (T ι ) ι converges in B w to some S ∈ A with j(S) = lim ι j(T ι ). Employing the Riesz-Frechet theorem again, we infer -by the definition of R -that there exists S µ ∈ L(L 2 µ (X), L 2 µ (Y)), S µ 1, with s µ = s S µ for all µ ν. Moreover, T µ ι ι → S µ in the weak operator topology. Hence, if we show that there exists S ∈ B w with S µ = S µ for all µ ν, we infer lim ι T ι = S ∈ A, by the closedness of A. In order that S µ = S µ for all µ ν for some S ∈ B, it suffices to show that S µ 1 = S µ 2 on L 2 µ 1 (X) ∩ L 2 µ 2 (X) for all µ 1 , µ 2 ν. So, let f ∈ L 2 µ 1 (X) ∩ L 2 µ 2 (X). Since both S µ 1 f , S µ 2 f are measurable there exists a nullset N 0 such that for J := R \ N 0 we have Hence, by the fundamental lemma of the calculus of variations and the countability of D 0 , we infer that there exists a nullset N 1 ⊇ N 0 such that for all t ∈ R \ N 1 and y ∈ D 0 , we have S µ 1 f (t), y = S µ 2 f (t), y . Thus, by the density of D 0 ⊆ Y 0 , we conclude S µ 1 f (t) = S µ 2 f (t) for all t ∈ R \ N 1 . 75 Corollary 4.1.12 Let X, Y be separable Hilbert spaces, B ⊆ L sev (X, Y) bounded. Then B w is metrizable. Proof Without loss of generality B = {S ∈ L sev,ν (X, Y); sup µ ν S µ 1} for some ν ∈ R. Since X and Y are separable, then so are L 2 ν (X) and L 2 ν (Y). In particular, the (standard causal) domains D(X) = µ∈R L 2 µ (X) and D(Y) considered as respective (metric) subspaces of L 2 ν (X) and L 2 ν (Y) are separable, as well. So, take countable dense sets D ⊆ D(X) and E ⊆ D(Y). Then the mapping is continuous and one-to-one. For injectivity of j use that E and D are respectively dense in L 2 ν (Y) and L 2 ν (X). As B w is compact, we infer that j is a homeomorphism onto its image. Hence, as ∏ ψ∈E,φ∈D B C (0, φ ψ ) is metrizable by the countability of E × D, the metrizability of B w follows. In the concluding parts of this section, we address continuity of multiplication and inversion in L sev . We recall that both the latter operations are not continuous under the weak operator topology of L(X, Y) for infinite-dimensional Hilbert spaces X and Y. Thus, we cannot expect continuity of multiplication and inversion in L w sev , leading to more subtle statements, when the weak topology is involved. The results on continuity of multiplication read as follows, for which we recall from Definition 2.3.15 that a set We also recall Remark 4.1.2. Theorem 4.1.13 Let X, Y, Z Hilbert spaces. Consider the multiplication (S, T) → TS as a mapping in the following underlying topological spaces: Then the multiplication in (a) is continuous, and, on bounded subsets, multiplication in (b) is continuous, whereas in (c) multiplication is only separately continuous, that is, for every Proof Taking the continuous inclusions (4.1) into account, the results are straightforward consequences of the corresponding statements, when L sev is replaced by L(X, Y). Remark 4.1.14 Let ν ∈ R. We note that, if, in Theorem 4.1.13, one replaces all L sev (X, Y) by L sev,ν (X, Y) the respective assertions hold true as well. In fact, it is even a more direct consequence of the analogous statements for the operator topologies on L(W, Z) for Hilbert spaces W, Z. ⋄ We conclude this section with addressing the continuity of computing the inverse of evolutionary mappings. Beforehand, for a Hilbert space X, κ 0, we introduce the sets We recall our convention to denote by B n the topological subspace of L n sev for a subset B ⊆ L sev and similarly for B s . 
Proof For the proof of (a), we let (S ι ) ι be a convergent net in GL n sev,ν ; denote T := lim ι S ι . Hence, there exists η ν such that S ι , T, In order to show the corresponding statement for (b), we similarly let (S ι ) ι be convergent in GL s κ,sev,ν ; with T the corresponding limit. As in (a), we find η ν, such that S −1 ι (T − S ι )T −1 ∈ L sev,η (X) for all ι. Moreover, by definition of GL κ,sev,ν (X), we have sup µ η,ι (S −1 ι ) µ κ. Hence, (S −1 ι ) µ (T µ − S µ ι )(T −1 ) µ converges strongly to 0 in (L(L 2 µ (X)), τ s ), which yields the assertion. In this section and the next section, we are aiming for continuity results of ordinary differential equations on the coefficients. More precisely, we will focus on equations of the form The Norm and the Strong Operator Topologies and Ordinary Differential Equations where M, N 00 , N 01 , N 10 and N 11 are standard evolutionary mappings acting in suitable spaces. Assuming the well-posedness conditions as in Theorem 3.1.4, we address the question of whether assigning a solution operator to (4.8) is continuous. We recall the conditions that lead to a solution theory for (4.8): Let X, Y be Hilbert spaces, ν, c > 0. Then M ∈ L sev (X) needed to satisfy The condition for (4.10) We also recall that Q t is multiplication by 1 (−∞,t) and D(X) = ν∈R L 2 ν (X) and similarly for D(Y). For Hilbert spaces X and Y, c > 0, we define In the notation SO c,ν (X, Y) the letter 'S' is a reminder of 'solution theory', the 'O' stands for 'ordinary differential equations'. Being a subset of L sev (X × X × Y), we may endow SO c,ν (X, Y) with the norm, the strong operator or the weak operator topology. It is easy to see that SO c,ν (X, Y) is a closed subset of each L n sev (X × X × Y), L s sev (X × X × Y) and L w sev (X × X × Y). With the notation just introduced and recalling∂ t := ν>0 ∂ t,ν , we may rephrase Theorem 3.1.4 next. For this, we adopt the custom that evolutionary and causal mappings are, in fact, standard evolutionary (and vice versa) in the sense of Theorem 2.3.4. is well-defined. So, the aim of this section is to establish the following two theorems. with T = −(∂ t M) −1 R as well as R = N 00 − N 01 N −1 11 N 10 holds true. Recall that the sum converges in L n sev (X × Y). Indeed, this follows from Theorem 3.1.4 (3.10). Hence, before coming to the proofs of the Theorems 4.2.2 and 4.2.3, we need to establish a statement shedding light on infinite sums in view of the different topologies introduced. We need the following prerequisite of general nature. Lemma 4.2.4 Let Then ∑ ∞ k=1 w k ∈ Z and Proof First of all note that (1/α k )z k,ι ∈ B Z implies that (1/α k )w k ∈ B Z as the unit ball B Z = {z ∈ Z; z 1} is closed in Z. Thus, ∑ ∞ k=1 w k ∈ Z since Z is a Banach space. Let ε > 0. Then there exists K ∈ N such that ∑ ∞ k=K+1 α k ε. We find ι 0 ∈ I such that for all k ∈ {1, . . . , K} and ι ι 0 we get Therefore, for all ι ι 0 for some ν ∈ R. (a) If for all k ∈ N we have S k,ι converges in norm and Proof For the proof of (a), we observe that for all µ ν and k ∈ N, we have S µ k,ι α k ; hence T µ k α k as the closed (norm) ball of L(W, Z) for Hilbert spaces W, Z is also closed with respect to the weak operator topology. Thus, for all µ ν, the sum ∑ ∞ k=1 T µ k is finite, yielding convergence of ∑ ∞ k=1 T k in L n sev,ν . For the proof of interchanging the limits, let µ ν, φ ∈ L 2 µ (X), ψ ∈ L 2 µ (Y) and apply Lemma 4.2.4 to Z = C, z k,ι = ψ, S µ k,ι φ L 2 µ (Y) and w k = ψ, T µ k φ L 2 µ (Y) for k ∈ N, ι ∈ I. 
In order to prove (b), let µ ν, φ ∈ L 2 µ (X) and apply Lemma 4.2.4 to Z = L 2 µ (Y), The proof of (c) is an application of Lemma 4.2. Proof (of Theorems Hence, with the help of Remark 4.1.14 again and Proposition 4.2.6, we infer the assertion taking the representation of the solution operator (4.12) into account. The Weak Operator Topology and Ordinary Differential Equations 'failure' of continuity · the set SO c,ν (X) · estimates for limits in the weak operator topology · stability of positive definiteness under weak convergence of inverses · existence of convergent subsequences of solution operators · Theorem 4.3.1 · Theorem 4.3.7 Due to a lack of a version of Theorem 4.1.13 ((a) and (b)) and of Theorem 4.1.15 for the weak operator topology, the result corresponding to the Theorems 4.2.2 and 4.2.3 is more involved. In consequence, the limiting equation is more involved. In applications, this is reflected in so-called 'memory effects' occurring after 'homogenization' of ordinary differential equations. For relations to homogenization problems and a thorough discussion for the occurrence of memory effects and the relationship to Youngmeasures, we refer the reader to the discussion in [Wau14a] and to the comments at the end of this chapter. We shall describe the question to be answered in the following in a bit more detail: Let ((M ι , N ι )) ι be a net in SO c,ν (X, Y). Are there conditions on (some type of convergence of) ((M ι , N ι )) ι in terms of the weak operator topology such that (sol(M ι , N ι )) ι converges in the weak operator topology to sol(M,Ñ) for suitableM,Ñ? The first part of this section will be concerned with the first part of the question, the second part gives a description ofM,Ñ, or, expressed differently, the second part provides the limit equation for a special case. 10 for all ι. Then (sol(M ι , N ι )) ι converges in L w sev (X × Y) with sol being defined in (4.11). Proof We use the representation of sol(M ι , N ι ) as given in (4.11), see also (4.12). Multiplication is separately continuous in the weak operator topology (Theorem 4.1.13(c)). Next, interchanging the limits and summation is possible by Proposition 4.2.6, which settles the assertion. Next, we account for the computation of the limiting equation, that is, we seek to compute M ∞ , N ∞ such that lim ι sol(M ι , N ι ) = sol(M ∞ , N ∞ ). This has been addressed for the case of (convergent) sequences already in [Wau14a]. Note that without any further assumptions on M ι and N ι such as time translation-invariance, the representation of the limit equation given in [Wau14a] does not fit into the representation sol(M ∞ , N ∞ ) for some bounded evolutionary (M ∞ , N ∞ ). As we shall see later on, this is not true for the case Y = {0}. Hence, we focus on the case Y = {0} in the following. For studying the respective solution operator, we introduce as well as By equation (4.12), we get for all (M, N) ∈ SO c,ν (X): (4.14) Hence, the corresponding version of the statement in Theorem 4.3.1 for Y = {0} reads: Corollary 4.3.2 Let X be a Hilbert space, c, ν > 0, ((M ι , N ι )) ι a bounded net in SO c,ν (X). Assume that there exists η ∈ R such that converge in L w sev,η . Then (sol(M ι , N ι )) ι converges in L w sev (X) with sol being defined in (4.13). For computing the limiting equation in the case just discussed in Corollary 4.3.2, we use the representation of the solution operator sol(M, N) in (4.14), the estimates given in Remark 3.1.5 and a Neumann series argument. 
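Before carrying out this computation, it is instructive to see the mechanism behind the memory effects mentioned above in the simplest possible situation, namely for multiplication operators with rapidly oscillating symbols: the weak limit of the coefficients is their arithmetic mean, whereas the limit of the inverses is governed by the harmonic mean, and the two differ for non-constant coefficients. The following numerical check is purely illustrative (the coefficient, the test function and the discretization are our own choices and are not taken from the text).

```python
import numpy as np

# 1-periodic, strictly positive coefficient; arithmetic mean 2, harmonic mean sqrt(3)
a = lambda s: 2.0 + np.sin(2 * np.pi * s)

t = np.linspace(0.0, 1.0, 200001)
dt = t[1] - t[0]
phi = np.exp(-30 * (t - 0.3) ** 2)          # fixed test function
mass = np.sum(phi) * dt

for n in (1, 10, 100, 1000):
    an = a(n * t)
    avg_a = np.sum(an * phi) * dt / mass        # pairing with a_n      -> arithmetic mean
    avg_inv = mass / (np.sum(phi / an) * dt)    # inverted pairing with 1/a_n -> harmonic mean
    print(f"n = {n:4d}:  limit seen by a_n = {avg_a:.4f},"
          f"  limit seen by 1/a_n, inverted = {avg_inv:.4f}")

print("arithmetic mean = 2.0,  harmonic mean = sqrt(3) =", round(np.sqrt(3), 4))
```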
First of all, we state a closedness result for the weak operator topology, leading to estimates, which will come in handy for applying the Neumann series argument. Then C ν (r) ⊆ L w sev (X, Y) is closed. Proof An application of Lemma 4.1.9 shows that C ν (r) ⊆ L w sev (X, Y) is closed if and only if C ν (r) ⊆ L w sev,ν (X, Y) is closed. The latter, however, is easily seen using that the unit ball of L(L 2 µ (X); L 2 µ (Y)) is closed under the weak operator topology for all µ ν. Remark 4.3.4 In the sequel, Proposition 4.3.3 may be applied as follows. Let (S ι ) ι be a convergent net in L w sev (X, Y) with the property that for all ι we have S ι ∈ C ν (r) for some r, ν > 0. Then lim ι S ι ∈ C ν (r). ⋄ The next proposition gives a more precise estimate of the limit operator in relation to its net converging to it. For this, we define for a net of non-negative real numbers (s ι ) ι the number lim inf ι s ι := inf{t ∈ [0, ∞]; t accumulation value of (s ι ) ι }. Hence, as the only accumulation value of (| ψ, For having a solution theory for the limiting equation as well, we need to warrant the conditions imposed in Theorem 3.1.4 for M and N respectively interchanged by M ∞ and N ∞ . As a part of this, we will need to study the inverse of the limit of (M −1 ι ) ι in more detail. In particular, we want the respective inverse to satisfy an estimate analogous to (4.9) for some suitable c > 0. This will be addressed next. Proof From (4.15), we get for all ι ∈ I that M −1 ι ∈ L sev,ν (X) with (M −1 ι ) µ 1/c for all µ ν, by Corollary 2.3.13 and Proposition 2.3.14. In particular, by Theorem 2.2.9, we read off where we used Hence, we obtain M ∞ = O −1 ∈ L sev,ν (X) and (4.16), by Corollary 2.3.13 and Proposition 2.3.14. Next, using (4.15) again, we get for all t ∈ R, ψ ∈ D(X), ι ∈ I, µ ν, Hence, computing the limit in ι and using Proposition 4.3.5 for Y = C and S ι = Q t M −1 ι ψ, we arrive at Substituting Oψ = φ we arrive at We turn back to the derivation of the limit of (sol(M ι , N ι )) ι in Corollary 4.3.2. So, assume the hypothesis of Corollary 4.3.2 to be in effect and denote for all k ∈ N: With these definitions at hand, we can describe the limit of (sol(M ι , N ι )) ι . Thus, the following theorem may be read as a sequel to Corollary 4.3.2. Proof Without loss of generality we will assume c < 1 in the following. Using the representation in (4.14) and recalling the argument in Corollary 4.3.2 (or Theorem 4.3.1), we get In the following we will show that R L(L 2 µ ) < 1 for eventually all µ large enough. Moreover, in order that sol being well-defined, we need to show that M ∞ satisfies the positive definiteness estimate (4.9) for some (possibly different) c > 0. Having shown all these statements, it is then a straightforward computation to verify that We address the norm estimate for R first. From Remark 3.1.5, we get for all sufficiently large η 0, ι ∈ I where θ = sup ι∈I N ι η /(νc) and N ι η := sup µ η N µ ι . Hence, for the limit in ι, we obtain with Remark 4.3.4 So, there exists η ν such that for every µ η, we obtain We are left with showing the positive definiteness type estimate (4.9) for M ∞ . For this, we compute Next, T := ∑ ∞ ℓ=1 R ℓ is a norm convergent limit of causal and evolutionary operators, and, thus, T is causal and evolutionary itself. 
In particular, for Q t denoting multiplication by 1 (−∞,t) and using Theorem 2.2.9, we get for every φ ∈ D(X) and µ ν Next, by Theorem 4.3.6 applied to O −1 together with the estimate just derived, we obtain for all t ∈ R, φ ∈ D(X) and µ η Remark 4.3.8 In the statement of Theorem 4.3.7 it is somewhat awkward that the solution operator converges to sol(M ∞ , N ∞ ) with N ∞ = 0. In fact, if ∂ −1 t commuted with M ι and N ι as -for instance -in the case of time translation-invariant coefficients (see Section 1.2), we have the more natural statement that In fact, this representation is indeed more natural, as (M ∞ , N ∞ ) ∈ SO c,ν (X), by Theorem 4.3.6 (see (4.17) in particular), whereas the positive definiteness constant for M ∞ given in Theorem 4.3.7 has to be adjusted (cf. the concluding lines of the proof of Theorem 4.3.7). ⋄ If the Hilbert space X is separable, a combination of the compactness and metrizability result for the weak operator topology, that is, Theorem 4.1.11 and Corollary 4.1.12, immediately yields the following statement for sequences of coefficients instead of nets. Then there exists a strictly increasing sequence (n k ) k in N such that (sol(M n k , N n k )) k converges in L w sev (X × Y) with sol being defined in (4.11). Proof It suffices to observe that, by relative sequential compactness of bounded sets in L w sev (combine Theorem 4.1.11 and Corollary 4.1.12), we may choose a strictly increasing sequence (n k ) k in N such that converge in L w sev,η , T n k = −(∂ t M n k ) −1 R n k and R n k = N n k ,00 − N n k ,01 N −1 n k ,11 N n k ,10 for all k ∈ N. Hence, the assertions follows from Theorem 4.3.1. The Drude-Born-Fedorov Model Maxwell's equations · curl · admissible domain · convergence of multiplication operators and the strong operator topology · Theorem 4.4.8 · Theorem 4.4.9 As a first major application of the theory developed so far, we will treat the Drude-Born-Fedorov model for electromagnetism. As it has been found in [PF13], this formulation of Maxwell's equations may be written as an ordinary differential equation in an infinite-dimensional Hilbert space. Hence, in the present chapter, it is of interest to apply the results of the preceding sections to this particular example. Before, however, applying the abstract theory of evolutionary equations, we need to frame the Drude-Born-Fedorov model into a proper functional analytic setting. We note that the results in [PF13] on the well-posedness for the Drude-Born-Fedorov model apply to a more general situation than the one discussed in the present exposition. Throughout this section, let Ω ⊆ R 3 be open. Formally, the equations may be written as follows. where for simplicity, we assume homogeneous initial conditions. Some comments on the constituents of (4.20) are in order. The unknowns of (4.20) are the two components of the electromagnetic vector field (E, H) : R 0 × Ω → R 3 × R 3 . The mapping J : R 0 × Ω → R 3 models the source term, that is, external electric currents. The 3 × 3-matrix valued functions ε and µ defined on R 0 × Ω are the dielectricity and the magnetic permeability of the underlying medium, β is a non-zero real number. The expression curl is the differential operator acting on the spatial variables of E and H only, which is formally given by for any smooth vector field φ : Ω → R 3 . We will use curl φ also for L 2 -vector fields φ in the distributional sense. We will need the following assumptions on the ingredients of (4.20). 
Definition 4.4.1 We say that Ω is an admissible domain, if there exists is self-adjoint and −1/β ∈ ρ(curl ⋄ ). The operator curl ⋄ is called an admissible realization (of curl). △ We shall elaborate on the relationship of admissible domains to the Drude-Born-Fedorov model as follows. Remark 4.4.2 A particular realization of the curl-operator can be found in [Pic98a,Pic98b]. The operator curl with this boundary condition is used for the description of the Drude-Born-Fedorov model, see [PF13]. It can be shown that this realization is selfadjoint provided Ω has finite Lebesgue measure, see [Fil00]. Another consequence is that the spectrum of this particular realization of curl is countable. In particular, for bounded open sets Ω there are uncountably many β ∈ R such that Ω is an admissible domain. We shall also refer to [Pic98a,Pic98b] for a corresponding treatment of unbounded Ω. ⋄ We emphasize that the precise selfadjoint realization curl ⋄ of curl is not important for the analysis to follow as long as −1/β ∈ ρ(curl ⋄ ). This observation together with Remark 4.4.2 leads to a generalized treatment of the Drude-Born-Fedorov model. Hypothesis 4.4.3 (on ε and µ) We assume there existsε,μ ∈ L ∞ (R × Ω) 3×3 witĥ for almost every (t, x) ∈ R × Ω. △ Remark 4.4.4 In the following, we will not distinguish between ε, µ and its respective extensionsε,μ to the whole real line. Thanks to causality, we will see that a solution to (4.20) will be independent of the extension of the coefficients to the negative reals. ⋄ In what follows, in order to ease readability considerably, we will identify ε with the corresponding multiplication operator on L 2 ν (R; L 2 (Ω) 3 ) for all ν ∈ R. Likewise, we shall do so for µ. Moreover, we identify curl ⋄ with its lifting to L 2 ν (R; L 2 (Ω) 3 ) as an (abstract) multiplication operator with L 2 ν (R; dom(curl ⋄ )) as domain for all ν ∈ R, see also Example 2.3.11. The well-posedness theorem corresponding to (4.20) reads as follows. Proof We want to apply Theorem 3.1.4. More precisely, take Hypothesis 4.4.3 guarantees inequality (4.9). Indeed, this follows from Hypothesis 4.4.3, the fact that Q t commutes with M and Example 2.1.1. Example 2.1.1 also yields (standard) evolutionarity of N, since, as curl ⋄ is an admissible realization, the operator curl ⋄ (1 + β curl ⋄ ) −1 considered in L 2 (Ω) 6 is bounded. ⋄ Next, we will apply the results on the continuous dependence to the Drude-Born-Fedorov model. We will need a prerequisite, which is particularly useful for the result corresponding to the strong operator topology. 89 Proposition 4.4.7 Let p, d ∈ N, (ε n ) n a sequence in L ∞ (R × Σ) p×p for some measurable Σ ⊆ R d , T ∈ L(L 2 (R × Σ) p ). Assume that T ε n → T as n → ∞ in the strong operator topology of L(L 2 (R × Σ) p ), where we recall that T ε n denotes the associated multiplication operator of ε n . Then (a) (T ε n ) n is bounded, there exists ε ∈ L ∞ (R × Σ) p×p such that T = T ε , and (b) T ε n → T ε in L s sev (L 2 (Σ) p ). Proof We start out with (a). Being strongly convergent, the sequence (T ε n ) n is bounded in L(L 2 (R × Σ) p ), by the uniform boundedness principle. Next, the convergence asserted implies convergence in the weak operator topology, which, in turn, for multiplication operators is easily seen to be equivalent to convergence in the weak* topology of L ∞ . But, by separability of L 1 (R × Σ), the unit ball of L ∞ (R × Σ) p×p is sequentially compact under the weak* topology. 
Hence, there exists ε ∈ L ∞ (R × Σ) p×p being the limit of a weakly* convergent subsequence of (ε n ) n . Since (any subsequence of) (T ε n ) n also converges in the weak operator topology, the strong operator topology limit of (T ε n ) n is induced by multiplication by ε. For the proof of (b), we will use Theorem 4.1.5. For this, observe that for all φ ∈ D(L 2 (Σ) p ) = ν∈R L 2 ν (L 2 (Σ) p ), we have that Since, ε n and ε commute with multiplication by functions of the type t → e ξt , ξ ∈ R, which is a bijection on D(L 2 (Σ) p ), we get that for all ν ∈ R ε n φ → εφ as n → ∞ in L 2 ν (R; L 2 (Σ) p ). Thus, by Theorem 4.1.5 employing the boundedness of (T ε n ) n again, we infer T ε n → T ε in L s sev (L 2 (Σ) p ) as n → ∞. Theorem 4.4.8 Let Ω an admissible domain, curl ⋄ an admissible realization, (ε n ) n , (µ n ) n in L ∞ (R × Ω) 3×3 . Assume that ε n and µ n satisfy Hypothesis 4.4.3 and that there exists c > 0 such that for all n ∈ N the mappings ε n and µ n satisfy the inequalities (4.21). Let S DBF be as in Theorem 4.4.5. (b) If (ε n ) n , (µ n ) n are such that the associated multiplication operators on L(L 2 (R × Ω) 3 ) converge in the strong operator topology, then Proof For the proof of (a), we want to apply Theorem 4.2.2. For this, we observe that convergence in L ∞ (R × Ω) 3×3 implies convergence of the associated multiplication operators in L(L 2 ν (R; L(Ω) 3 )) for all ν ∈ R. In particular, the sequences (ε n ) n and (µ n ) n are bounded and M n := ε n 0 0 µ n → lim n→∞ ε n 0 0 lim n→∞ µ n ∈ L n sev (L 2 (Ω) 6 ); Thus, the assertion indeed follows from Theorem 4.2.2. In order to prove (b), we deduce by Proposition 4.4.7 that (the multiplication operators induced by) (ε n ) n and (µ n ) n are bounded and converge in L s sev (L 2 (Ω) 3 ). The assertion follows from Theorem 4.2.3. Our next aim is to derive a result corresponding to Theorem 4.4.8 for the weak operator topology. We are aiming at a result, which provides a complete description of the limiting equation. For this, we need to confine ourselves with a restricted class of multiplication operators. Theorem 4.4.9 Let Ω an admissible domain, curl ⋄ an admissible realization. Let ε, µ : R → C bounded, measurable, 1-periodic functions with Re ε(t), Re µ(t) c for some c > 0. For n ∈ N define ε n (t) := ε(nt), µ n (t) := µ(nt), t ∈ R. Then, with S DBF as in Theorem 4.4.5, The proof of Theorem 4.4.9 is an application of Theorem 4.3.7. For this, we need to compute the limits in (4.18). First of all, however, we recall a well-known statement of more general nature, which is related to periodic mappings and will be stated without proof. Next, we recall a result from [Wau14a]. Proof Without loss of generality, we may assume X = C. Let ν > 0. For n ∈ N let T n := T 1,n ∂ −1 t,ν T 2,n ∂ −1 t,ν T 3,n · · · ∂ −1 t,ν T k,n . For K, L ⊆ R bounded, measurable, we compute with the help of Theorem 1.1.6 Moreover, the mapping (t 1 , . . . , t k ) → ∏ k j=1 a j (t j ) is (0, 1) k -periodic. Thus, by Theorem 4.4.10, we conclude that as n → ∞ for all K, L ⊆ R bounded and measurable. A density argument yields we want to compute O and P k , k ∈ N, as defined in (4.18). By Theorem 4.4.10, we obtain that Next, observe that C := curl ⋄ (1 + β curl ⋄ ) −1 commutes with M n , O and∂ −1 t as the former only acts on the spatial variables and the latter only act on the temporal ones. Hence, for k ∈ N, where we used Theorem 4.4.11. Thus, we may apply Theorem 4.3.7 to get So,∂ t M ∞ =∂ t O −1 + N yields the assertion. 
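The proof just given hinges on the fact that the rescaled coefficients converge only in the weak operator topology. The following toy computation (added here for illustration; the coefficient and the test vectors are arbitrary choices of ours) makes this visible: for ε_n = ε(n·) with 1-periodic ε, the associated multiplication operators do not converge strongly to multiplication by the mean, while pairings against a fixed vector do converge. This is precisely the dichotomy between Theorem 4.4.8 and Theorem 4.4.9.

```python
import numpy as np

# eps_n(t) = eps(n t) with eps 1-periodic of mean m: the multiplication operators
# converge to multiplication by m in the weak operator topology, but not strongly.
t = np.linspace(0.0, 1.0, 400001)
dt = t[1] - t[0]
eps = lambda s: 2.0 + np.sin(2 * np.pi * s)     # 1-periodic, mean m = 2
m = 2.0

phi = np.exp(-20 * (t - 0.5) ** 2)              # fixed vector phi
psi = np.cos(3 * np.pi * t)                     # fixed test vector psi

for n in (1, 10, 100, 1000):
    diff = (eps(n * t) - m) * phi
    strong = np.sqrt(np.sum(diff ** 2) * dt)    # |(eps_n - m) phi|: does not vanish
    weak = abs(np.sum(diff * psi) * dt)         # |<(eps_n - m) phi, psi>|: vanishes
    print(f"n = {n:4d}:  strong residual {strong:.4f},  weak residual {weak:.2e}")
```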
Comments In [Wau11], we have introduced a topology on possible coefficients M on equations of the form where M was assumed to be translation-invariant and causal. Hence, M = M(∂ −1 t,ν ) for an operator-valued, bounded, analytic function M : B(r, r) → L(X), r > 1 2ν , X Hilbert space. We gather the coefficients treated in the set H ∞ := H ∞ (B(r, r); L(X)) := {M : B(r, r) → L(X); M bounded, analytic}. We endowed H ∞ with the topology induced by where H (B(r, r)) is the space of scalar valued analytic functions on B(r, r) endowed with the compact open topology, that is, the topology of uniform convergence on compacts, φ, ψ ∈ X. It turns out that is compact under this topology, see [Wau14b,Theorem 4.3]. Moreover, one can show that, if (M ι ) ι in B H ∞ converges to some N ∈ B H ∞ , then, for all µ > 1/2r, we obtain (M ι (∂ −1 t,µ )) ι converges to N(∂ −1 t,µ ) in the weak operator topology of L(L 2 µ (X)), see [Wau12, Lemma 3.5]. Thus, is continuous. And, by compactness of B H ∞ , the mapping just defined is even a homeomorphism on its image. Thus, the results derived in the previous chapter and the results to follow are proper generalizations of the results being initially restricted to H ∞ . The study in [Wau11] has been developed for treating problems in homogenization theory. We address the general idea of homogenization theory by means of an example in Section 5.5. For now, we mention that, as a by-product of the functional analytic point of view developed, it is possible to explain memory effects occurring due to the process of homogenization: We consider for ε > 0 the solution u ε ∈ L 2 ν (R; L 2 (R)) of for some f ∈ C c (R × R). By the variation of constants formula we obtain Hence, multiplying the latter formula by φ ∈ L 2 (R) and integrating over x ∈ R, by Theorem 4.4.10, we infer , is the 0th order Bessel function of the first kind (Note that 1 0 e −(t−s) sin(2πx) dx = J 0 (i(t − s))). Is it possible to find a differential equation, which is solved by u and which has f as a given source term? In fact, using the Theorems 4.4.10 and 4.3.7, we obtain with a rather lengthy but straightforward computation We mention here that causal, translation-invariant coefficients for ordinary differential equations have been dealt with intensively in [Wau14a,Wau12]; in [Wau14a] we also treated causal evolutionary coefficients. However, we focused merely on sequences converging in the weak operator topology and did not choose the general perspective of discussing the continuity of the solution operator in the coefficients. The limiting equation is an equation of intergro-differential type. Hence, memory effects occur. A functional-analytic explanation is that computing the inverse of an operator is not a continuous process in the weak operator topology. For an account of homogenization theory with regards to ordinary differential equations, we refer to [Mas84,Tar90,Pet98,Ant93] for a non-exhaustive list. In the bulk of these studies, however, the description of the limiting equation uses the notion of Young measures, see also [Wau14a,Remark 3.8]. We briefly elaborate on Young measures as follows. Given a bounded sequence (a n ) n of [α, β]-valued L ∞ (R)-functions for some real α < β. 
For an appropriately chosen subsequence (a n k ) k it is possible to describe the weak*-limit of (g • a n k ) k for any continuous real function g ∈ C(R) by means of a family of measures (ν x ) x∈R in the way that The derivation of the Young measure (ν x ) x∈R (associated to (a n k ) k ) is not constructive and relies on a compactness theorem for the weak* topology for measures, see also [Bal89, Theorem 2]. The limit equation of (∂ t,ν − a n k )u k = f for some appropriate f can then be described by with a 0 being the weak*-limit of (a n k ) k and For other treatments of homogenization theory for ordinary differential equations, we refer to the list of references in [Wau14a]. The Continuous Dependence on the Coefficients in Partial Differential Equations In this chapter we will treat partial differential equations with regards to varying coefficients. More precisely, in Chapter 3 Section 3.2, see (3.12), we discussed equations of the form for some bounded M, N and a possibly unbounded A. So, in view of applications discussed later on, we ask for continuous dependence on M and N under the various topologies introduced in Section 4.1. In the first section to follow we will study both the norm and the strong operator topology. The second section will be concerned with the weak operator topology. Similar to the case of ordinary differential equations, the result for the weak operator topology is somewhat more involved. This chapter is concluded with continuous dependence results for partial differential equations and an application to homogenization theory. The Norm and the Strong Operator Topologies and Partial Differential Equations the set SP s c,ν,r · continuity result for the strong operator topology · continuity estimate for the norm topology · Theorem 5.1.3 · Theorem 5. 1.4 Similar to the case of ordinary differential equations in the previous chapter, we define the solution operator according to (5.1). For this, we recall the assumptions that lead to a solution theory for (5.1). We start out with the conditions on A. Hypothesis 5.1.1 Let X Hilbert space, ν > 0, A ∈ C ev,ν (X). Assume for all µ ν: and Next, we define the solution operator. We note that we only show a continuous dependence result for the strong operator topology. The corresponding result for the norm topology is a mere variant of a continuity estimate. where Q t denotes multiplication by 1 (−∞,t) and With the help of Theorem 3.4.6, the following mapping is well-defined. whereǍ := µ ν A µ and∂ t,ν := µ ν ∂ t,µ , see Example 2.3.11. △ In this section, we aim for establishing the following two results on the continuous dependence on M and N: For the norm topology, we have the following announced quantitative estimate. Norm and Strong Topologies and Partial Differential Equations Remark 5.1.5 Note that in the notation of sol(M, N), we did not keep explicit reference to M ′ . Once existent this operator is uniquely determined by the inclusion ⋄ Both the results Theorem 5.1.3 and Theorem 5.1.4 have their roots in the following fundamental identity. We recall that we will suppress the superscript of evolutionary mappings indicating the spaces, where the closure is computed. We compute in the space L 2 µ for some µ ν. Note that∂ −1 t,ν B −1 j φ ∈ dom(Ǎ ν ) (see e.g. Lemma 3.3.5). We compute with the help of Remark 3.3.4 is densely defined. When we come to the proof of Theorem 5.1.3, we will elaborate on the convergence of (M ′ ι ) ι . 
Proof First of all note that for all ε > 0, we obtain Indeed, εǍ satisfies Hypothesis 5.1.1 and (ε∂ t,ν M, 1 + εN) ∈ SP s 1,ν,r (X). Let µ ν. We make all computations in the space L 2 µ (X) and consider the closures of all evolutionary operators involved in this space, without explicitly recording it in the notation. As in Remark 3.2.3, it is readily seen that S ε → 1 in the strong operator topology of L(L 2 µ (X)). Proof (of Theorem 4.2.3) Recalling the reasoning right before Lemma 5.1.7 and Lemma 5.1.7 itself, we are left with showing the following: Let (M ι ) ι be a convergent net in L s sev (X) with the property that there exists r > 0 such that for all ι there is M ′ ι ∈ C ν (r) with the property Then lim ι M ′ ι exists in C ν (r) as a limit in L s sev (X) and lim First of all note that (M ′ ι ) ι lies C ν (r). So, if ((M ′ ι ) η ) ι converges in the strong operator topology for some η ν, then ((M ′ ι ) η ) ι converges in the weak operator topology of L(L 2 η (X)). But, C ν (r) w is closed in L w sev (X) by Proposition 4.3.3, this implies that (M ′ ι ) ι converges in C ν (r) w by Theorem 4.1.6. Hence, any accumulation value of (M ′ ι ) ι under the strong operator topology lies in C ν (r). Next, from (5.3), we read off that Let η ν be such that (M µ ι ) ι converges in the strong operator topology of L(L 2 µ (X)) for all µ η. For all φ ∈ dom(∂ t,µ ) we get The left hand side of equation (5.5) converges in L 2 where the latter is endowed with the graph norm of ∂ t,µ and so (∂ −1 t,µ (M ′ ι ) µ ) ι converges in the strong operator topology of L(L 2 µ (X), dom(∂ t,µ )). Next, as converges in the strong operator topology of L(L 2 µ (X)). Finally, from (5.5), by performing the limit in ι, we get (5.4). The Weak Operator Topology and Partial Differential Equations continuity result for the weak operator topology · the set SP w c,ν,r · Theorem of Aubin-Lions · weak-strong principle · Theorem 5.2.3 Similar to our way of presenting the case of ordinary differential equations, we now seek a result analogous to the Theorems 5.1.3 and 5.1.4 for the weak operator topology. As the rationale in the previous chapter shows, the weak operator topology is likely to be more involved due to the missing continuity statements in Theorem 4.1.13 (a) and (b) or a corresponding result of Theorem 4.1.15 for the weak operator topology. In this section, we will provide a criterion roughly saying the following: If (M ι ) ι converges in the weak operator topology, In view of Corollary 4.3.2 or Theorem 4.4.9, the limit to equal (∂ t,ν lim ι M ι + A) −1 might be somewhat unexpected: One might suspect that the limit should involve some sort of harmonic mean (as in Theorem 4.4.9). Indeed, if we formally set As (lim n→∞ M −1 n ) −1 = lim n→∞ M n for convergence in the weak operator topology in general, a result of the type (5.6) can only be true under additional assumptions on A. In fact, this is where the unboundedness of A comes into play: Hypothesis 5.2.1 We assume the conditions in Hypothesis 5.1.1, that is, X Hilbert space, ν > 0, A ∈ C ev,ν (X). Assume for all µ ν and φ ∈ dom(A), ψ ∈ dom((A µ ) * ), In addition, assume there exists a Hilbert space Y compactly embedded into X such that The solution operator to study in this section reads as follows. where C ν (r) = {S ∈ L sev,ν (X); sup µ ν S µ r} and Q t denotes multiplication by 1 (−∞,t) . 
Due to Theorem 3.4.6, we may define Next, we will state the most important result for a proof of Theorem 5.2.3, which is essentially a consequence of the Aubin-Lions Theorem and causality. Beforehand, recall the notation H 1 µ (R; X) for the domain of dom(∂ t,µ ) defined in L 2 µ (R; X) endowed with the graph norm of ∂ t,µ , X Hilbert space. Moreover, for a bounded interval J ⊆ R, we have that the canonical embedding L 2 (J; X) ֒→ L 2 µ (R; X), where we extend any function in the left-hand side by zero to the whole real line, is continuous for all µ ∈ R. Remark 5.2.9 (a) We note that there is also a way of including the topology of L w sev (X) as the underlying topology for SP w c,ν,r (X) and in the target space of sol in Theorem 5.2.3 in the following sense. Let (M ι ) ι∈I be a bounded and convergent net in L w sev (X) with the property that M ι ∈ SP w c,ν,r (X) for all ι ∈ I. By convergence of (M ι ) ι∈I , there exists η ν such that (M ι ) ι∈I converges in L w sev,η (X). Observe that from SP w c,ν,r (X) ⊆ SP w c,η,r (X), we get that lim ι M ι ∈ SP w c,η,r (X), as the space SP w c,η,r (X) is a closed subset of L w sev,η (X), by Remark 5.2.7. Hence, sol(M ι ) converges to sol(lim ι M ι ) in L w sev (X), by Theorem 5.2.3 applied to SP w c,η,r (X) instead of SP w c,ν,r (X). (b) It should be noted that a slightly more detailed analysis than the one done in the proof of Theorem 5.2.3 shows the following stronger statement. with weak convergence in L 2 µ (X) for all µ ν, where (M ι ) ι is any bounded and convergent net in SP w c,ν,r (X). ⋄ The concluding sections of this chapter are devoted to examples. The Eddy-Current Approximation in Electromagnetic Theory Maxwell's equations · electric boundary condition · Theorem 5.3.7 In this section, we will consider Maxwell's equations in matter. For this, let throughout this section Ω ⊆ R 3 be an open set. The equations are formally given by on the space time cylinder R × Ω subject to homogeneous electric boundary conditions for E of vanishing tangential components. As in the Drude-Born-Fedorov model discussed in Section 4.4, the unknowns are the two components of the electromagnetic field (E, H) : R × Ω → R 3 × R 3 . The material's properties are gathered in the coefficients ε, µ, σ : R × Ω → C 3×3 , which respectively are the dielectricity, magnetic permeability and the electric conductivity of the underlying medium. The given righthand side J : R × Ω → R 3 is a source term modeling external currents. We think of the system (5.19) of being given on the whole time line R bearing in mind that -thanks to causality -the consideration of the real half-line as time parameter space eventually merely results in a restriction on the support of J, see also Remarks 4.4.4 and 4.4.6. In the study of eddy-currents (see e.g. [You12]), the dielectricity ε is observed to be rather small compared to the other operators σ and µ involved in (5.19). That is whyfor simplicityε is often neglected to the effect that the resulting system, the so-called eddy-current approximation, formally reads on R × Ω subject to the electric boundary condition. Substituting the equation for σE into the one of ∂ t µH, one obtains an equation of parabolic type. As an application of our results in Section 5.1, we will study the "distance" of solutions from (5.19) to (5.20). Before, however, doing so we set up the functional analytic framework for both the equations (5.19) and (5.20). 
For this, we introduce the L 2 -operator realization of the curl-operator with homogeneous electric boundary condition: Definition 5.3.1 Denote C 1 c (Ω) 3 the set of continuously differentiable vector fields and define Note that curl c ⊆ curl * c = curl, where curl is the (maximal) L 2 (Ω)-realization of the (distributional) curl operator, that is, curl : {φ ∈ L 2 (Ω) 3 ; curl φ ∈ L 2 (Ω) 3 } ⊆ L 2 (Ω) 3 → L 2 (Ω) 3 acting as the distributional curl operator, see also Definition 4.4.1. Define We note that curl * 0 = curl. As we do not assume any regularity of the boundary of Ω, in general, there is no continuous (tangential) trace operator. So, the replacement of the homogeneous electric boundary condition for E is that E ∈ dom(curl 0 ). A first step towards a solution theory for (5.20) and (5.19) is the following almost trivial observation: Lemma 5.3.2 The operator Proof The result is an application of the observation that for densely defined, closed linear operators B 1 : dom(B 1 ) ⊆ X → Y and B 2 : dom(B 2 ) ⊆ Y → X for some Hilbert spaces X and Y, we have As in the case of the Drude-Born-Fedorov model, we lift the operator A defined in Lemma 5.3.2 as an (abstract) multiplication operator to the space time setting discussed: We set for all ν ∈ R. We record the following facts regarding A ν : (l1) for all ν ∈ R the operator A ν is skew-selfadjoint; (l2) for all ν ∈ R and φ ∈ L 2 ν (R; dom(A)), we infer Q 0 φ ∈ L 2 ν (R; dom(A)) = dom(A ν ), Q 0 multiplication by 1 (−∞,0) , and Next, we study the assumptions on the operators of multiplying by ε, µ and σ: , that is, ε and µ are continuously differentiable with bounded derivatives and attaining values in the selfadjoint, positive definite matrices: Next, assume that there is ν ∈ R, c > 0, such that for all η ν we have for almost every x ∈ Ω and t ∈ R. △ Before applying the continuous dependency results from Section 5.1, we show that the assumptions in Hypothesis 5.3.3 together with the operator A ν introduced in (5.21) lead to a proper solution theory for both equations (5.19) and (5.20). For this, we provide the following two lemmas: Lemma 5.3.4 Let ν ∈ (0, ∞), A ν given by (5.21). Then the following assertions hold true. Proof For (a), we observe that the skew-selfadjointness of A (Lemma 5.3.2) implies the same for A ν (see (l1)). Hence, dom(A ν ) = dom(−A ν ) = dom((A ν ) * ). We recall that for a skew-selfadjoint operator, the respective real-part vanishes. This together with (l2) implies for all φ ∈ dom(A ν ) In order to prove part (b), we apply Theorem 1.1.6 to X = A, that is, the Hilbert space A considered as a closed subspace of L 2 (Ω) 6 × L 2 (Ω) 6 (see Lemma 5.3.2). By Theorem 1.1.6, we infer that ∂ −1 t,ν is a continuous linear operator from into itself. This, in fact, is the assertion. As in the section on the Drude-Born-Fedorov model discussed earlier, for the sake of readability, we identify ε, µ and σ with their respective multiplication operators in L 2 ν (R; L 2 (Ω) 3 )). Proof Note that (a) has already been proven in Example 2.1.1. Thus, we are left with proving (b): Taking φ ∈ dom(∂ t,η ), we compute using integration by parts So, we come to a solution theory for both (5.19) and (5.20). Note that for the coefficients ε, µ and σ satisfying Hypothesis 5.3.3 both equations to study in this section are covered as ε = 0 satisfies inequality (5.22) as well as the regularity requirements asked for in Hypothesis 5.3.3. 
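Before turning to the precise well-posedness and approximation statements, the following spatially reduced toy computation may help to fix the intuition behind the eddy-current limit described above. It is entirely our own sketch: curl is replaced by a scalar coupling constant, the parameter values are arbitrary, and the time discretization is a plain implicit Euler scheme. The point is only that, for vanishing initial data and a smooth source, the solutions of the full system approach the solution of the system in which the dielectricity has been set to zero.

```python
import numpy as np

# spatially reduced toy system (curl replaced by the constant kappa):
#   eps E' + sigma E - kappa H = -J,   mu H' + kappa E = 0,   E, H vanishing in the past
sigma, mu, kappa = 1.0, 1.0, 1.0
t = np.linspace(0.0, 10.0, 20001)
dt = t[1] - t[0]
J = np.exp(-(t - 2.0) ** 2)                 # smooth source switched on around t = 2

def solve(eps):
    E, H = np.zeros_like(t), np.zeros_like(t)
    step = np.linalg.inv(np.array([[eps + dt * sigma, -dt * kappa],
                                   [dt * kappa,        mu        ]]))
    for k in range(len(t) - 1):             # implicit Euler
        b = np.array([eps * E[k] - dt * J[k + 1], mu * H[k]])
        E[k + 1], H[k + 1] = step @ b
    return E, H

E0, H0 = solve(0.0)                         # eddy-current approximation: eps = 0
for eps in (0.5, 0.1, 0.01):
    E, H = solve(eps)
    err = np.sqrt(np.sum((E - E0) ** 2 + (H - H0) ** 2) * dt)
    print(f"eps = {eps:4.2f}:  L2 distance to the eddy-current solution = {err:.4f}")
```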
We stress that we will not record the parameter ν in the notation of the operators curl and curl 0 of the respective liftings to L 2 ν (R; L 2 (Ω) 6 ). is standard evolutionary at ν. More precisely, there exists r > 0 such that where the latter space is given in Definition 5.1.2. Proof By Lemma 5.3.4, the operator A ν given in (5.21) satisfies all assumptions on A in Hypothesis 3.4.4. Moreover, Lemma 5.3.5 ensures that , t ∈ R (apply Lemma 5.3.5 to µ − c in place of ε); Re Q t σ cQ t is easy to see. The rest of the conditions needed for Theorem 3.4.6, that is, the remaining conditions in Hypothesis 3.4.4, have been established in Lemma 5.3.5 as well. The continuous dependency result, now, justifies the eddy-current formulation of Maxwell's equation as a proper approximation of the equations given originally. We state the result not in its most general form, as we will leave µ and σ being fixed. We focus on variations in the dielectricity only: Theorem 5.3.7 Let µ, σ as in Hypothesis 5.3.3, (ε n ) n bounded in C 1 b (R; L ∞ (Ω) 3×3 pd ) such that for all n ∈ N the map ε n satisfies inequality (5.22) for all η ν for some ν ∈ R; assume that ε n → 0 in C b (R; L ∞ (Ω) 3×3 pd ). Let S MAX be given as in Theorem 5.3.6. Then S MAX (ε n , µ, σ) → S MAX (0, µ, σ) in L s sev (R; L 2 (Ω) 6 ) as n → ∞, (5.24) and, for all n ∈ N, we have for all η ν Proof First of all observe that for all η ∈ R the operator norm of T f the multiplication operator associated to sup t∈R f (t) L ∞ (Ω) 3×3 . The latter inequality implies the continuity of the embedding Hence, (the multiplication operators associated to) ε n converge in L n sev (L 2 (Ω) 3 ) to 0 . Thus, ε n → 0 in L s sev (L 2 (Ω) 3 ) as n → ∞, by (4.2). Hence, as n → ∞ and by Theorem 5.3.6, we get as n → ∞ for r := sup n∈N ε ′ n ∞ . Therefore, we infer (5.24) from Theorem 5.1.3. For the proof of the estimate, employing Theorem 5.1.4 and using that sol we compute for all η ν 1/η and S MAX (0, µ, σ) L(L 2 η ) 1/c. Remark 5.3. 8 We remark here that in order that the solution operators of the originally given Maxwell's equation strongly converge to its respective eddy-current approximation where the dielectricity is formally set to 0, one only needs to ensure the dielectricity to be uniformly small in the norm in C b and the respective derivatives being bounded. The Continuous Dependence on the Conductivity in Non-autonomous Thermodynamics non-autonomous heat equation with rough coefficients · div · grad · homogeneous Dirichlet boundary conditions · Theorem 5.4.6 In this section, we present an application of Theorem 5.1.3. We focus on the heat equation with time dependent, non-symmetric and rough coefficients on general domains Ω ⊆ R d . For the rest of this section, let Ω ⊆ R d be open. We mention that there exists a deep theory for this type of equations with equally general coefficients in an L p -type setting on R d as underlying spatial domain, see [AMP15] and the profound list of references therein. Concerning the solution theory, we do not claim any originality here, however, we stress the comparatively easy way of deriving the present well-posedness result. The equations to be discussed in this section read as on R × Ω. The vector analytic operators div and grad are computed with respect to the spatial variables x ∈ Ω only. The map a : R × Ω → C d×d models the thermal conductivity, θ and q are the heat and the heat flux, respectively. The right-hand side f : R × Ω → R is a given external heat source. 
Subject to homogeneous Dirichlet boundary conditions to be satisfied by θ on the boundary of Ω, we try to solve equation (5.25) for (θ, q). Having done so, we address the question of continuity of the solution operator in the thermal conductivity subject to an appropriate topology. As in the Sections 4.4 and 5.3, we will build up a proper functional analytic framework for (5.25) first and provide a solution theory for this problem. After that, we will apply Theorem 5.1.3 in order to address the continuity in the thermal conductivity. For the functional analytic set up, we introduce the vector analytic operators div and grad: Definition 5.4.1 We set is true. We gather these assertions in one lemma, which we will state without proof as the reasoning follows the lines of the one in the Lemmas 5.3.2 and 5.3.4 upon replacing curl 0 by grad 0 and curl by div. The assumptions on the conductivity a are gathered in the following hypothesis. a(t, x). Then, a(t, x) is an invertible matrix, and ζ = a(t, x)a(t, x) −1 ζ a(t, x) a(t, x) −1 ζ yields for ζ ∈ C d with ξ := a(t, x) −1 ζ: which implies the assertion. We are now in the position to provide a solution theory for (5.25). Proof By Lemma 5.4.2, the operator A ν (= 0 div grad 0 0 ) defined in Lemma 5.4.2 meets the requirements imposed on A in Hypothesis 3.4.4. Moreover, integration by parts, shows that Re Q t ∂ t,ν φ, φ L 2 η ν Q t φ, φ L 2 η for all φ ∈ C 1 c (R; L 2 (Ω)) and η ν > 0, t ∈ R. Further, from Lemma 5.4.4 it follows that Hence, Thus, all requirements in Hypothesis 3.4.4 are warranted. So, Theorem 3.4.6 applies and S HEAT is evolutionary at ν and causal, or, equivalently, standard evolutionary at ν. The remaining assertion follows from the estimates just derived together with the fact that M ′ = 0 for M = 1 0 0 0 . The continuous dependence result on the conductivity is presented next. We recall that we again identified a ∈ L ∞ (R × Ω) d×d with its associated multiplication operators on L 2 ν (R; L 2 (Ω) d ) for all ν ∈ R. Theorem 5.4.6 Let (a k ) k be a sequence in L ∞ (R × Ω) d×d and assume that b := lim k→∞ a k exists in the strong operator topology of L(L 2 (R × Ω)). Assume there exists c > 0 such that a k satisfies Hypothesis 5.4.3 with this c for all k ∈ N. where S HEAT is given in Theorem 5.4.5. Proof By Theorem 5.4.5, we may apply Theorem 5.1.3. For this, we have to ensure that But, this is the same as saying that (a −1 k ) k is convergent to b −1 in L s sev (L 2 (Ω) d ). By Proposition 4.4.7, we infer boundedness and convergence of the sequence (a k ) k to b in L s sev (L 2 (Ω) d ). Note that also Re Q t bφ, φ L 2 ν c Q t φ, φ L 2 ν for all φ ∈ D(L 2 (Ω) d ) = 1/c for all ν ∈ R, see Corollary 2.3.13 and Proposition 2.3.14, we infer from Theorem 4.1.15 that a −1 k → b −1 in L s sev , which together with Theorem 5.1.3 eventually proves the assertion. On the Homogenization of Acoustic Wave Propagation in Bounded Domains the set M(α, β) · G-convergence · solution theory for elliptic type equations · relationship to the weak operator topology · Theorem 5.5.6 The last application of the results developed concerns the weak operator topology. The motivation of this kind of problems stems from homogenization theory. The idea is to consider heterogeneous materials that have highly oscillatory material coefficients. The aim of homogenization theory is to determine the 'effective' properties of the material by looking at the behavior of the solutions of the respective equations when the frequency of oscillations becomes infinitely large. 
To be more precise, on a bounded domain Ω ⊆ R d consider the wave equation formally given as follows (see also [CD10,Example 5.3]) ∂ 2 t u − div a grad u = f (5.28) on R × Ω, where a : Ω → C d×d pd (taking values in the symmetric, positive definite d by d matrices) describes the material properties, f : R × Ω → C is a given source term, u : R × Ω → C describes the unknown wave propagation subject to homogeneous Dirichlet boundary conditions. In the theory of homogenization, for ε > 0, one is interested in the problem often with a ε (x) := a(x/ε) for x ∈ Ω and addresses the question, whether (u ε ) ε converges as ε → 0 and, if so, whether the respective limit solves an equation of 'similar' type as in (5.28). There exists a vast literature on homogenization theory, we only refer to [BLP78], [CD10] and [Tar09] to mention a few. In order to tackle problems in homogenization theory Spagnolo [Spa67,Spa68] introduced the concept of G-convergence. We recall this concept here. For this, we use the vector analytic operators given in Definition 5.4.1. The literature also gives an account on what conditions are needed for a such that a(·/ε) is 'G-convergent' and provides formulas for the respective limits as well as quantitative estimates for the rate of convergence related to ε. {a ∈ L ∞ (Ω) d×d ; a ∞ β, a(x)ξ, ξ C d α ξ 2 (ξ ∈ C d , a.e. x ∈ Ω)}. Then (a ε ) ε is said to G-converge to b ∈ M(α, β) as ε → 0, if for all f ∈ H −1 (Ω) = (H 1 0 (Ω)) * the solution u ε ∈ dom(grad 0 ) of − div a ε grad 0 u ε = f is such that (u ε ) ε converges weakly in H 1 0 (Ω) to u 0 satisfying − div b grad 0 u 0 = f . △ In order to put the latter definition into perspective of the results developed so far, we insert a short interlude on the solvability of elliptic problems, in particular to those mentioned in Definition 5.5.1. Although a solution theory for equations discussed in Definition 5.5.1 is well-known, we like to point out a slightly more abstract point of view, which has proved useful for general (non-linear) elliptic type problems in [TW14a]. It rests on the following observations. Let X, Y be Hilbert spaces and let S : X → Y be a continuous bijection. Then, by the closed graph theorem, S −1 is continuous as well. Hence, S is Banach space isomorphism from X to Y. Thus, we change the scalar product in Y to be Y × Y ∋ (φ, ψ) → S −1 φ, S −1 ψ X resulting in a scalar product on Y, which is equivalent to the original one. We call Y S the Hilbert space endowed with this modified scalar product. It is easy to see that S : X → Y S is unitary. Thus, the dual operator S ′ : Y * → X * is a Banach space isomorphism. By identifying Y * = Y via the unitary Riesz isomorphism R : Y * → Y, we infer that the (modified) dual is a Banach space isomorphism again. We apply this rationale to (a modification) of the vector-analytic operators introduced in Definition 5.4.1. For this, observe that, as Ω is bounded we have a Poincaré inequality, that is, we find c > 0 such that u L 2 (Ω) c grad 0 u L 2 (Ω) d (u ∈ H 1 0 (Ω)). The latter ensures two-fold, on the one hand grad 0 is injective and on the other hand, by the closedness of grad 0 , the range of grad 0 is a closed subspace of L 2 (Ω) d . We denote π : L 2 (Ω) d → ran(grad 0 ) the orthogonal projection onto ran(grad 0 ). Applying the reasoning just developed to X = H 1 0 (Ω) and Y = ran(grad 0 ), we get that π grad 0 : H 1 0 (Ω) → Y π grad 0 , u → π grad 0 u is unitary. Proof The conditions (i) and (ii) are trivial reformulations of one another. 
Theorem 5.5.2 together with (5.30) and (5.31) imply that (ii) is true if and only if ((πa ε π * ) −1 ) ε converges to (πbπ * ) −1 in the weak operator topology of the space L(Y π grad 0 ). The latter, in turn, is equivalent to (iii). Before being able to apply our result concerning the weak operator topology to the homogenization type problem in (5.29), we need to warrant the compactness condition in Hypothesis 5.2.1. For this, the following observation comes in handy. The proof of which stems from [Wau13]. Theorem 5.5.4 ([Wau13, Lemma 4.1]) Let X, Y Hilbert spaces, S : dom(S) ⊆ X → Y densely defined, closed and assume that (dom(S), · S ) ֒→ X is compact. Then (dom(S * ) ∩ ker(S * ) ⊥ , · S * ) ֒→ Y is compact. where U : ran(S * ) → X is a linear isometry from ran(S * ) to ran(S). Note that by equation (5.32), we see that V : ran(S * ) → ran(S) : x → Ux is a linear isometry with dense range. Thus, V is unitary. Furthermore, we have V −1 x = U * x for all x ∈ ker(S * ) ⊥ = ran(S). Let (x n ) n be a bounded sequence in (dom(S * ) ∩ ker(S * ) ⊥ , · S * ). Adjoining equation (5.32) yields that (U * x n ) n is a bounded sequence in dom(|S|). Since dom(|S|) = dom(S), we may choose a convergent subsequence of (U * x n ) n , for which we use the same notation. Since V is unitary, we have that (VU * x n ) n also strongly converges and, thus, so does (x n ) n = (VV −1 x n ) n = (VU * x n ) n . In order to put Theorem 5.2.3 into the perspective of homogenization theory, we reformulate equation (5.28) according to our general setting of evolutionary equations developed in this exposition. So, assuming that a ∈ M(α, β) for some 0 < α < β and recalling (5.28) ∂ 2 t u − div a grad 0 u = f , (5.33) we set v := ∂ t u and p := πaπ * π grad 0 with π : L 2 (Ω) d → ran(grad 0 ) being the orthogonal projection. Hence, we obtain ∂ t 1 0 0 (πaπ * ) −1 − 0 div π * π grad 0 0 v p = f 0 . (5.34) In the reformulated fashion, we can now apply Theorem 3.4.6, where we employ the custom of not keeping track of the particular L 2 ν the respective operators are realized in as well as of identifying elements a ∈ M(α, β) with their corresponding multiplication operators. Finally, we come to the announced result on the homogenization of (5.29). Remark 5.5.7 The convergence of solution operators S WAVE (a ε ) proved in Theorem 5.5.6 implies the convergence of the solution operators in the weak operator topology of L 2 ν (R; L 2 (Ω) × ran(grad 0 )) for some ν ∈ R. The latter in turn is equivalent to the notion of G-convergence introduced by Spagnolo in the first place, see also [ZKON79,p 74]. The novelty element of Theorem 5.5.6 lies in the fact that the solution operators converge in the weak operator topology of evolutionary mappings. Thus, one might phrase the result of Theorem 5.5.6 as "evolutionary G-convergence" of the solution operators of the wave equation. Comments We want to give a brief account on the study of the continuous dependence of solutions to partial differential equations on the coefficients. There are some results for particular equations or with both stronger and weaker topologies the coefficients are considered in. The focus in the available literature is on non-linear equations. In [YG11] a particular non-linear equation is considered and the continuous dependence of the solution on some scalar factors is addressed. The so-called Brinkman-Forchheimer equation is discussed with regards to continuous dependence on some bounded functions under the sup-norm in [CKU06,FS03,TL07,PS99,Liu09]. 
The local sup-norm has been considered in [BFGP09], where the continuous dependence on the (non-linear) constitutive relations for particular equations of fluid flow in porous media is discussed. A weak topology for the coefficients is considered in [Kim09]. However, the partial differential equations considered are of a specific form and the underlying spatial domain is the real line. Dealing with time dependent coefficients in a boundary value problem of parabolic type, the author of [Pen91] shows continuous dependence of the associated evolution families on the coefficients. In [Pen91], the coefficients are certain functions considered with the C 1 -norm. The author of [Tud76] studies the continuous dependence of diffusion processes under the C 0 -norm of the coefficients. Also with regards to strong topologies, the authors of [KvN11,KvN12] studied continuous dependence results for a class of stochastic partial differential equations. All in all, the results developed in this exposition merely complement the research on the continuous dependence on the coefficients. A main reason being that the class focused on in this exposition is rather general and, thus, applies to many particular settings. In turn, the advantage of being applicable to many equations at the same time inherits the drawback of being certainly not optimal, if one restricts oneself to a specific class of both equations and coefficients. A particular instant for this fact can be found in [CC15]. With regards to homogenization theory, we shall particularly refer to [Tar09,CD10,BLP78], where the continuous dependence of the coefficients has been addressed in the particular situation of homogenization problems. For the concept of H-convergence generalizing G-convergence we also refer to [Mur78b,Mur78a]. The present exposition does not treat the case of Maxwell's equations with regards to the weak operator topology. We refer to [Wau16a] instead for the rather involved treatment. Note that Maxwell's equations have the particular property that memory effects (as in the case of ordinary differential equations) are likely to occur due to the homogenization procedure. This has also been observed in [BS03,Wel01]. The reason being that -in contrast to the example treated in Section 5.5 -both the spatial derivative operators (curl and curl 0 ) have an infinite-dimensional nullspace. Hence, a projection technique similar to the one in Section 5.5 applied to Maxwell's equation leads to a system of a partial differential equation and an ordinary differential equation with infinite-dimensional state space, see [Wau16a] for more details. There is still plenty of research to be addressed in the future: the weak operator topology and Maxwell's equations have only been dealt with for time-shift invariant coefficients in [Wau16a]. The results concerning the norm and the strong operator topology being entirely new, there is certainly room for optimizing the respective results. Regarding numerical treatments of evolutionary equations discussed here, a natural way of complementing the present results is to ask for continuity in the unbounded operator A in the solution operator induced by (∂ t M + A)u = f : Does for instance strong resolvent convergence of a sequence (A n ) n imply convergence of the corresponding solution operators S n = (∂ t M + A n ) −1 ? In fact, this would give way for the study of Galerkin approximations and hence form a part of numerical analysis for evolutionary equations. 
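Written out, the closing question concerns solution operators of the form

```latex
S_n := (\partial_t M + A_n)^{-1}, \qquad S := (\partial_t M + A)^{-1} \in L\!\left(L^2_\nu\right),
```

and asks whether strong resolvent convergence of (A_n)_n to A already forces S_n → S in the strong operator topology of the underlying L^2_ν space; an affirmative answer would in particular cover Galerkin-type approximations of A.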
Apart from Theorem 5.1.4, all results on the continuity of the solution operator in the coefficients were non-quantitative. Future research may thus be concerned with convergence rates or moduli of continuity of the mappings considered.
Large insulating nitride islands on Cu3Au as a template for atomic spin structures We present controlled growth of c(2$\times$2)N islands on the (100) surface of Cu$_3$Au, which can be used as an insulating surface template for manipulation of magnetic adatoms. Compared to the commonly used Cu(100)/c(2$\times$2)N surface, where island sizes do not exceed several nanometers due to strain limitation, the current system provides better lattice matching between metal and adsorption layer, allowing larger unstrained islands to be formed. We show that we can achieve island sizes ranging from tens to hundreds of nanometers, increasing the potential building area by a factor 10$^3$. Initial manipulation attempts show no observable difference in adatom behaviour, either in manipulation or spectroscopy. Introduction The ability to position individual magnetic adatoms into a specific arrangement on a surface holds great potential for atomic scale studies of quantum magnetism [1]. A particularly successful template for the placement of transition metal atoms is the c(2×2) reconstruction of nitrogen on the Cu(100) crystal surface [2], which provides a self-terminated insulating monolayer, separating the atomic spins from the conduction electrons in the metal below [3]. Due to its covalent structure, the copper-nitride surface provides significant magneto-crystalline anisotropy [4] and allows for tunable spin-spin coupling between neighbouring atoms, both ferromagnetic and antiferromagnetic [5,6]. The combination of these techniques has given rise to a range of seminal experiments, including the construction of a 96-atom magnetic byte [7], the observation of spin waves in a one-dimensional spin chain [8], and the atomically precise study of various highly entangled spin systems [9,10]. As atom manipulation techniques become more reliable [11], the size of atomic structures is only limited by the maximum available continuous building area. In the case of copper-nitride, this limit is imposed by the nitrogen islands. Due to a 3% lattice mismatch between the adsorption layer and the underlying Cu(100) crystal, island sizes are strain-limited to ∼5 nm × 5 nm -or, on saturated surfaces up to 20 nm × 20 nm [12] -hampering the assembly of any spin structure larger than that. Here, we present growth of nitride islands on a different metal substrate: the Cu 3 Au(100) surface. With a lattice constant a = 0.375 nm [13], its lattice much better matches the one of coppernitride (a = 0.372 nm [14]) than the Cu(100) surface (a = 0.359 nm [15]) does. By properly tuning growth conditions, we can routinely grow islands ranging from tens to hundreds of nanometres across, vastly increasing the area on which spin structures can be assembled. Experimental details The experiments were performed in a scanning tunnelling microscope (STM) operating in ultra-high vacuum (UHV) and cryogenic condi- tions. During measurements the pressure was <5 × 10 −10 mbar and the temperature was between 1.4 K and 1.5 K. Sample preparation was performed in situ in a UHV chamber connected to the STM, which has a base pressure of <4 × 10 −10 mbar. The preparation chamber is equipped with standard sputtering and e-beam annealing equipment, and has inlets for pure argon and nitrogen (99.999%). We monitor the sample temperature during annealing by means of a pyrometer. Due to stray radiation originating from the filament behind the sample, the actual temperature readout, while reliable, is overestimated. 
In order to approximate the real temperature of the sample during annealing, we record the cooling curves after turning off the filament, and extrapolate back. We used a commercial Cu 3 Au crystal grown by Surface Preparation Laboratory, which was cut along the (100) plane with ∼0.1 • accuracy and polished to a roughness <0.03 µm. Prior to growing the nitride islands, the crystal was cleaned with multiple rounds of argon sputtering at 1 kV followed by annealing. This process was repeated until a clean surface with large plateaus was observed in STM images. Nitrogen was subsequently implanted into the superficial layer by sputtering N + ions onto the surface. We used an sputtering voltage of 500 V and a current of 1 µA to achieve coverages in the order of a monolayer per minute. To favor the mobility of the implanted nitrogen atoms and repair possible damage to the surface, we follow the sputtering by an annealing process, leading to the formation of a c(2x2)N reconstruction on the Cu 3 Au(100) surface, similar to that reported for Cu(100) [2]. A Cu 3 Au crystal can be in two distinct phases: an ordered L1 2 phase [16,17] upon annealing below a critical temperature T c = 663 K [18,19,20], and a disordered phase above this temperature [21,22]. While both phases have the FCC crystal structure, in the L1 2 phase the Au atoms are periodically distributed over the crystal whereas in the disordered phase they are not. The transition between the two phases is reversible [23,24]: the crystal can be brought back into the L1 2 phase in a matter of hours by annealing at temperature near T c [23,25]. The lattice constant of the disordered phase is slightly larger than the ordered L1 2 phase (0.3762 nm and 0.3754 nm) respectively [26]). Results and discussions In a first series of experiments, a clean and ordered L1 2 sample was sputtered with nitrogen for 45 s at a current of 0.8 µA and an accelerating voltage of 0.5 kV. The sample was then annealed for 5 min. The annealing temperature was kept at T > T c for only short periods of time, preserving the order in the bulk of the crystal. The surface is faster to both order and disorder when crossing the critical temperature, taking place on a broader temperature range [27,28]. Figure 1 shows the effect of different annealing temperatures (as determined via the process described above) on the size and distribution of the islands. The resulting islands vary in size from 10 nm to 100 nm in their longest direction where the largest islands appear only at a higher temperature. We observe a trend towards larger islands for higher temperatures. The edges of the island are mostly straight and oriented along the crystallographic axes (rotated between 5 • and 10 • clockwise relative to the image frame), as is observed on Cu(100). The island size is strongly increased with respect to the case of Cu(100) owing to a reduced strain accumulation, due to the better match in lattice parameters. A second series of experiments was performed after a prolonged high temperate treatment of the crystal. We annealed the crystal for 15 hours at >900 K. This temperature is well above the critical temperature of 663 K, driving the crystal from the ordered L1 2 into a disordered FCC phase. The disordered crystal was then prepared in a similar fashion as the ordered crystal. The amount of sputtered nitrogen is similar to the amount sputtered in the preparations in Figure 1 and the annealing time was kept unchanged at 5 min for each preparation. 
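The back-extrapolation of the annealing temperature from the recorded cooling curves is not specified in detail; one way it could be done, assuming Newton-type cooling after the filament is switched off, is sketched below. The fit function, the time constant, and all readings are hypothetical placeholders, not data from this work.

```python
# Minimal sketch: fit a Newton-cooling curve to pyrometer readings taken after
# switching off the filament, and extrapolate back to t = 0 to estimate the
# temperature during annealing. All numbers are invented placeholders.
import numpy as np
from scipy.optimize import curve_fit

def newton_cooling(t, T_env, dT0, tau):
    """Temperature vs. time after the filament is switched off."""
    return T_env + dT0 * np.exp(-t / tau)

t = np.array([5.0, 10.0, 20.0, 40.0, 80.0, 160.0])   # seconds after switch-off
T = np.array([810.0, 790.0, 755.0, 700.0, 620.0, 520.0])  # pyrometer readings (K)

popt, _ = curve_fit(newton_cooling, t, T, p0=(300.0, 500.0, 100.0))
T_env, dT0, tau = popt
T_at_switch_off = T_env + dT0   # extrapolation back to t = 0
print(f"estimated sample temperature during annealing: {T_at_switch_off:.0f} K")
```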
The resulting surfaces at different annealing temperatures can be seen in Figure 2. The island sizes follow the same trend as on the ordered crystal, with islands ranging from 10 nm to 100 nm where larger islands are observed for higher temperatures. A major difference is found Before scanning at 5V After scanning at 5V in the island geometry. In the ordered case the islands have mostly straight edges. In contrast, the islands of the disordered crystal are more rounded, showing no clear preferable orientation with regards to the crystallographic directions, indicating a more isotropic diffusion. This can be explained by disorder creating slight local variations in the lattice parameters and allowing the strain of the nitrogen reconstruction to be released. This strain release process could allow the reconstruction of islands without fundamental limits in their sizes. Scanning the surface at higher bias voltage V b reveals features that were not evident for V b < 0.5 V. Figure 3 shows the same island scanned at different bias voltages. At V b = 1.5 V we observe bright spots appearing in the nitrogen reconstruction. At V b = 10 mV and with atomic resolution, we can see that those bright spots corresponding to defects in the nitride lattice (see insets of Figure 3). The exact nature those defects is unknown, but they were observed only in the disordered phase. We suggest that they are Au atoms that are incorporated into the copper-nitride layer as substitutions of Cu atoms. In the L1 2 phase, every other layer in the (100) direction consists exclusively of Cu atoms; after saturation with nitrogen, the surface is terminated on a Cu-only layer [13]. The defect distribution gives us an indication on the formation and merging of islands. On round islands, the defects are mostly around the edges, where during growth new nitrogen joins the island and where the c(2 × 2) reconstruction is therefore not completed. Due to Brownian movement, the islands diffuse on the surface and will eventually collide. This process of coalescence is visible in Figure 3, where two islands were frozen in the process of merging. Longer annealing time or higher temperature would allow the island to properly merge and adopt a round shape. This establishes a clear relation between the elongation of the islands and the coalescence between multiple islands. Raising the annealing temperature allows for faster dynamics, accelerating the island merging process which consequently leads to larger and rounder islands. Figure 4a shows a ∼145 000 nm 2 island observed on the disordered crystal. The total area of this island is an improvement of three orders of magnitude respect to the maximum area of a nitrogen islands on Cu(100) [12]. However, in the inset of Figure 4a, taken at higher bias, we can observe a high density of defects on the nitrogen reconstruction evenly distributed along the island, suggesting a higher Au-Cu substitution at the surface at elevated growth temperatures. Nonethe- less, we successfully evaporated and manipulated Fe atoms and on this island (white dots), and were able to engineer well-behaved spin structures (see Figure 6). The area surrounding this island presents an irregular topography, highlighted in Figure 4c-e, which we will denote as percolation regions. We observe this kind of behaviour for the preparations we performed at highest temperatures. They consist of a square pattern broken up by irregular channels connecting larger islands, as well as clean patches of exposed Cu 3 Au surface. 
We have seen the percolation region to be unstable for V b > 4 V both while scanning (see Figure 4e, f) and during spectroscopy (see Figure 5). We note that on the same sample preparation, it is possible to observe areas in the percolation regime and areas with regular nitride islands by macroscopically displacing the STM tip accros the sample surface, indicating that various phases can coexist on a single crystal. By starting the annealing at a higher temperature and gradually lowering the temperature, we are able to create round large islands with smooth nitrogen reconstruction, where the defects are mostly on the edges. Figure 5 shows constant current dI/dV spectroscopy measurements on both regular and percolated areas. As seen in Figure 5b, the nitride islands in the regular region behave analogously to those reported for Cu(100) [29]. In the percolated region (Figure 5c,d), three distinct phases are observed: two that behave similarly to the regular region (red, green) and the phase with the square pattern (black), where the spectroscopy is mostly featureless (apart of its instability). The nitrogen islands on Cu 3 Au(100) are suitable for adatom manipulation. We assembled many structures of Fe adatoms -from dimers to longer chains and blocks. Examples of such successfully assembled structure can be seen in Figure 6a. In Figure 6b we show spectroscopy measurements on the three atoms of a Fe trimer, which are quantitatively the same as for a trimer assembled on nitride on Cu(100) [30]. Atomic manipulation is performed vertically, by moving an atom from the surface to the tip and subsequently form the tip to the surface on the desired position [3]. Conclusion We have studied the growth of c(2 × 2) nitride islands on the Cu 3 Au(100) crystal surface, which results in island sizes that are much larger than on the well-studied Cu(100) surface. When the crystal is prepared in the ordered phase, we observe mostly rectangular nitride islands, which increase in size with temperature. On the disordered phase we see a similar relation between annealing temperature and island size, but in this case the islands are round, indicating that effects of strain due to lattice mismatch have diminished. Measurements at higher voltages reveal defects, the distribution of which gives information about the coalescence of islands during growth. The nitride islands on Cu 3 Au(100) are found to be equally suitable for vertical manipulation of magnetic adatoms as their counterparts on Cu(100).
Physical, Chemical, Mechanical, and Biological Properties of Four Different Commercial Root-End Filling Materials: A Comparative Study Commercial mineral trioxide aggregate (MTA) materials such as Endocem MTA (EC), Dia-Root Bio MTA (DR), RetroMTA (RM), and ProRoot MTA (PR) are increasingly used as root-end filling materials. The aim of this study was to assess and compare the physicochemical and mechanical properties and cytotoxicity of these MTAs. The film thicknesses of EC and DR were considerably less than that of PR; however, RM’s film thickness was greater than that of PR. In addition, the setting times of EC, DR, and RM were shorter than that of PR (p < 0.05). The solubility was not significantly different among all groups. The three relatively new MTA groups (EC, DR, and RM) exhibited a significant difference in pH variation and calcium ion release relative to the PR group (p < 0.05). The radiopacity of the three new MTAs was considerably less than that of PR. The mechanical strength of RM was not significantly different from that of PR (p > 0.05); however, the EC and DR groups were not as strong as PR (p < 0.05). All MTA groups revealed cytocompatibility. In conclusion, the results of this study confirmed that EC, RM, DR, and PR exhibit clinically acceptable physicochemical and mechanical properties and cell cytotoxicity. Introduction Calcium silicate-based cements (CSCs) have various indications for use in endodontics, and their clinical applications have increased over the years. Mineral trioxide aggregate (MTA) is a calcium silicate-based material commonly considered ideal for endodontic treatment due to its excellent biological and physicochemical properties [1][2][3][4]. The first commercially available MTA, Pro Root MTA (Dentsply, Tulsa, OK, USA), is composed of Portland cement and bismuth oxide [5,6]. MTA is recommended for a number of clinical applications in endodontic treatment, such as pulp capping, pulpotomy, apexification, apicogenesis, apical barrier, repair of root perforations, formation in teeth with necrotic pulps and open apexes, root-end filling, and orthograde root canal filling [1,4]. An ideal endodontic repair material should be biocompatible, easy to handle, insoluble in body fluids, economical, and dimensionally stable for long-term clinical success [1,4]. MTA has several advantages in terms of biocompatibility, bioactivity, sealing ability, and dimensional stability [7,8]. Conventional MTA, ProRoot MTA (PR), has supplanted other endodontic materials because of its superior physico-chemical and biological properties that are due to its composition of fine hydrophilic powders of tricalcium silicate, tricalcium aluminate, tricalcium oxide, and other oxides [1,4,9]. Although it has a variety of favorable properties, conventional MTA (PR) has been reported to have several drawbacks in clinical settings because of Film Thickness, Setting Time, and Solubility The film thickness, setting time and solubility tests of the materials were measured based on the International Organization for Standardization (ISO) 6876 standard methods. To investigate the film thickness, two flat glass plates with 25 mm square (contact surface area of approximately 625 ± 50 mm 2 ) were combined to measure the thickness of the two glass plates in contact. Three minutes after starting the mixing, a load of 150 N was applied vertically on the upper plate. 
Ten minutes after the start of mixing, the thickness to an accuracy of 1 µm of the space between the two glass plates that was filled with experimental material was measured using a digital caliper (Mitutoyo Model CD-15CPX; Mitutoyo Co., Kawasaki, Japan). The test was repeated 3 times for each group. To analyze the setting time, the initial and final setting times were measured by evaluating the absence of indentations caused by Gillmore needles. The mixed experimental material was filled into a mold with internal diameter of 20 mm and height of 2 mm. Before the test, all the apparatus was conditioned for 24 h under 100% relative humidity at 37 ± 1 • C. The setting time test was performed under 100% relative humidity at a temperature of 37 ± 1 • C using the Gillmore needle. Each of the two indenters with flat ends and a mass of 100 ± 0.5 g (initial setting time) or 453.6 g (final setting time) were loaded vertically onto the top surface of the specimens. The test was repeated 3 times for each group. To measure the solubility, the mixed experimental material was placed in a mold with 20 mm diameter and 2 mm height, and the excess was removed. The filled mold was placed under 100% relative humidity at 37 ± 1 • C for 7 d. For each tested experimental material, two specimens were used (total 6, n = 3). The initial mass of the two specimens (m 0 ) and the container (M 0 ) were measured to the nearest 0.001 g by an analytical balance (XS105, Mettlertoledo AG, Greifensee, Switzerland). The two specimens were immersed in 50 mL distilled water and placed in a water bath maintained at 37 ± 1 • C for 24 h. After 24 h, the specimen was removed from the container, washed with distilled water, and then placed in an oven at 80 ± 2 • C for drying. Subsequently, the desiccator was cooled and weighed to determine the final mass (M 1 ). The final mass of each specimen was deter-mined, and the loss of mass was calculated by the following equation: Radiopacity Radiopacity evaluation of the set materials was performed using the ISO 6876 and 13116. Each mixed MTA material was filled into a mold 10.0 ± 1.0 mm in diameter and 1.0 ± 0.1 mm height. To obtain the radiographic images, both specimens and an aluminum step wedge were placed on a digital sensor and exposed to an X-ray unit (Carestream CS7600, Siemens, Munich, Germany) at 65 ± 5 kV and 10 mA with a 300 ± 100 mm focus-film distance. The grey pixel values of each specimen were determined using the Photoshop program (Adobe, San Jose, CA, USA), and the equivalent radiopacity of the cement sample was calculated in mm of aluminum (Al mm). Compressive Strength at 7 Days Each MTA materials was mixed and filled into a mold 4 mm in diameter and 8 mm in height. Seven cylindrical samples for each group were prepared. The specimens were incubated under 100% relative humidity at a temperature of 37 ± 1 • C. The specimens for compressive strength were ground with a wet 600 grit 7 d after preparation. A computercontrolled universal testing machine (Model 3366; Instron ® , Norwood, MA, USA) was used to compress the specimens. The compressive strength was measured at a crosshead speed of 0.25 mm/min. The maximum load of the compressive strength was recorded and calculated in MPa as: where CS is the compressive strength, p is the maximum force applied in Newtons (N), and D is the mean diameter of the specimen in millimeters (mm). 
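The two displays referred to in this subsection (the mass-loss equation for solubility and the compressive-strength equation) presumably correspond to the percentage mass loss relative to the initial specimen mass and to the standard cylinder relation CS = 4P/(πD²). The sketch below states both as assumptions based on the surrounding definitions; the numbers are placeholders, not study data.

```python
# Sketch of the two calculations described above; the exact expressions used in
# the study are assumptions here.
from math import pi

def solubility_percent(initial_mass_g, final_mass_g):
    """Mass loss of the specimen as a percentage of its initial mass."""
    return (initial_mass_g - final_mass_g) / initial_mass_g * 100.0

def compressive_strength_mpa(max_load_n, diameter_mm):
    """Compressive strength of a cylindrical specimen in MPa (N/mm^2): 4P/(pi*D^2)."""
    return 4.0 * max_load_n / (pi * diameter_mm ** 2)

# hypothetical numbers for illustration only
print(solubility_percent(0.512, 0.498))       # ~2.7 % mass loss
print(compressive_strength_mpa(790.0, 4.0))   # ~62.9 MPa for a 4 mm diameter cylinder
```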
pH, Calcium Ion Release, and Bioactivity To analyze the pH variation of the soaking solution and calcium ion release, each MTA material was prepared and filled into a mold with an internal diameter of 10 mm and a height of 1 mm. After filling, the specimen was stored at 37 ± 1 • C for 24 h. The specimen was separated from the mold and was then immersed in 10 mL of Hank's balanced salt solution (HBSS; H6648, Sigma Aldrich, St. Louis, MO, USA) [24]. A pH meter (Orion 4 Star, Thermo Fisher Scientific Inc., Singapore) calibrated using buffer solutions of pH 4.01, 7.00, and 10.01 was used. The pH variation of the HBSS-immersed specimen was measured at 3 h, 6 h, 24 h, 72 h, and 168 h. The same solutions used to test the pH variation were used to test for calcium ion release. After 168 h, to analyze calcium ion release from the specimens, the HBSS was filtered with a 0.22-µm syringe filter (DISMIC 25CS, Advantec, Osaka, Japan) and subjected to inductively coupled plasma optical emission spectrometry (ICP-OES, Optima 8300, PerkinElmer, Waltham, MA, USA). The measurements of the pH and calcium ion release were repeated 3 times, and mean and standard deviations were used. The morphology of the specimens before and after immersion in HBSS solution was analyzed by field emission scanning electron microscopy (FE-SEM, MERLIN, Carl Zeiss, Oberkochen, Germany) after ion sputtering (Leica EM ACE600) to coat the specimen with platinum. The specimen of each MTA material was formed under aseptic conditions in a sterile cylindrical mold 5 mm in diameter and 2 mm high and sterilized using ultraviolet irradiation (UV) for 30 min before storage in an incubator at 37 ± 1 • C for 24 h to achieve complete setting. The ratio of material surface area to medium volume was set at approximately 3 cm 2 /mL in accordance with the ISO 10993-5 and 12 [25,26]. The extraction medium was filtered through a 0.22 µm syringe filter, and three concentrations (50%, 25%, and 12.5%) were prepared and applied to the cells. At 24 h, the cytotoxicity was determined using a 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT; Sigma-Aldrich, St. Louis, MO, USA) assay. MTT solution was added to the cells and incubated at 37 • C for 2 h in the dark. The MTT solution was then removed, and 100 µL of dimethyl sulfoxide (0231, VWR Life Science, Radnor, PA, USA) was added to each well. The optical density (OD) at 570 nm was measured using a plate reader (Epoch, BioTek, Winooski, VT, USA). The experiments were performed in triplicate. Statistical Analysis The results of film thickness, setting time, solubility, radiopacity, compressive strength, pH variation, calcium ion release, and cell viability were analyzed using the SPSS 25 software program (IBM Corp., Armonk, NY, USA). To calculate the mean and standard deviation (SD), descriptive statistics was conducted. In addition, to analyze the significant difference among the MTA groups, one-way analysis of variance (ANOVA) and Tukey's honest significant difference (HSD) test were performed. p-vales less than 0.05 were considered statistically significant. Table 2 shows the means, standard deviations, and statistical comparisons of the film thickness, setting time, and solubility tests of the commercial materials studied. The mean film thickness values of EC, DR, RM, and PR were 0.28, 0.26, 0.96, and 0.58 mm, respectively. RM had the thickest film, and EC and DR had the thinnest films among the tested groups (p < 0.05). 
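The group comparisons described in the Statistical Analysis subsection (one-way ANOVA followed by Tukey's HSD) were performed in SPSS; purely as an illustration, an equivalent analysis in Python could look like the following, with invented film-thickness values standing in for the real measurements.

```python
# Illustrative re-creation (not the study's SPSS workflow) of one-way ANOVA
# followed by Tukey's HSD on placeholder film-thickness data.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

ec = np.array([0.29, 0.27, 0.28])   # film thickness (mm), hypothetical
dr = np.array([0.25, 0.27, 0.26])
rm = np.array([0.95, 0.98, 0.95])
pr = np.array([0.57, 0.60, 0.57])

f_stat, p_value = f_oneway(ec, dr, rm, pr)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

values = np.concatenate([ec, dr, rm, pr])
groups = ["EC"] * 3 + ["DR"] * 3 + ["RM"] * 3 + ["PR"] * 3
print(pairwise_tukeyhsd(values, groups, alpha=0.05))   # pairwise comparisons
```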
Film Thickness, Setting Time, and Solubility Regarding initial setting times, RM had the shortest initial setting time, and DR had the longest setting time (p < 0.05). Additionally, RM had the shortest final setting time, and PR had the longest final setting time (p < 0.05). RM presented the lowest mean final setting time values among the materials tested (p < 0.05), followed by EC, DR, and PR (p < 0.05). For the results of the solubility test, no significant differences were observed among the evaluated materials (p > 0.05). EC showed the highest value over 3% (8.11%). pH Variation, Calcium Ion Release, and Bioactivity The means, standard deviations, and statistical comparisons for pH and calcium ion release (mg/L) are shown in Table 3. The pH values measured for DR were slightly higher at all time points. PR had a lower pH value than the other groups during the initial period (3, 6, and 24 h). After 72 h, all of the materials had similar pH values, except for EC, which showed lower pH values (p < 0.05). DR and RM had the highest pH, followed by PR and EC, at 168 h (p < 0.05). Only RM showed a statistically significant difference in relation to the interaction between the storage solution and materials at all times (p < 0.05). There was no significant difference between the pH values obtained for all groups at 168 h immersion time (p > 0.05). With regard to the release of calcium ions, all materials released considerable amounts at 7 days. Additionally, the results of the ICP-OES analysis showed that RM and PR had significantly more calcium release than EC and DR (p < 0.05). The morphology of the surfaces formed after the pH test of each of the samples can be assessed using the SEM images presented in Figure 1. SEM analysis revealed the presence of precipitates with various morphologies (Figure 1). All specimens had prismatic, hexagonal, cubical, needle-like, globular-like, petal-like, and scale-like crystalline precipitates on their surface (Figure 1A-D), which were not revealed in specimens that had not been immersed in HBSS (Figure 1E-H). Radiopacity The results for the radiopacity values are presented in Figure 2. All MTA groups achieved the minimum required radiopacity value of 3 mm of Al, as recommended by the ISO 6876 standard. PR showed the highest radiopacity values among the tested materials (p < 0.05), equivalent to 4.97 Al mm. The EC, DR, and RM groups were not significantly different among the tested MTA groups regarding their radiopacity (p > 0.05). The radiopacity of EC, DR, and RM were equivalent to 4.06, 3.88, and 3.84 Al mm, respectively.
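The conversion from mean grey value to equivalent aluminium thickness described in the radiopacity method can be pictured with a small sketch; the step-wedge grey values below are invented placeholders, and the linear interpolation stands in for the Photoshop-based measurement actually used in the study.

```python
# Sketch (assumptions only) of mapping a specimen grey value onto the aluminium
# step wedge to obtain its equivalent radiopacity in mm Al.
import numpy as np

al_steps_mm = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)       # wedge steps
wedge_grey = np.array([58, 82, 104, 123, 140, 155, 168, 179], dtype=float)  # placeholder greys

def equivalent_al_mm(specimen_grey: float) -> float:
    """Linear interpolation of a specimen grey value onto the Al step wedge."""
    return float(np.interp(specimen_grey, wedge_grey, al_steps_mm))

print(equivalent_al_mm(133.0))   # ~4.6 mm Al for this fictitious wedge
```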
Figure 2. The radiopacity of the specimens was expressed as the equivalent thickness of aluminum (mm Al), and radiographic images of the specimens were obtained. The same lowercase letters indicate statistically significant differences (p < 0.05). Compressive Strength at 7 Days After 7 d of setting, the specimens were tested for compressive strength (MPa). The results for the mechanical properties of the specimens are presented in Table 2. Overall, the compressive strength values of PR (mean = 62.70 ± 15.92 MPa) and RM (mean = 77.48 ± 22.49 MPa) were significantly greater than those of EC (mean = 18.51 ± 9.18 MPa, p < 0.05) and DR (mean = 23.45 ± 9.65 MPa, p < 0.05). PR was not significantly different in compressive strength from RM (p > 0.05), and EC was not significantly different in compressive strength from DR (p > 0.05). Cell Cytotoxicity The cell cytotoxicity of the extract from four MTA specimens at different extract concentrations is shown in Figure 3. The cells cultured on MTA eluates for 24 h were measured by MTT assays, considering cells cultured in the absence of the specimen extract as a blank control. When all specimen extract concentrations were diluted to 50%, an effect on fibroblast cell cytotoxicity was detected (below 70%), and when the extract concentration was diluted to 25% or lower, no effect on cell viability was shown (above 70%). EC eluents, diluted 50%, exhibited higher cell viability than the other MTA groups (p < 0.05). The viability of cells treated with EC, RM, and PR was similar in the 25 and 12.5% diluted extracts (p > 0.05). The viability of cells treated with DR was significantly lower than those treated with PR in 25 and 12.5% diluted extracts (p < 0.05), whereas there was no significant difference in cell viability compared with EC and RM (p > 0.05). Additionally, for DR, cells incubated in 12.5% diluted extract had significantly lower cell viability than cells incubated at the same extract concentrations of the other groups (p < 0.05). Cell viability was significantly affected in the presence of a dilution factor between 50% and 25% (p < 0.05); however, there was no significant difference between 25% and 12.5% (p < 0.05). For all MTA groups, significant differences were detected between the 50% and 25% dilutions (p < 0.05). As expected, extract dilution in medium decreased MTA cytotoxicity.
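For orientation, a minimal sketch of the relative-viability calculation implied by the MTT read-out (OD at 570 nm of treated wells relative to untreated control cells, judged against the 70% threshold of ISO 10993-5); the OD values are hypothetical and the exact normalization used in the study is assumed rather than quoted.

```python
# Sketch of relative cell viability from MTT optical densities (placeholder data).
import numpy as np

od_control = np.array([0.92, 0.95, 0.90])      # cells cultured without extract
od_extract_50 = np.array([0.55, 0.58, 0.52])   # e.g. 50 % diluted eluate

def relative_viability_percent(od_treated, od_untreated) -> float:
    return float(np.mean(od_treated) / np.mean(od_untreated) * 100.0)

viability = relative_viability_percent(od_extract_50, od_control)
verdict = "cytotoxic" if viability < 70 else "non-cytotoxic"
print(f"viability = {viability:.1f} %  ->  {verdict}")
```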
Discussion When nonsurgical endodontic treatment fails or cannot be performed, surgical root canal treatment should be conducted. This surgical procedure includes the placement of a retrograde filling material in close contact with the peri-radicular tissue. Therefore, the physical and chemical properties as well as the biocompatibility of the retrograde filling material are very important for the success of apical surgery [26]. Generally, MTA is considered the gold standard material in clinical applications. Several previous studies found that MTA as a root-end filling material had excellent physicochemical properties and demonstrated its supremacy over other commonly used materials [26][27][28]. Recently, several new MTA-based endodontic treatment materials were introduced to improve clinical practice [9,13,17,[29][30][31]. We evaluated the physicochemical and mechanical properties and cell cytotoxicity of three new commercial MTAs in comparison with conventional MTA (PR). The physical properties of MTA, such as setting time, film thickness, and solubility, strongly affect the material's clinical performance. In particular, these properties are important clinical factors affecting their sealing ability. For instance, the thicker the film, the lower the chance of material penetration into the accessory root canal system [32]. Several previous works reported that a thinner layer of sealer positively affected the sealing ability of the root canal filling [33][34][35]. In this study, the results showed that all MTA groups had a thinner film than PR (p < 0.05), except for RM (p > 0.05). A long setting time of materials provides an adequate working time when performing surgical treatment, such as retrograde filling or perforation repairs. However, in certain clinical situations, such as apexification and particularly apical surgery, unset material may be washed out by body fluid and/or blood in the surgical field, which may lead to treatment failure and cytotoxicity [36][37][38][39]. A fast setting time for materials ensures that the MTA has the least amount of interaction time with the contaminants present in the oral cavity, making it easier to place a second restorative material on top of the MTA [36,40,41]. The MTA setting time could be a factor that is directly related to surgical root canal treatment success [39]. Hence, in some clinical conditions, an accelerated setting time is required to avoid dissolution of the materials under oral conditions [36]. The results of this study confirmed that the final setting time of the three new MTAs was shorter than that of PR (p < 0.05). A previous study reported that the proper setting time is considered to be 10 and 15 min in clinical situations [36]. This result is in agreement with a previous setting time test and indicates that both EC and RM had the proper setting time [7,15,16,18]. The main advantage of RM over PR includes its reduced setting time because of the fast setting of its calcium silicate-based materials, which form calcium zirconia complexes [15,16]. The EC group sets quickly without the addition of a chemical accelerator because it contains fine-particle pozzolan cement [6,42].
Solubility is an acceptable property for endodontic treatment materials, since it allows the release of ions. However, it is important that excess solubilization of the material does not occur [43]. Most endodontic failures occur as a result of the leakage of irritants from pathologically involved root canals into the periapical tissues [44]. Hence, endodontic and restorative materials should also have a long-term seal and prevent leakage from the oral cavity and/or the periapical tissue [45]. To provide long-term stability and prevent microleakage from the periapical tissue, root-end filling material must have a low solubility [44]. Thus, a low solubility in distilled water, as proposed in the Standard of the International Standard Organization (ISO) 6876, is required [46]. Following this test, the weight loss of each specimen is indicated as the percentage of the original mass, and the ideal recommendation is a value less than 3% [45,46]. Radiopacity is a very important characteristic required for pulp treatment materials [47]. Root-end filling and endodontic repair materials must have radiopacity to allow for evaluation of the quality of the filling for patient safety. A radiopaque material is essential to identify the location of the material in the root canal and to allow for filling failures to be corrected before final restoration [48]. MTA with a radiopacity value lower than 3 mm Al is hardly distinguishable from dentine. Clinically acceptable values of radiopacity, i.e., higher than 3 mm Al, are mandatory for controlling the quality of root canal filling [43,49]. In the present study, all MTA groups achieved the minimum required radiopacity value of 3 mm of Al, as recommended by the ISO 6876 standard. When MTA contacts fluids, it rapidly releases calcium and hydroxyl ions and creates an alkaline pH on its external surface, leading to the nucleation and crystallization of apatite on the material's surface [50,51]. Numerous previous studies reported that MTA has the ability to form calcium phosphate apatite crystals on its surface after contact with phosphate-containing simulated body fluid solution [50][51][52]. Consequently, the deposition of calcium and phosphate apatite into voids and spaces between the dentin, root canal systems, and root filling material enables MTA to encourage regeneration and remineralization of adjacent hard tissues while also improving its sealing capacity [50,53,54]. Thus, apatite-forming ability may provide clinical advantages by improving the sealing via the deposition of apatite at the interface and inside the dentinal tubules of the root canal when MTA is used as a root canal filling material [13,50,54]. HBSS solution was used in the present study as a storage solution to simulate the clinical environment. In this study, all materials showed alkalinizing activity and the formation of crystalline apatite on the surface of the specimens. MTA is a hydraulic cement consisting of fine hydrophilic particles that gradually harden in a wet environment [11,55]. The compressive strength of MTA which was set for 28 days is considered an indicator of the progression of the hydration reaction and a reflection of the setting process. In this study, compressive strength of each sample was measured following 7 days of storage. The number of days of storage was determined in accordance to a draft of ISO 6876 that is currently under revision and in order to compare MTA following adequate hydration. 
Still, the study is limited as the optimal days of storage were not predetermined which would have resulted in a higher level of compressive strength. In accordance with clinical perspectives, a greater compressive strength of MTA is considered to be an important feature when this material is used as pulp capping or as a coronal restorative material when it is submitted to occlusal and mastication forces [56]. However, when MTA is used as a root-end filling material, where minimal forces are applied, a low compressive strength will not be a major clinical drawback [40,57]. Cell cytotoxicity of endodontic materials is of great concern because irritation of the surrounding tissue can affect periapical tissue regeneration [22,58,59]. In an in vitro study, after application in medium, MTA suffers a hydration reaction that results in the formation of calcium hydroxide and subsequent ionic dissociation into calcium and hydroxyl ions, which is responsible for an increase in pH value and an elevated calcium concentration in the cell medium [60]. In this study, the four commercial MTAs maintained a cell viability rate above 70% for all dilutions, except for the 50% diluted eluent. When comparing the cell cytotoxicity of the four commercial MTAs in this study, PR resulted in the highest cell viability, and DR was significantly more cytotoxic than the other two types (p < 0.05). This cytotoxic effect of DR may be attributed to the differences in the initial amount of various ions released from the materials. Additionally, another factor may be caused by the characteristics of the DR material itself, which can increase the pH value. According to the manufacturer, DR has a strong antibacterial effect with a high alkaline pH (above pH 12). However, the DR group achieved the minimum required recommendation of a 70% relative cell viability rate, as recommended by the ISO 10993-5 standard. In terms of the initial null hypotheses of this study, the hypothesis about their physicochemical properties was partially accepted. When the PR group was compared with the EC, DR, and RM groups, the results of this study showed statistically significant differences in film thickness, setting time, pH variation, calcium ion release, and radiopacity (p < 0.05), although no significant differences were found for the solubility (p > 0.05). However, their solubility was not significantly different from that of the PR group (p > 0.05). The second null hypothesis was partially rejected because the EC and DR groups revealed a significant difference in compressive strength (p < 0.05) when compared to the PR group; however, the RM group revealed no significant difference compared to PR (p > 0.05). Finally, when the three new MTAs were compared with PR, the results of this study showed a statistically significant difference in cell cytotoxicity for all dilutions (p < 0.05). Therefore, the third null hypothesis was rejected. This study has several limitations. This in vitro study could not sufficiently simulate the clinical situation, which involves a complex and variable biomechanical environment. Also, as stated earlier, some of the methods are based on ISO 6876 where clinical relevance may be different between conventional root canal filling material and MTA. Thus, additional studies are needed for long-term and simulated clinical situation evaluations while such limitations are also something that would be useful when considering revision of the current version of ISO 6876. 
Conclusions In conclusion, the present study confirmed that EC, RM, and DR exhibit clinically acceptable physicochemical and mechanical properties and cell cytotoxicity relative to PR. Therefore, we confirmed that the EC, RM, and DR have the physicochemical and biocompatible characteristics that could be alternatives to conventional MTA as a retrograde filling material.
Identifying Conserved Generic Aspergillus spp. Co-Expressed Gene Modules Associated with Germination Using Cross-Platform and Cross-Species Transcriptomics Aspergillus spp. is an opportunistic human pathogen that may cause a spectrum of pulmonary diseases. In order to establish infection, inhaled conidia must germinate, whereby they break dormancy, start to swell, and initiate a highly polarized growth process. To identify critical biological processes during germination, we performed a cross-platform, cross-species comparative analysis of germinating A. fumigatus and A. niger conidia using transcriptional data from published RNA-Seq and Affymetrix studies. A consensus co-expression network analysis identified four gene modules associated with stages of germination. These modules showed numerous shared biological processes between A. niger and A. fumigatus during conidial germination. Specifically, the turquoise module was enriched with secondary metabolism, the black module was highly enriched with protein synthesis, the darkgreen module was enriched with protein fate, and the blue module was highly enriched with polarized growth. More specifically, enriched functional categories identified in the blue module were vesicle formation, vesicular transport, tubulin dependent transport, actin-dependent transport, exocytosis, and endocytosis. Genes important for these biological processes showed similar expression patterns in A. fumigatus and A. niger, therefore, they could be potential antifungal targets. Through cross-platform, cross-species comparative analysis, we were able to identify biologically meaningful modules shared by A. fumigatus and A. niger, which underscores the potential of this approach. Introduction The genus Aspergillus consists of at least 450 species [1] that occur worldwide and are members of all habitats. They grow in soil, are associated with plants, and even colonize oceans. The genus coexists with other living organisms, and this association regularly develops into an (opportunistic) pathogenicity, for example Aspergillus sydowii on coral reefs and Aspergillus niger on onions [2,3]. One of the important means of distribution within the genus are single-celled survival structures, called conidia, that are released into the air. Conidia of fungal species in different genera, including Aspergillus, are globally distributed [4], and, due to their small size, can enter the lungs of animals, including humans [5]. These conidia contain a rodlet layer that effectively shields them from the human immune system, but rarely causes infections [6,7]. Nevertheless, with Aspergilli that can grow at hypoxic conditions and body temperatures, and with a person with immune deficiencies, a serious risk for infection develops. In the case of A. fumigatus, Orthology Inference Using Reciprocal Best Hits (RBH) Method To compare the transcriptomic profiles of A. fumigatus and A. niger, pairs of genes were identified in two different genomes using the RBH method. This method entails that the pairs of genes between two species are more similar to each other than to any other gene in the other genome [38]. NCBI's BLAST (version 2.10.1+) was first used to create two databases of the protein sequences of A. fumigatus af293 [39] and A. niger CBS513.88 [40]. The A. fumigatus annotated protein sequences were downloaded from Ensembl Fungi (available online: http://ftp.ebi.ac.uk/ensemblgenomes/pub/release-30/fungi/fasta/ aspergillus_fumigatus/, accessed on 06/10/2020 (Aspergillus_fumigatus.CADRE. 
30.pep.all.fa)). The A. niger annotated protein sequences were downloaded from the Aspergillus Genome Database (available online: http://www.aspgd.org/, accessed on 06/10/2020 (A_niger_CBS_513_88_orf_trans_all.20110819.fasta)). The specific command lines used to build the protein sequence databases of A. fumigatus and A. niger are presented in Table S1. A BLASTp all vs. all was performed using the A. fumigatus protein sequences as the query and the A. niger protein database as the subject, and vice versa. The command lines used for this query protein to subject protein comparison are presented in Table S1. The additional options for blastp were a final Smith-Waterman alignment and a maximum e-value threshold of 1 × 10⁻⁶ [41]. Additional requirements were a query coverage per subject of 60%, a minimum bit score of 80, and a minimum percent identity of 30. For selecting the RBH results, the query protein to subject protein comparisons were first sorted from lowest to highest e-values, then from highest to lowest bit scores. After sorting the results, the first hit for each query was therefore the best hit. Finally, each first hit in the first direction was compared with the first hit for each query in the opposite direction. Using the RBH method, we identified 6598 orthologous genes between A. fumigatus and A. niger. Data Integration and Exploratory Analysis The gene pairs were used to integrate the RNA-Seq dataset with the microarray dataset. The datasets of A. fumigatus and A. niger contained 19 and 15 samples, respectively. In the first step, the merge function from the base R package was used to combine both sets based on the identified gene pairs [42]. The integrated dataset was log transformed using the log1p function from the R base package, which computes log_e(1 + x) [42]. A principal component analysis was performed on the log-transformed data. The pca and biplot functions from the R package PCAtools were used to generate the principal components and corresponding plots [43]. Next, normalization of the integrated dataset was applied using the normalizeBetweenArrays function from the limma R package [44], as previously proposed by Castillo et al. [45]. The R package ggplot2 was used to plot the data before and after normalization [46]. Consensus Weighted Gene Co-Expression Network Analysis (consensusWGCNA) The integrated dataset was analyzed using a constructed consensus network. The network was built using the WGCNA R package [47,48]. The integrated dataset was put into a multi-set format suitable for consensus analysis. The trait data were matched with the expression samples for which they were measured. The corresponding traits were dormant (0 h), isotropic growth (2 h, 4 h), and polarized growth (6 h, 8 h). Construction of the weighted gene network entails the choice of a soft thresholding power β. The power β of 13 (R² of 0.82) was chosen based on the criterion of approximate scale-free topology [49]. The function blockwiseConsensusModules was used for network construction and consensus module detection. Other thresholds included a minimum module size of 30, a cut height for the merging of modules of 0.30 (modules whose eigengenes were correlated above 0.7 were merged), correlation option Pearson (corType), adjacency function option (networkType) signed hybrid, and a topological overlap option (TOMType) signed. The Pearson correlations between all genes were raised to the soft power β to obtain the network adjacencies.
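The construction described above maps directly onto a single call to blockwiseConsensusModules under the stated parameter choices. The following is a minimal sketch; the object names (exprFum, exprNig, i.e., the normalized, ortholog-matched sample-by-gene matrices) are illustrative assumptions and not the authors' actual scripts.
library(WGCNA)
# multi-set input: one entry per species, genes (columns) matched via the RBH ortholog pairs
multiExpr <- list(Afumigatus = list(data = exprFum),
                  Aniger     = list(data = exprNig))
checkSets(multiExpr)  # sanity check that both sets contain the same genes
net <- blockwiseConsensusModules(
  multiExpr,
  power              = 13,               # soft threshold from the scale-free topology criterion
  corType            = "pearson",
  networkType        = "signed hybrid",
  TOMType            = "signed",
  networkCalibration = "full quantile",  # scale individual TOMs before taking the consensus
  minModuleSize      = 30,
  mergeCutHeight     = 0.30,             # merge modules whose eigengenes correlate above 0.7
  verbose            = 3)
moduleColors <- labels2colors(net$colors)  # module assignment per gene
consensusMEs <- net$multiMEs               # module eigengenes, one set per species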
To minimize the effects of noise and spurious associations, the adjacency matrix was converted to a consensus topological overlap measure (TOM). For calculation of the consensusTOM, individual TOMs were scaled by full quantile normalization (networkCalibration = full quantile). The modules detected by the blockwiseConsensusModules function were assigned a random color. For each module, a module eigengene was calculated as the first principal component. The module eigengene could be regarded as the best representation of the gene expression patterns of that module. Next, trait data and module eigengenes were used to calculate the module-trait relationships by using the Pearson's correlation between the trait of interest and the module eigengene. To summarize the two sets into one (i.e., for detection of modules with similar correlation to the external traits), we used a conservative method: for each module-trait pair, we took the correlation that had the lower absolute value in the two sets if the two correlations had the same sign, and a zero relationship if the two correlations had opposite signs. Modules with an absolute correlation ≥ 0.70 and a p-value ≤ 0.01 were selected for further analyses. Functional Classification Consensus modules that were correlated with any of the growth phases were selected for functional enrichment analysis to understand the biological function of the modules. To analyze the corresponding gene lists, we used the online webtool FungiFun2 (version 2.2.8) and used the functional ontologies from the Functional Catalogue (FunCat) [50,51]. The default settings of the FungiFun2 webtool were used, except for the background; as the background, the 6598 identified orthologous genes were used. Orthology Inference, Data Integration, and Exploratory Analysis The transcriptomic data used in this comparative study were generated by two different transcriptional profiling platforms (i.e., Illumina NextSeq500 and Affymetrix A. niger Genome GeneChips). For identification of orthologous gene pairs between the two species, the RBH method was used, which resulted in 6598 orthologous gene pairs. Next, the identified orthologs were used to integrate the datasets from both RNA-Seq and microarray technologies. The first two principal components are plotted in Figure 1 to visualize the similarities and dissimilarities between the samples. The integrated dataset was log transformed using the log1p function to avoid the variance measure being dominated by highly expressed, highly variable genes [42,52]. Variation between the two species was explained by the first principal component, whereas variation between the different time points was explained by the second and third principal components. Variation in the microarray data between germinating A. niger conidia (2-8 h) was small, as those samples were clustered together on PC2. Only the 0 h samples of A. niger were substantially different from all other time points. Larger variations were observed between A. fumigatus RNA-Seq samples, with dissimilarities between the 2-4 h samples and 6-8 h samples. The 0 h samples were substantially different from the 6-8 h samples. The third principal component was plotted to better explore the variation between the samples of A. niger. Variation between A. niger samples was explained by PC3 rather than PC2, whereas variation between A. fumigatus samples was explained by PC2 and PC3.
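The integration and exploratory steps summarized above can be illustrated with a short R sketch. The input object names (rnaseqCounts, arrayIntensities, the ortholog-pair key column, and the sample ordering in the metadata) are illustrative assumptions, and the quantile method passed to normalizeBetweenArrays is the default for matrix input rather than a documented choice of the original study.
library(limma)     # normalizeBetweenArrays
library(PCAtools)  # pca(), biplot()
# assumed inputs: gene-by-sample tables for each species sharing an ortholog-pair key column
merged <- merge(rnaseqCounts, arrayIntensities, by = "orthologPair")
exprsMat <- as.matrix(merged[, setdiff(colnames(merged), "orthologPair")])
rownames(exprsMat) <- merged$orthologPair
exprsLog <- log1p(exprsMat)                 # log_e(1 + x), damps highly expressed genes
# exploratory PCA on the log-transformed data
meta <- data.frame(row.names = colnames(exprsLog),
                   species = c(rep("A. fumigatus", 19), rep("A. niger", 15)))
p <- pca(exprsLog, metadata = meta)
biplot(p, colby = "species")                # species separate along PC1
# joint normalization across platforms to align the dynamic ranges
exprsNorm <- normalizeBetweenArrays(exprsLog, method = "quantile")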
The raw data, i.e., RNA-Seq counts and microarray fluorescence intensities, are plotted in Figure 2A to show the difference of the dynamic range between the datasets. A larger dynamic range of the RNA-Seq samples was observed compared with microarray samples. Figure 2B shows the results of the joint normalization, where the dynamic range between the samples has been corrected. The outliers were left out of Figure 2 for better visualization of the data. Normalized data including outliers are plotted in Figure S1. Consensus Co-Expression Network Analysis To examine the transcriptomic similarities between germinating A. fumigatus and A. niger conidia, we constructed a consensus gene co-expression network.
Co-expression networks constructed from gene expression data suggest functional relationships between genes [49,53]. Consensus modules may contain shared biological pathways between the compared datasets. The consensusWGCNA detected 25 highly co-expressed gene modules that varied greatly in size (41-992 genes). Each module was labelled by a color, and, henceforth, we will refer to each module by its corresponding color. Next, we used the module eigengenes to relate the consensus modules to external sample information. An eigengene is the first principal component of that module, and may be regarded as a representative of the gene expression patterns in the corresponding module. The external trait information was matched with the expression samples for which they were measured. The defined traits were dormant (0 h), isotropic growth (2 h, 4 h), and polarized growth (6 h, 8 h). Each gene was assigned to a single module, but each module had two consensus module eigengenes. This was because each orthologous gene had a particular expression pattern in A. fumigatus and a different expression pattern in A. niger. To determine if any of the 25 modules were associated with the traits, we calculated the correlation of the module eigengenes with each trait for A. fumigatus and A. niger (Figure 3A,B). To identify the modules that were highly correlated to any of the traits in both species (consensus modules), the two sets were summarized into one: for each module-trait pair, we took the correlation that had the lower absolute value in the two sets if the two correlations had the same sign, and a zero relationship if the two correlations had opposite signs (Figure 3C). Only modules that had a significant correlation with an external trait are shown in Figure 3A-C. The turquoise and black modules were highly correlated to the dormant phase (0.89 and −0.72, respectively), midnightblue was correlated to isotropic growth (0.82), and the darkgreen and blue modules were highly correlated to polarized growth (0.73 and …, respectively). To retrieve the biological function of the highly correlated consensus modules, we performed a functional enrichment analysis using functional ontologies from FunCat [51].
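The conservative consensus rule used above to summarize the two sets of module-trait correlations (take the correlation with the lower absolute value when the two sets agree in sign, otherwise set it to zero) can be written compactly in R. The sketch below assumes corFum and corNig are modules-by-traits matrices of eigengene-trait correlations; the object names are chosen for illustration only.
consensusCor <- function(corFum, corNig) {
  out <- matrix(0, nrow = nrow(corFum), ncol = ncol(corFum),
                dimnames = dimnames(corFum))
  sameSign <- sign(corFum) == sign(corNig)
  pickFum  <- sameSign & (abs(corFum) <= abs(corNig))  # A. fumigatus value is the weaker one
  pickNig  <- sameSign & (abs(corNig) <  abs(corFum))  # A. niger value is the weaker one
  out[pickFum] <- corFum[pickFum]
  out[pickNig] <- corNig[pickNig]
  out  # stays zero wherever the two correlations disagree in sign
}
# e.g. corFum <- cor(MEsFum, traits); corNig <- cor(MEsNig, traits)
# consensus <- consensusCor(corFum, corNig)
# candidates <- which(abs(consensus) >= 0.70, arr.ind = TRUE)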
The consensus module midnightblue did not show any significant results (p > 0.05). The turquoise module was found to represent mostly secondary metabolism and fatty acid and carbohydrate metabolism. The black module was highly enriched with protein synthesis genes, the darkgreen module was enriched with ubiquitin-related genes, and the blue module was highly enriched in polarized growth genes. The detailed results are shown in Figures 4 and 5 and Table S2. The specific biological processes and molecular functions identified in each of the significant modules (i.e., turquoise, black, darkgreen, and blue) are described in the next section. Closely related co-expression modules may form a biologically meaningful meta-module. In Figure 6, the clustering dendrogram of consensus module eigengenes is plotted for identifying meta-modules. The meta-modules are further described in the next section. Turquoise The turquoise module contained 992 genes, and was the largest module detected by the consensusWGCNA. The gene expression patterns in this module showed that transcripts were high in dormant conidia, and then decreased during isotropic and polarized growth (Figure 7A). The module was mostly enriched with genes involved in metabolism (secondary metabolism, fatty acid and carbohydrate metabolism), but other FunCat categories were also enriched, such as transcription and cellular transport (Figure 6). The darkgrey module was closely related to the turquoise module, and contained 41 genes. However, the FunCat enrichment did not show any significant hits (p > 0.05). Black The black module contained 308 genes that were mostly involved in protein synthesis (12), but categories associated with energy (02) were also enriched (Figure 4). Highly enriched categories were electron transport and membrane-associated energy conversion (02.11), ribosome biogenesis (12.01), ribosomal biogenesis (12.01.01), aminoacyl-tRNA synthetases (12.10), protein binding (16.01), and electron transport (20.01.15). Conidial outgrowth involves a fermentative metabolism, followed by a switch to respiration [33]. Concomitant with respiratory metabolism during the breaking of dormancy is protein synthesis, which is one of the earliest measurable biochemical changes during germination [54]. Transcriptomic and proteomic analyses have shown that the breaking of dormancy is characterized by an immediate onset of protein synthesis [19,25,30]. The transition from dormant conidia to isotropically expanding conidia and eventually germ tube formation involves biosynthetic machineries for which protein synthesis is required [13]. The gene expression patterns in the module were slightly different between A. fumigatus and A. niger (Figure 7B). In A. fumigatus, transcripts increased after 2 h, then decreased slightly after 4 h.
This pattern was also observed during polarized growth; after 6 h, transcripts increased, then decreased slightly after 8 h. In A. niger, transcripts were low during the dormant phase, then increased and remained high during isotropic and polarized growth. These differences during isotropic and polarized growth between A. fumigatus and A. niger can also be seen in Figure 3, as the correlation scores are low. Darkgreen The darkgreen module contained 65 genes, and was the second smallest module detected by the consensusWGCNA. The gene expression pattern in this module increased during isotropic and polarized growth (Figure 7C). Functional enrichment showed that only two categories were enriched in this module (Figure 4). The categories were conjunction of sulfate (01.02.03.04) associated with metabolism (01) and modification by ubiquitin-related proteins (14.07.07) associated with protein fate (14). The magenta module was closely related to the darkgreen and blue modules. This module contained 271 genes, and the FunCat analysis showed only three enriched categories: proteasomal degradation (14.13.01.01) and protein processing (14.07.11) associated with protein fate (14), and degradation of leucine (01.01.11.04.02) associated with metabolism (01). Blue The blue module contained 770 genes, and was the second largest module detected by the consensusWGCNA. This module was highly enriched in genes involved in polarized growth, together with genes involved in cell cycle and DNA processing (10) (Figure 7D, Figure 5). Mitosis and Septum Formation Highly enriched categories were mitotic cell cycle and cell cycle control (10.03.01), mitotic cell cycle (10.03.01.01), M phase (10.03.01.01.11), cytokinesis (cell division)/septum formation and hydrolysis (10.03.03), and nuclear migration (10.03.04.09). In A. fumigatus, the first mitosis was completed in 22% of the cells before polarized growth, i.e., the formation of a germ tube, was initiated [27]. The number of nuclei increases during isotropic growth, and continues to increase during polarized growth [16]. Nuclear division is followed by the migration of nuclei into the elongating germ tube and septum formation. The septin aspC was found in the blue module, and plays a role in normal development and morphogenesis [55]. In ∆aspC strains, germ tubes emerged early, and multiple germ tubes, together with early branching, were observed. Onset of Polarization Highly enriched categories important for germ tube elongation were protein transport (20.01.10), vesicular transport (20.09.07), vesicle formation (20.09.07.25), cytoskeleton-dependent transport (20.09.14), cell growth/morphogenesis (40.01), cytoskeleton/structural proteins (42.04), intracellular transport vesicles (42.09), and budding, cell polarity, and filament formation (43.01.03.05). Polarized growth is characterized by the restriction of expansion of the cell wall on the swollen spore, which leads to a tubular outgrowth, the germ tube. Before this, a position on the plasma membrane has to be confined [22] for localized vesicle fusion and membrane extension. This will include orchestration of the cytoskeleton to traffic and deliver vesicles, which will lead to localized membrane expansion and cell wall deposition. This extension is the first appearance of germination, and the bulge will expand to a small tube, the germ tube, which extends by tip growth. During this stage, a septum is delineated at the base of the germ tube, and nuclei are transported into the growing cell.
Several studies have identified a marked increase in the growth speed of the germ tube, which will later branch. In many cases, a second germ tube is formed on the swollen spore. Hyphae that grow at a higher velocity possess a so-called vesicle supply center [56], also designated as the Spitzenkörper (SPK) [57,58]. This is a dynamic structure containing different types of vesicles and cytoskeletal elements that maintain polarized growth. At early germination, a similar organization is expected, but operating in a more diffuse way [59]. All of these processes were confirmed in the FunCat analysis, which showed the categories growth/morphogenesis (40.01) and directional cell growth (morphogenesis) (40.01.03) enriched in the blue module. Additionally, the categories cytoskeleton/structural proteins (42.04), actin cytoskeleton (42.04.03), microtubule cytoskeleton (42.04.05), and bud/growth tip (42.29) were enriched. Microtubules are primarily responsible for the transport of secretory vesicles to the location of localized expansion and, in fully growing hyphae, to the SPK, while actin filaments primarily control the organization of vesicles and facilitate transport/delivery to the plasma membrane [60]. Enriched FunCat categories associated with transport were protein transport (20.01.10), vesicular transport (20.09.07), vesicle formation (20.09.07.25), tubulin-dependent transport (20.09.14.01), actin-dependent transport (20.09.14.02), exocytosis (20.09.16.09.03), and receptor-mediated endocytosis (20.09.18.09.01), among others (Figure 7). Several genes encoding secretion-related GTPases and interacting proteins were present in the blue module, such as An01g04040/Afu1g04940, An01g06060/Afu1g02190, An08g03690/Afu1g11730, An14g00010/Afu4g04810, and An18g02490/Afu5g11900 [40] (Table 1). In A. nidulans, homologs of An14g00010/Afu4g04810 (RabD) and An01g06060/Afu1g02190 (RabE) were detected in the SPK [61]. Microtubule- and actin-dependent transport involves the trafficking and delivery of vesicles to the plasma membrane, which will lead to localized expansion of the cell membrane. Other proteins are also involved in the distribution of vesicles to their final destination. In S. cerevisiae, members of the Cdc42 complex are Cdc42, Cdc24, Bem1, Cla4, and Ste12. In the blue module, homologs of the Rho-type GTPase cdc42 (cftA) and its guanine nucleotide exchange factor (GEF) cdc24 (An04g05150) were present, together with the Rho-type GTPase racA (racA) (Table 1). These homologues have been studied in A. nidulans (modA and racA) and Neurospora crassa (CDC-42 and RAC-1) [62,63]. In A. nidulans, ModA (Cdc42) and RacA (Rac1) share an overlapping function required for polarity establishment. The double knockout ∆cdc42∆rac1 appeared to be synthetically lethal. Additionally, GEF Cdc42 was required for the establishment of hyphal polarity, and localized to hyphal tips [64]. In N. crassa, the spatial distribution of the two Rho-type GTPases Cdc42 and Rac1 changes during the various differentiation stages. Before the breakage of symmetry in conidia, the localization and localized activation of Cdc42 and its GEF Cdc24 occur. After emergence of the germ tube, Rac1 is recruited at the developing tip. Together, Cdc42 and Rac1 regulate the negative chemotropism displayed during germ tube formation. The SPK is vital for polarity maintenance during hyphal tip extension.
The polarity maintenance machinery consists of cytoskeleton components, such as microtubules and actin filaments, and several groups of proteins termed the Cdc42 complex, polarisome, and Arp2/3 complex. These complexes are located in the growing tip area close to the apical plasma membrane [66]. Microtubules regulate the position of proteins, such as cell end markers. The cell end marker teaR (An18g04780) was found in the blue module. In A. nidulans, TeaR is anchored to the plasma membrane, and directly interacts with TeaA, another cell end marker [67]. This interaction at the apical membrane is important for the recruitment of additional downstream components, including the formin SepA, which is involved in the polymerization of actin filaments for targeted vesicle transport [68]. Components of the polarisome act downstream of the Cdc42 complex, and are conserved from yeast to filamentous fungi [69]. Only one of three components of the A. niger polarisome was found in the blue module, spaA (Table 1). SpaA localizes exclusively at the hyphal tip and plays a role in polarity maintenance [70]. The Arp2/3 complex is another group of proteins involved in polarity maintenance, endocytosis, and actin polymerization. The complex includes Arp2, Arp3, Arc40, Arc35, Arc18, Arc19, and Arc15 in S. cerevisiae [71]. Several Arp2/3 homologs were identified in the blue module, such as Arp2 (An08g06400), Arc35 (An01g05510), Arc19 (An12g08380), and Arc18 (An16g01570) (Table 1). Vesicles transport cell wall-modifying enzymes, substrates, and the cell membrane required for expansion to the growing tip. The exocyst is a protein complex involved in vesicle docking and fusion to the plasma membrane, and was originally identified in the budding yeast S. cerevisiae [72,73]. This complex consists of eight proteins, Sec3, Sec5, Sec6, Sec8, Sec10, Sec15, Exo70, and Exo84, and interacts with the Rho-type GTPases Cdc42, Rho1, and Rho3, as well as with the Rab GTPase Sec4, which is present on the membrane surface of vesicles [72]. In the blue module, we identified homologs of Sec5 (An08g05570/Afu1g12790), Exo84 (An08g07370/Afu6g11370), Sec4 (An14g00010/Afu4g04810), and Rho1 (An18g05980/Afu6g06900) (Table 1). In N. crassa, a mutation in the Sec5 homolog resulted in swollen conidia and altered hyphal growth, indicating its role in polarity establishment and maintenance [74]. The Rab GTPase SrgA (Sec4) was involved in vesicle secretion and filamentous growth in A. niger [75,76]. Another group of proteins that facilitate vesicle docking and fusion to the plasma membrane are the soluble N-ethylmaleimide-sensitive fusion protein (NSF) attachment protein receptors (SNAREs). At the hyphal tip, SNAREs present on the target membrane (t-SNAREs) pair with SNAREs present on the vesicles (v-SNAREs) to mediate the fusion of membranes [77]. Several SNAREs and SNARE-interacting genes were identified in the blue module, such as An02g01580/Afu2g12870, An04g07020/Afu4g10040, An07g02170/Afu7g05735, An07g09960/Afu1g07420, and An15g01380/Afu6g04150 [40] (Table 1). Endocytosis is the reverse process, characterized by the formation of membrane vesicles that are invaginated and included in the vesicle transport routes. It occurs in germinating spores [78,79] at the onset of polarization. In the case of growing hyphae, endocytotic vesicles are internalized most strongly in a collar-like region behind the hyphal apex, and fuse with early endosomes [80], which participate in tip growth.
Discussion The process of germination of conidia involves the transition from a dormant, stress-resistant cell with low metabolic activity into a vegetatively active fungal hypha. In this study, the transcriptomic changes are studied throughout this transition. In this cross-platform, cross-species comparative analysis, we studied conidial germination of two Aspergillus species, A. niger and A. fumigatus, enabling us to integrate the transcriptional expression of two related species and two different techniques and providing biological insights into the germination of conidia. Firstly, to perform this cross-platform, cross-species comparative analysis, we used the following bioinformatic approach: (i) a selection of 6598 ortholog genes was necessary to integrate the two datasets, which encompassed nearly 50% of the A. niger genes and over 50% of the A. fumigatus genes. A comparison of 34 ascomycete genomes, including 19 Aspergillus species, showed that ~8500 genes were pan-fungal, which was inferred from MCL clustering of proteins [81]. For our analysis, the two datasets needed to be integrated one to one based on orthologous genes. Finding best hits using the RBH method involved sorting the results from lowest to highest e-values, then from highest to lowest bit scores. The first hit within the sorting would be the best hit. If the next best hit had the same bit score and e-value, there would be more than one best hit (co-orthologs). In our study, the co-orthologs were discarded, which may be the cause of the difference in identified orthologous genes, together with the different methods used in both studies. (ii) Normalization of the intensities was done, as both techniques have different expression values, i.e., RNA-Seq counts and microarray fluorescence intensities. (iii) Expression patterns during germination stages were compared. Presently, RNA-Seq has emerged as the technology of choice for gene expression profiling [82]. RNA-Seq is able to detect novel transcripts, map exon/intron boundaries (if full genomes are available), and reveal splice variants. Additionally, RNA-Seq provides more resolution to detect extreme expression values, such as genes with very low transcript counts and genes with extremely high transcript counts [83]. However, microarrays are reliable, and the variety of datasets publicly available offers an exceptional opportunity to perform cost-effective and insightful comparisons. Secondly, the cross-platform, cross-species analysis confirmed the occurrence of conserved, generic, and functionally important biological processes during germination, which are independent of a single technology. This is all the more interesting, as A. fumigatus belongs to subgenus Fumigati, section Fumigati, and A. niger to subgenus Circumdati, section Nigri [1]. A phylogenetic analysis showed that section Nigri is more closely related to subgenus Nidulantes than to Circumdati [84]. However, based on phenotypic and extrolite data and their phylogenetic analysis, Houbraken et al. [1] maintained section Nigri in subgenus Circumdati until more data supporting the analysis of Steenwyk et al. [84] become available. Additionally, experimental conditions of both studies were different, such as the pre-culture medium and germination medium. Nutritional environment during sporulation, as well as during germination, affects the rate of the breaking of dormancy and growth in A. fumigatus [85]. A.
fumigatus strains were cultivated on Sabouraud agar slants (dextrose 40 g/L, peptone 10 g/L, agar 20 g/L, pH = 5.6) for five days at 30 °C, and A. niger was cultivated on complete medium (CM) (1.5% agar, 6.0 g NaNO3, 1.5 g KH2PO4, 0.5 g KCl, 0.5 g MgSO4, 4.5 g D-glucose, 0.5% casamino acids, 1% yeast extract, and 200 µL trace elements per liter) for 12 days at 25 °C. RNA was extracted after growth in liquid RPMI 1640 (Gibco® Life Technologies) and liquid CM for A. fumigatus and A. niger, respectively. Morphological changes during germination, such as swelling and germ tube formation, were observed at similar time points, despite different pre-culture and germination conditions. The bioinformatic analysis was focused on identifying biological similarities associated with germination between A. fumigatus and A. niger. The different experimental setup, together with differences in RNA extraction and library preparation of the original studies, will doubtlessly cause biological differences. By integrating the expression data one on one based on orthologous gene pairs and constructing a consensus gene co-expression network, we excluded these biological differences and focused on identifying biological similarities. A consensusWGCNA was performed to examine the transcriptomic similarities between germinating A. fumigatus and A. niger conidia. Recently, numerous studies have been published using WGCNA to identify co-expressed genes related to an external trait in various fields [86][87][88][89][90][91]. In this study, we detected co-expressed consensus modules between A. fumigatus and A. niger and investigated the module relationships with the different morphological phases in germination. Five co-expression modules associated with either the dormant, isotropic, or polarized phase were identified. Genes within each module were considered to be related to each other in function, or could work cooperatively in specific molecular processes. Functional enrichment on the larger modules, such as the black and blue modules, adequately showed clustering of genes involved in similar or identical pathways. However, the midnightblue module (130 genes) did not show enrichment of a functional category, which could be because of the poorly characterized A. fumigatus and A. niger genomes. The module contained ~40 genes without annotation, and almost all other annotations were putative. Additionally, in both studies, RNA was extracted from a population of germinating conidia. Conidial germination was relatively synchronous when appropriate exogenous nutrients were present [16]. However, in the transition from dormant conidia to swelling to germ tube elongation, conidial swelling is the middle phase, and therefore difficult to characterize when analyzing the transcriptome of a population of conidia. The gene expression pattern in the black module, associated with protein synthesis, showed an increase after 2 h. Lamarre et al. showed similar results in A. fumigatus; 30 min after the breaking of dormancy, transcripts were identified from protein synthesis, carbohydrate metabolism, protein complex assembly, and RNA-binding protein categories [25]. This was recently confirmed by Danion et al., where germination was induced by the presence of nutrients, such as carbon and nitrogen, without de novo RNA transcription [16]. The proteome profile of A. flavus at the conidial germination stage resulted in overrepresented protein synthesis categories [26].
The gene expression pattern in the blue module, associated with polarized growth, showed a strong increase after 6 h. In several Aspergillus spp., germ tube formation was observed after 6-8 h of germination [18,[23][24][25][26]. Our co-expression network analysis using only A. fumigatus RNA-Seq data also identified a module associated with the onset of polarized growth. However, the consensus co-expression analysis performed in this study identified detailed functional categories associated with tip growth and formation of an SPK [65,[92][93][94]. Oda et al. found that transcript levels were steady for genes encoding the Cdc42 complex and polarisome during the switch from isotropic to polarized growth, such as modA (An02g14200), cdc24 (An04g05150), and spaA (An07g08290) [24], whereas our data showed a strong increase of these transcripts in the blue module in both A. fumigatus and A. niger. Besides the modules significantly correlated to a trait, closely related modules, termed meta-modules, were analyzed, as these modules may be biologically similar [53]. The higher-level cluster of the black and brown modules showed a relationship between the gene transcripts in each module. The black module was highly enriched with protein synthesis genes, and, similarly, the brown module was highly enriched with transcription and protein synthesis genes. Thirdly, these insights enable us to define processes that might be used as therapeutic targets to suppress fungal development in the lungs. Antifungals targeting germination processes would only work as prophylaxis, as only established infections consisting of hyphae will be diagnosed and treated. Conidia swelling inside the lungs need to remodel their cell wall; therefore, the enzymatic activity of glycoside hydrolases and glycoside transferases may be potential targets for prophylaxis. Before germ tube formation, a position on the plasma membrane has to be confined for localized vesicle fusion and membrane extension. This ergosterol-enriched cap in germinating conidia at the site of germ tube formation could be an attractive target, as the prevention of this ergosterol patch may disturb the localized transport to the site of polarization. Processes important for germ tube formation and tip elongation, such as vesicle transport and exocytosis/endocytosis, might also be feasible targets for antifungal development. Conclusions In this study, we demonstrated the possibility of comparative analysis between Aspergillus spp. using two different transcriptional profiling platforms, which introduces the opportunity to perform cost-effective, insightful comparisons. The consensus gene co-expression network detected modules associated with transcription, protein synthesis, and polarized growth. Through cross-platform, cross-species comparative analysis, we were able to identify biologically meaningful modules shared by A. fumigatus and A. niger, which underscores the potential of this approach. Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/jof7040270/s1, Figure S1: Expression profiles of the A. fumigatus and A. niger datasets after normalization without outliers removed. Figure S2: Module-trait relationships. Figure S3: The correlation of the consensus modules with the external trait. Table S1: Command lines. Table S2: Functional enrichment analysis.
2021-04-26T05:14:58.855Z
2021-04-01T00:00:00.000
{ "year": 2021, "sha1": "e845eedb9ebd40a2724909944f7d74110bbc7560", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3390/jof7040270", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e845eedb9ebd40a2724909944f7d74110bbc7560", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
108640314
pes2o/s2orc
v3-fos-license
Community Environmental Mapping Using User–Friendly GIS:A Case Study in Muko Neighborhood, Amagasaki Abstract This study examines a method for producing a ″Community Environmental Map″ through public participation by using a Community Environmental Mapping Support System (CEMSS; user–friendly GIS). The authors developed a CEMSS and, as a case study, hosted a Community Environmental Mapping (CEM) Project employing the CEMSS. They describe the CEMSS and CEM project within the context of the following points: (1) Effectiveness of the CEM. (2) Effectiveness of the CEMSS in CEM. (3) Future challenges for improving the CEMSS. The findings revealed that participants were able to produce accurate maps using the CEMSS and that they were able to increase their knowledge of environmental design through CEM. For these reasons, the CEM Project was considered effective for community–scale spatial planning. Results from a questionnaire survey showed that CEMSS appears to be relatively easy for non–expert users of GIS to adopt, but that sufficient scope exists for improving the CEMSS. The potential for using CEMSS on a PDA with GPS in the field was considered particularly important as several participants found that they were unable to find their location on paper maps in areas with no obvious landmarks. Introduction Public participation is becoming increasingly important in the field of urban planning, particularly in Japan where it is possible to observe community-scale, public participation at the level of district planning. In addition, environmental considerations in planning are also becoming more important. For these reasons, it is important to consider the local environment during community-scale spatial planning initiatives through public participation. In order to increase the effectiveness of community-scale spatial planning initiatives that consider the environment, it is useful for stakeholders to conceptualize, based on personal experience, a birds-eye view of the local environment. The production of environmental maps through public participation is particularly effective in allowing participants to gain a clearer understanding of their local environment through the physical experience of the mapping process. Given these benefits, development of a method for producing "Community Environmental Maps" through public participation and the Community Environmental Mapping Support System (CEMSS; user-friendly GIS) may therefore be beneficial. As a first step, the authors developed a CEMSS and organized a Community Environmental Mapping (CEM) Project with public participation using the CEMSS. Here the authors present the findings of the CEMSS and the CEM project, and also discuss the following: (1) Effectiveness of the CEM (2) Effectiveness of the CEMSS in CEM (3) Future challenges for improving CEMSS In this study, a "Community Environmental Map" is defined as "a map that represents environmental elements, such as vegetation, fauna, water, air, and similar environmental attributes in the community", while "CEM" is defined as "The production of a Community Environmental Map through public participation". Previous Studies and This Study Conventionally, this kind of mapping has been conducted within the context of environmental education using paper maps (Hamaguchi, 1998). However, recent advances in GIS technology have meant that it is currently easier to use than before. 
Furthermore, GIS is well suited to CEM, because GIS can be used to produce and store numerous map layers (survey results) that can then be used for future planning. In addition, stakeholders can superimpose these layers in order to understand the relationships among a myriad of environmental elements. In the U.S. and Japan, several environmental mapping projects using GIS have been undertaken to date (Ludwig and Audet, 2000; Itoh and Ugawa, 2001). However, these studies were not undertaken within the context of urban planning or environmental design, but rather for education. GIS has been widely applied in the field of urban planning, particularly in western countries (Greene, 2000; Craig et al., 2002; Geertman and Stillwell, 2002), where "Participatory GIS" exercises have been used to collect and collate available local knowledge for planning purposes. On the other hand, CEM is used to facilitate a participant's understanding of the local environment for urban planning and environmental design. Interestingly, there are few actual examples of "Participatory GIS" in Asian countries. This may be due to differences in the planning cultures of western and Asian countries, and even within Asian countries, planning cultures differ from one another (Sanyal, 2005). Therefore, it is necessary to examine "Participatory GIS" within a Japanese context. In Japan, web-based mapping systems have been developed to map the locations of sites of interest in towns by citizens (Manabe, 2003). Generally, however, web-based GIS systems are difficult for average Japanese citizens and local communities to develop. Consequently, a CEMSS was developed as a standalone software program for use on a personal computer. Given that most of the stakeholders involved in community-scale planning initiatives in Japan are relatively old and not familiar with operating complicated software, the design of the CEMSS was kept simple and functionality was limited. While several environmental maps have been produced for urban planning (Miura et al., 2005), the separate map layers were not stored as GIS data. However, if such layers were stored in a GIS, they would be more useful and effective for planning. In the U.S., CEM-like projects have employed normal GIS applications, which are expensive and complicated to use (Knapp et al., 2003). However, in Japan, citizens do not have access to normal GIS, which is why a CEMSS (user-friendly GIS) was developed for this study. Fig.1. shows the study area, which consists of a part of the Muko neighborhood in Amagasaki City. The area is mainly residential, but also contains some agricultural land. The irrigation channels used for agriculture in this area also serve as habitats for numerous species of flora and fauna. Method for CEM The authors organized this CEM Project with public participation as a case study on Saturday, September 30, 2006. At the event, the participants focused on recording the occurrence and distribution of flora and fauna in the channels because channels are important environments for biota in this area. The project outline is as follows. Field Survey Methods The authors divided participants into three groups, each containing citizens. Similarly, the channels in the neighborhood were divided into three areas, each of which was assigned to a group. Since the focus of the CEM on this day was on the natural environment of the study area, groups carefully examined the biotic components of the channels to which they were assigned.
For two hours, each group walked along the channels and documented the occurrence of target species (Fig.2.). Target species (flora and fauna) consisted of the following: 1) Egeria densa (plant) 2) Potamogeton crispus (plant) 3) Potamogeton oxyphyllus (plant) 4) Calopteryx atrata (insect) 5) Other Damselflies (insect) 6) Zacco platypus (fish) The authors selected these target species from an ecological perspective (ecological health, biodiversity, and so on). Before the survey, they distributed photographs of target species (Fig.3.) and paper maps (scale: 1/1000) of the area among the groups. Upon sighting any of the target species, participants marked the corresponding point on a map. During the open discussion, each group viewed the integrated map on their CEMSS, using some of the mapping functions, and reported on their findings. The integrated map was also projected on screen, which enabled all of the participants to examine and discuss their findings and aspects related to future environmental designs for the area. CEMSS The CEMSS software developed by the authors and used in this study was based on a GIS that was developed using MapObjects Ver. 2.2 (ESRI Inc., Redlands), which is a set of GIS components, and Visual Basic .NET (Microsoft Corp., Redmond), a development language (Ralston, 2001). In addition, the GIS data created for the CEMSS can be used on the ArcGIS (ESRI Inc.) platform, which is popular among GIS specialists. Fig.5. shows the user interface of the system, which was designed to be as user-friendly as possible to facilitate operation by the average citizen. The functions of this system therefore consisted of the following: inputting points (… to Layer) for each species on a separate layer, among other functions. For this system, the authors used scanned and geo-referenced cadastral maps produced by local governments as base maps. Since all local governments are obliged to produce maps such as these and make them available to the public, maps can be obtained anywhere in Japan. This also means that this system can be used in other Japanese cities. In addition, the authors provided a brief introduction to the system. During the open discussion, participants projected this system on the screen. Fig.7. shows two of the resultant maps produced by participants. Each species was observed to have a unique distribution pattern. Discussions 4.1 Effectiveness of the CEM After the open discussion, questionnaires were administered to participants in the afternoon. When viewing the overlaid maps that were produced using the CEMSS during the open discussion, participants became aware of the fact that each species had a unique distribution pattern. The maps also revealed that C. atrata lives on E. densa or P. oxyphyllus and that they also require a large forest nearby. It became clear that if we want to live with C. atrata, we must ensure the survival of E. densa or P. oxyphyllus in the channels and maintain a large forest nearby. This is one example of how knowledge can be acquired by participants in a CEM project and illustrates its potential application to environmental design (Fig.10.). In addition, the open discussion was important for CEM, because participants were able to share knowledge through the discussion. Fig.11. shows the responses to Question 3. All participants felt that this kind of project is useful for future community planning initiatives. However, some participants identified the need for continuous surveys in order to collect sufficient data for planning.
For example, some participants mentioned that the flow and velocity of runoff in the channel will also affect C. atrata habitat and that this should be measured in subsequent projects; continuous CEM projects therefore seem to be necessary. Table 1. shows the responses to Question 4 ("What is a useful (or interesting) target for the next CEM project in this area?") that were organized into four categories (Ecosystem, Channel, Culture, and Artificial Environment). The results show that the potential application of this CEM is broad. There may therefore be a need for local communities to prioritize a set of targets for the CEM, based on the characteristics of the local environment. Effectiveness of the CEMSS in CEM Every group was able to input all of their field survey results into the CEMSS accurately and correctly. During the open discussion, by overlaying some of the layers in the CEMSS, participants were able to find that C. atrata require E. densa or P. oxyphyllus to live on, and need to have a large forest nearby. Participants were able to locate forested areas by viewing aerial photos on the CEMSS. Given this example, the CEMSS seems to be useful for CEM. In addition, "Visible/invisible control for the map layer" is important in the CEMSS, and aerial photos appeared to be an essential component of the CEMSS because they contain valuable qualitative information. After the open discussion, questionnaires related to the CEMSS were also distributed among participants. As before, eight valid responses were obtained (Citizens, 5; Students, 3). The questions were as follows: Question 5 Did you operate the CEMSS? Question 6 Was the operation of the CEMSS easy? (This question is intended only for participants who answered "Yes" to Question 5.) Fig.12. shows the responses to Question 5. We can see that, of the participants that operated the CEMSS, two were citizens. All CEMSS operators use personal computers in their daily lives, and from a practical viewpoint, for future CEM projects, every group should have one person that uses a personal computer daily. This should not be difficult because personal computers are very popular in Japan. Fig.13. shows responses to Question 6. All CEMSS operators, even those who were not experienced GIS users, felt that the CEMSS was very easy to use. Therefore, it appears that participants are able to use the CEMSS easily if they can use a personal computer. Future challenges for improving CEMSS After the open discussion, questionnaires related to subsequent challenges and necessary improvements to the CEMSS were also administered to participants. A total of 8 effective responses were obtained (Citizens, 5; Students, 3). The questions were as follows: … These functions appear to be necessary for the CEMSS. Especially important is the potential for using the CEMSS on a PDA with GPS in the field, as some participants stated that they were unable to ascertain their location on paper maps in areas where there were no landmarks. Such GPS technology already exists for PDAs and tablet PCs and has already been used in the field (Clegg et al., 2006). The next step, therefore, is for the authors to incorporate this into the CEMSS. In addition, to make the CEMSS available anywhere in Japan, the authors need to add functions that permit the addition of new layers, to increase the flexibility of the CEMSS.
Additional Case Study As an additional case study, the authors convened another CEM project using the CEMSS in the relatively rural neighborhood of Sugo in Takizawa Village in Iwate prefecture. The outline of the CEM is shown in Table 2. In this CEM project, participants who were not GIS specialists were also able to produce maps by using the CEMSS accurately and correctly (Fig.14.). As before, when viewing maps that had been overlaid and produced using the CEMSS during the open discussion, participants became aware of the fact that each species had unique distribution patterns. The maps also revealed that target species inhabited trees along the side of the road and a small stream. Participants therefore realized that coexisting with these species would require conservation of these roadside trees and the small stream environment. This exercise also demonstrated how knowledge can be acquired by participation in a CEM project and illustrates the potential application of the method to environmental design. In addition, this CEM project also revealed the following: (1) Participants who are not expert GIS users were able to produce informative maps using the CEMSS. (2) Participants gained new insights about the local environment. Consequently, CEM appears to be an effective method for increasing and transferring knowledge about the local environment among participants. (3) During the open discussion, the CEMSS was found to be useful for acquiring the knowledge required for environmental design. Conclusion The most significant outcomes of this study were: (1) The authors developed a CEMSS (user-friendly GIS) for the study areas. … Local communities may need to assess the priority of targets for CEM, depending on their local environment. (4) The CEMSS appeared to be easy for non-expert GIS users to use. Challenges for future work include the following: (1) The authors should increase the number of case studies and questionnaires given the relatively small sample size in this study. (The CEM procedure was the same as that employed in the Muko neighborhood; however, in Sugo, participants focused on all accessible areas, not just on channels as in the case of the Muko neighborhood.) (2) The authors should extend this project using public participation to collect environmental data that can be applied to ecosystems, local culture, river channels, and built environments in this study area. The effectiveness of CEM in the long-term planning process also needs to be assessed. (3) Several new functions need to be incorporated into the CEMSS. Specifically, the potential to use the CEMSS on a PDA with GPS in the field is important because some participants were unable to determine their location on the paper maps in the absence of landmarks. (4) To make the CEMSS available anywhere in Japan, the authors need to increase its capacity to flexibly add new layers.
2019-04-12T13:58:28.797Z
2007-11-01T00:00:00.000
{ "year": 2007, "sha1": "5c38ef7031a3deecbe75b838f93eba9b6c6adef8", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.3130/jaabe.6.363", "oa_status": "GOLD", "pdf_src": "TaylorAndFrancis", "pdf_hash": "73e921844ecedeaf46193fc999bfad81570fe72f", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Computer Science" ] }
257247227
pes2o/s2orc
v3-fos-license
Impact of NG-Test CTX-M MULTI Immunochromatographic Assay on Antimicrobial Management of Escherichia coli Bloodstream Infections Rapid detection of extended-spectrum-β-lactamase (ESBL) is of paramount importance to accelerate clinical decision-making, optimize antibiotic treatment, and implement adequate infection control measures. This study was aimed at assessing the impact of direct detection of CTX-M ESBL-producers on antimicrobial management of Escherichia coli bloodstream infections over a 2-year period. This study included all E. coli bloodstream infection (BSI) events that were serially processed through a rapid workflow, with communication to the clinicians of direct detection of CTX-M ESBL-producers, and a conventional culture-based workflow. Antimicrobial management was retrospectively analyzed to assess the contribution of the rapid test result. A total of 199 E. coli BSI events with a report of direct detection of CTX-M ESBL production results were included. Of these, 33.7% (n = 67) and 66.3% (n = 132) were reported as positive and negative CTX-M producers, respectively. Detection of CTX-M positive results induced more antibiotic therapy modifications (mainly towards carbapenem-containing regimens, p < 0.01), and antimicrobial susceptibility testing results of CTX-M ESBL-producing E. coli isolates induced more antibiotic escalations towards carbapenem-containing regimens (p < 0.01). Direct detection of CTX-M ESBL-producing E. coli resulted in a remarkable rate of antibiotic optimizations on the same day of blood culture processing. Based on the antibiotic management observed after antimicrobial susceptibility testing results became available, additional early optimizations in escalation could probably have been made if the rapid test data had been used. Detection of CTX-M negative results led to few therapeutic changes; this number could probably have been higher if epidemiological and clinical data had been integrated. Introduction In recent years, several rapid non-molecular tests for the detection of the main antibiotic resistance enzymes in Gram-negative bacteria have been developed and introduced into the routine of many laboratories [1][2][3][4][5][6][7][8][9][10][11][12]. They have also been favorably evaluated directly from positive blood cultures (BCs) with the purpose of providing results on the same day of sample processing, at least 24 h earlier than conventional susceptibility testing [11]. Rapid detection of the main β-lactamases is of paramount importance to accelerate clinical decision-making, optimize antibiotic treatment, and implement adequate infection control measures [13]. Additionally, rapid characterization of these enzymes can help to guide therapy, as the type of β-lactamase confers different resistance spectra to carbapenems and novel β-lactam/β-lactamase inhibitor combinations. Routine infectious disease bedside consultation on rapid susceptibility testing results, planned within antimicrobial stewardship programs, was reported to change antimicrobial treatment in more than 50% of cases [14]. However, since bedside consultations are not routinely available in many hospitals, written diagnosis-treatment recommendations on microbiological test reports have also been implemented with no effect on mortality [15].
Antimicrobial resistance at the hospital level is an issue that is unlikely to be tackled if delegated to infectious disease and clinical microbiology specialists alone, as knowledge of local epidemiology, antimicrobial prescribing, as well as interpretation of susceptibility results affects all hospitalists. In this regard, it is not known how rapid test results on the detection of the main antibiotic resistance enzymes can impact antibiotic consumption and clinicians' confidence to change antibiotic therapy, especially to de-escalation, since other resistance mechanisms non-detectable by the rapid test used could be present. Extended-spectrum-β-lactamase (ESBL)-producing Enterobacterales (EB) infections represent a worldwide issue concerning public health, especially given their association with poor outcomes, growing community-onset, and high ecological treatment cost [16][17][18][19]. ESBL enzymes are, in fact, the main actors in EB in conferring resistance to penicillins, cephalosporins, and aztreonam. Third-generation cephalosporin-resistant Escherichia coli and Klebsiella pneumoniae have been recently reported to contribute to high numbers of attributable deaths and disability-adjusted life-years per 100,000 individuals [20]. Moreover, from both therapeutic and ecological points of view, the burden of ESBL-producing EB infections is very heavy since carbapenems are the proven treatment option [19]. Although the ESBL family is heterogeneous, the global pandemic of plasmids carrying CTX-M type genes, which started mainly in the 2000s, is the main driver of ESBL dissemination in EB and has replaced other ESBL enzymes (i.e., mostly TEM, SHV derivatives) [21]. A recent survey, as part of the International Network for Optimal Resistance Monitoring (INFORM) global surveillance program on EB and Pseudomonas aeruginosa isolates collected from 18 European countries, reported 18.5% of ESBL-producers in E. coli isolates, CTX-M-type enzymes being the most frequently detected [22]. Similarly, 35.5% of K. pneumoniae isolates were ESBL-producers, and CTX-M-15 enzymes comprised more than 70% of ESBLs detected. Of note, an elevated incidence of SHV-type ESBL-producing K. pneumoniae was found in Southern Europe (17%), reaching 64% of those identified in Greece [22]. The recent introduction of lateral flow immune assays into the market has brought about a real revolution in the field of antimicrobial resistance detection, as it has given every laboratory the opportunity to equip itself with reliable tools without the need to have technical expertise or expensive instrumentation [23]. The lateral flow NG-Test CTX-M MULTI assay (NG Biotech, Guipry, France) exploits monoclonal antibodies specific for CTX-M variants belonging to group 1 (including CTX-M-15), group 2, group 8, group 9 (including CTX-M-14), and group 25. It detects CTX-M-type ESBLs from both bacterial cultures and pellets, providing results in <15 min without discriminating CTX-M variant or subgroup, and requires no specific storage constraints, minimal hands-on time, and no additional equipment [8,10,11,23]. Given the ESBLs epidemiological context and with the aim of both providing reliable and rapid microbiological results and maximizing cost-effectiveness, the NG-Test CTX-M MULTI assay (NG Biotech, Guipry, France) has been implemented in the BC workflow of our laboratory since November 2019 [11]. This study was aimed at assessing the impact on the antimicrobial prescription of direct detection of CTX-M ESBL-producers in E. 
coli-positive BCs in an Italian University hospital over a 2-year period. Discussion Rapid tests for the detection of the main resistance enzymes in Gram-negative bacteria have been developed with the final objectives of implementing efficient infection control measures and identifying resistance mechanisms so that the most appropriate antibiotic treatment can be started. E. coli belongs to the small number of Gram-negative bacteria with significant clinical impact and is, therefore, one of the most studied [24]. This study reported a real-life experience assessing the impact on clinicians' confidence and antimicrobial prescription of a newly introduced diagnosis of direct detection of CTX-M ESBL-producers in E. coli-positive BCs in a hospital in which carbapenem-sparing strategies had been implemented in recent years, mainly as educational interventions. The lateral flow NG-Test CTX-M MULTI assay proved to be well adapted to the Italian epidemiology, since only 3.8% of E. coli isolates that tested negative for CTX-M expressed ESBL enzymes other than CTX-M-types. Direct detection of a CTX-M positive result allowed more antibiotic therapies to be optimized (mainly towards carbapenem-containing regimens) on the same day of BC processing than detection of a CTX-M negative result did. However, in more than 20% of patients with direct detection of a CTX-M positive result, the antibiotic escalation was only performed after the antimicrobial susceptibility testing results were available, carbapenem-containing regimens being the most prescribed (79%). Direct detection of CTX-M negative results induced very few changes in therapy (13.6%). Antimicrobial susceptibility testing results obviously allowed more antibiotic de-escalations (mainly to 3rd-4th generation cephalosporin-containing regimens) in patients suffering from BSI caused by E. coli with a CTX-M negative result. Implementation of rapid point-of-care diagnostic tests for antimicrobial resistance markers is considered mandatory to achieve efficient infection control measures, identification of resistance mechanisms, and appropriate antibiotic therapy [23]. The choice of the most appropriate rapid diagnostic workflow from BCs should consider laboratory organization as well as local epidemiology of resistance mechanisms [12,13]. Several approaches have been evaluated and shown to provide very accurate results that take into account the presence of multiple mechanisms of resistance. The whole-genome sequencing-based approach provided results as reliable as those obtained by phenotypic AST [25,26]. Rapid AST was also favorably evaluated from BCs, conferring greater potential on the method, especially for antimicrobial de-escalation interventions [27]. However, both of these approaches are only rarely implemented at present for different logistical reasons. To the best of our knowledge, our study was the first that sought to quantify the degree to which clinicians acted on the microbiological report with CTX-M positive or negative results, verified by a cheap and easy-to-use rapid immunochromatographic test performed directly from the vial in a workflow dedicated to E. coli. The decision to communicate by laboratory information system with an extremely simple text was made to (1) leave no room for interpretation or attempted interference about the change of antibiotic therapy, given our unawareness of patients' clinical condition; (2) reach patients suffering from BSI caused by E.
coli with a CTX-M negative result and a multi-susceptible phenotype, who are rarely included in antimicrobial stewardship programs. Our results highlighted that direct detection of CTX-M ESBL-producing E. coli persuaded clinicians to escalate antibiotic therapy to a significant but unsatisfactory extent, given the considerable number of carbapenem-containing prescriptions following antimicrobial susceptibility testing results. We speculated that this finding might be related to either a lack of confidence in rapid test results or the "CTX-M" nomenclature, highlighting the need for multifaceted interventions targeting all the prescribers to inform them about the reliability [8,10,11] and operating principles of the rapid tests for the detection of the main resistance enzymes in Gram-negative bacteria. Conversely, communication of CTX-M negative results resulted in very few changes in therapy, mainly antibiotic therapy introduction and escalation, probably due to E. coli identification and specific clinical considerations, respectively. Although antibiotic de-escalation could present a dark side, reducing antimicrobial exposure is considered essential [28]. In our study, antimicrobial resistance patterns of E. coli with CTX-M positive and negative results were very different, with the latter showing much lower resistance rates to 3rd generation cephalosporins, piperacillin-tazobactam, and fluoroquinolones (<5%, <8% and <19%, respectively). This finding should prompt us to consider that knowledge of the local epidemiology (a low number of AmpC/ESBL-producing E. coli) together with knowledge of the patient's clinical condition might define the setting in which antibiotic de-escalation may be implemented on the basis of the rapid test result. The attempt to quantify the immediate benefits and limitations provided by the implementation of a rapid test on antimicrobial prescriptions of septic patients is certainly a strength of our study. The lack of knowledge of the clinical context (e.g., severity of BSI, source of infection, source control rate, use of clinical scores such as the INCREMENT-ESBL score), which might have influenced the choice of antibiotic escalation or de-escalation, and of patients' outcomes were the main limitations of this study. Conclusions The provision of a microbiological test report with the diagnosis of direct detection of CTX-M ESBL-producing E. coli resulted in a remarkable rate of antibiotic optimizations on the same day of BC processing. Moreover, observing antibiotic management following the availability of antimicrobial susceptibility testing results, this real-life study suggests that additional early optimizations in escalation could probably have been made if the rapid test data had been used. Direct detection of CTX-M negative results resulted in few therapeutic changes, which would probably have been more numerous had epidemiological and clinical data been integrated. Multifaceted interventions for all the prescribers to illustrate the potential of rapid tests for the detection of the main resistance enzymes in Gram-negative bacteria should be considered part of the implementation cost. Further studies on the impact on antibiotic prescribing of the written prediction of antibiotic susceptibility according to rapid resistance phenotype are desirable to establish a horizon in which to align clinical and ecological outcomes.
Study Design This study was performed at the University Hospital Città della Salute e della Scienza di Torino, a 1900-bed tertiary care teaching hospital in Turin, northwestern Italy, from July 2020 to June 2022. This study included all E. coli positive BCs deemed representative of a single bloodstream infection (BSI) event that were serially processed through two microbiological diagnostic workflows: (1) a rapid workflow with communication of the direct CTX-M ESBL-producer detection report on the laboratory information system; (2) a conventional culture-based workflow. The study considered only one BC bottle per patient/BSI event, while excluding those collected from patients with a Gram-negative BSI within the previous 15 days. Antimicrobial resistance patterns of E. coli isolates according to rapid phenotypic characterization, and antimicrobial management (empirical antibiotic therapy, therapeutic modifications after the rapid diagnostic result, therapeutic modifications after the availability of conventional susceptibility testing results), were retrospectively analyzed to assess the contribution of the rapid test result to antimicrobial management. Conventional Blood Culture Routine The Microbiology and Virology Unit is part of the Azienda Ospedaliera Universitaria "Città della Salute e della Scienza di Torino". Clinical samples are accepted seven days a week from 08:00 to 20:00. Outside these hours, routine clinical samples reaching the laboratory are processed the next day, while those deemed essential for the therapeutic management of patients are processed by an emergency laboratory technician and validated by a microbiologist. During the study period, BACT/ALERT® FA Plus aerobic, BACT/ALERT® FN Plus anaerobic, and BACT/ALERT® PF Plus pediatric bottles (bioMérieux, Marcy-l'Étoile, France) were used to process BCs and incubated in the BACT/ALERT® Virtuo® (bioMérieux). BC bottles flagged positive by the BACT/ALERT® Virtuo® underwent subculture on solid media at 36 ± 1 °C under appropriate atmosphere conditions, and slide preparation using the WASPLab® instrument (Copan, Brescia, Italy). The automated stainer Aerospray® (ELITechGroup, Turin, Italy) was used to perform Gram staining of slides. Microbial identification was performed on overnight subcultures with MALDI-TOF MS following the manufacturer's instructions. Susceptibility of nonfastidious Gram-negative isolates to several antibiotics (cefotaxime, ceftazidime, cefepime, piperacillin/tazobactam, ceftolozane/tazobactam, ceftazidime/avibactam, meropenem, imipenem, ertapenem, gentamicin, amikacin, colistin, trimethoprim/sulfamethoxazole, ciprofloxacin, and levofloxacin) was tested with an automated microdilution assay (Panel NMDR on the automated Microscan WalkAway 96 Plus System, Beckman Coulter, Switzerland) according to the manufacturer's instructions. EUCAST guidelines (https://www.eucast.org (accessed on 1 January 2023)) were used to identify ESBL- and carbapenemase-producing Enterobacterales strains, and confirmatory tests for resistance mechanisms were performed once the conventional antimicrobial susceptibility testing results became available. The phenotypic test included in the NMDR microdilution panel, based on the synergy of the β-lactamase inhibitor clavulanic acid on cefotaxime and ceftazidime MICs, was used to detect ESBLs.
Multiplex real-time polymerase chain reaction assay specific for blaCTX-M-like genes (ESBL ELITe MGB Kits, ELITechGroup Molecular Diagnostics, Turin, Italy) was used on isolates that tested positive by phenotypic test for ESBL detection. Eazyplex ® SuperBug AmpC (AmplexDiagnostics GmbH, Gars am Inn, Germany) was used for the detection of AmpC β-lactamases (ACC, CMY-II, DHA, MOX) on isolates with ceftazidime and/or cefotaxime MIC > 1 mg/L that tested negative by phenotypic test for ESBL detection. The genotypic assay Xpert Carba-R on the GeneXpert platform (Cepheid, Sunnyvale, CA, USA) was used to investigate the main carbapenemase genes in EB (blaKPC, blaNDM, blaVIM, blaIMP, and blaOXA-48-like) when meropenem and/or ertapenem MICs were >0.12 mg/L. Microbial identification and susceptibility results were promptly communicated to clinicians through the laboratory information system. Definitions Antibiotic therapy was deemed empirical when administered during the period prior to the receipt of conventional BC results. Combination therapy refers to the use of two or more antibiotics. Empirical antibiotic therapy was deemed active when a causative bacterial strain was susceptible in vitro to at least one prescribed drug. Antibiotic therapy introduction refers to starting antibiotic treatment in a patient who is not on empirical antibiotic therapy. Antibiotic escalation refers to the addition of a new antibiotic or a switch for a broader-spectrum agent. Antibiotic de-escalation refers to the discontinuation of an antibiotic or a switch for a narrower-spectrum agent. Statistical Analysis Descriptive data are presented as absolute (n) and relative (%) frequencies. Comparison involving dichotomous variables was tested using the χ2 test or Fisher Exact Test as appropriate. Statistical significance was set at p-value < 0.05. Informed Consent Statement: Informed consent was waived due to the retrospective nature of the study. Data Availability Statement: The dataset analyzed during the current study is available from the corresponding author upon reasonable request. Conflicts of Interest: The authors declare no conflict of interest.
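Taken together, the confirmatory-testing routine described in the Methods amounts to a small decision procedure. The sketch below encodes it for illustration only, with our own hypothetical function and field names; the MIC thresholds mirror those stated above.

import sys

# Minimal sketch of the reflex confirmatory-testing routine described above.
# ESBL phenotype positive (clavulanic acid synergy) -> blaCTX-M PCR;
# phenotype negative with cefotaxime or ceftazidime MIC > 1 mg/L -> AmpC assay;
# meropenem or ertapenem MIC > 0.12 mg/L -> Xpert Carba-R.
# Function and field names are hypothetical, not part of any laboratory system.

def reflex_tests(mics_mg_per_l, esbl_phenotype_positive):
    """Return the confirmatory tests suggested by the routine."""
    tests = []
    if esbl_phenotype_positive:
        tests.append("blaCTX-M multiplex real-time PCR")
    elif max(mics_mg_per_l.get("cefotaxime", 0.0),
             mics_mg_per_l.get("ceftazidime", 0.0)) > 1.0:
        tests.append("eazyplex SuperBug AmpC assay")
    if max(mics_mg_per_l.get("meropenem", 0.0),
           mics_mg_per_l.get("ertapenem", 0.0)) > 0.12:
        tests.append("Xpert Carba-R carbapenemase panel")
    return tests

if __name__ == "__main__":
    isolate = {"cefotaxime": 8.0, "ceftazidime": 4.0,
               "meropenem": 0.06, "ertapenem": 0.06}
    print(reflex_tests(isolate, esbl_phenotype_positive=True))
    # -> ['blaCTX-M multiplex real-time PCR']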
The minimal length uncertainty and the quantum model for the stock market We generalize the recently proposed quantum model for the stock market by Zhang and Huang to make it consistent with the discrete nature of the stock price. In this formalism, the price of the stock and its trend satisfy the generalized uncertainty relation and the corresponding generalized Hamiltonian contains an additional term proportional to the fourth power of the trend. We study a driven infinite quantum well where information, as the external field, periodically fluctuates, and show that the presence of the minimal trading value of stocks results in a positive shift in the characteristic frequencies of the quantum system. Finally, the connection between the information frequency and the transition probabilities is discussed. Introduction Econophysics as an interdisciplinary research field was started in the mid-1990s by physicists interested in applying theories and models originally developed in physics to the complex problems that appear in economics, especially in financial markets [1]. Because of the stochastic nature of the financial markets, the majority of tools for market analysis such as stochastic processes and nonlinear dynamics have their roots in statistical physics. Besides statistical physics, other branches of physics and mathematics have a major role in the development of econophysics. The sophisticated tools developed in quantum mechanics such as perturbation theory, path integral (Feynman-Kac) methods, random matrix and spin-glass theories have been shown to be useful for option pricing and portfolio optimization. Within theoretical physics, quantum field theory plays a special role in revealing the intricacies of nature from quantum electrodynamics to critical phenomena. For instance, it can be used to model portfolios as a financial field and to describe the change of financial markets via path integrals and differential manifolds [2,3]. The application of quantum mechanics to financial markets has attracted much attention in recent years as a way to model financial behavior with the laws of quantum mechanics, and this approach is by now rather well established [4][5][6][7][8][9][10]. For instance, Schaden, contrary to stochastic descriptions, used quantum theory to model secondary financial markets to show the importance of trading in determining the value of an asset [11]. He considered securities and cash held by investors as the wave function to construct the Hilbert space of the stock market. Another useful application of quantum theory to trading strategies is quantum game theory, which is the generalization of classical game theory to the quantum domain [12,13]. This theory is primarily based on quantum cryptography and contains superimposed initial wave functions, quantum entanglement of initial wave functions, and superposition of strategies in addition to its classical counterpart. At this point, it is worth explaining why quantum mechanics is essential to study the behavior of the stock market. Classical mechanics, which is described by Newton's law of motion, is deterministic in the sense that it exactly predicts the position of a particle at each instant of time. This is similar to the evolution of a stock price with zero volatility (σ = 0) that results in a deterministic evolution of the stock price.
However, in the context of quantum mechanics, the evolution of the position of the particle has a probabilistic interpretation, which is similar to the evolution of a stock price with a non-zero volatility (σ ≠ 0) [3]. Note that there is a close connection between the Black-Scholes-Merton (BSM) equation [14,15] and the Schrödinger equation: The position of a quantum particle is a random variable in quantum mechanics, and similarly, the price of a security is a random variable in finance. Also, the Schrödinger equation admits a complex wave function, whereas the BSM equation is a real partial differential equation which can be considered as the Schrödinger equation for imaginary time. Haven showed that the BSM equation is a special case of the Schrödinger equation where markets are assumed to be efficient [16]. Indeed, various mathematical structures of quantum theory such as probability theory, state space, operators, Hamiltonians, commutation relations, path integrals, quantized fields, fermions and bosons have natural and useful applications in finance. In the language of Schaden, "The evolution into a superposition of financial states and their measurement by transaction is my understanding of quantum finance" [17]. Recently, Zhang and Huang have proposed a new quantum financial model in econophysics and defined wave functions and operators of the stock market to construct the Schrödinger equation for studying the dynamics of the stock price [18]. They solved the corresponding partial differential equation of a given Hamiltonian to find a quantitative description for the volatility of the Chinese stock market. In their formalism, the wave function ψ(℘, t) is considered as the price distribution, where ℘ denotes the stock price and t is the time. There, the stock price is approximately treated as a continuous variable. However, the stock price is actually a discrete variable and admits a non-zero minimal price length (∆℘)_min ≠ 0 which depends on the stock market's local currency. In this paper, we incorporate the discrete nature of the stock price into the quantum description of the stock market. We show that the uncertainty relation between the price and its trend, and the form of the Hamiltonian, should be modified to make the quantum formulation consistent with the discrete property of the stock price. Note that Bagarello has also presented quantum financial models which describe quantities that assume discrete values [6][7][8][9][10]. However, Bagarello's approach is mainly based on the Heisenberg picture rather than on the Schrödinger equation. The Quantum Model Before applying quantum theory to finance we need to identify the macro-scale and micro-scale objects of the stock market. Since the stock index is based on the share prices of many representative stocks, it is meaningful to consider the stock index as a macro system and take every stock as a micro-scale object [18]. Note that each stock is always traded at a certain price, which shows the particle behavior. Also, the stock price always fluctuates in the market, which is the wave property. Therefore, because of this wave-particle duality, we can consider the micro-scale stock as a quantum system. Now we can construct the quantum model for the stock market based on the postulates of quantum mechanics. First, we introduce the wave function ψ(℘_0, t) as the vector in the Hilbert space which describes the state of the quantum system. More precisely, ψ(℘_0, t) is the state vector |ψ, t⟩ in the price representation, i.e.
ψ(℘_0, t) ≡ ⟨℘_0|ψ, t⟩. Also, we take the modulus square of the wave function as the price distribution and demand that the superposition principle of quantum mechanics also holds, |ψ, t⟩ = Σ_n c_n |φ_n⟩, where the |φ_n⟩ are the possible orthonormal basis states of the stock system and c_n = ⟨φ_n|ψ⟩. Therefore, the state of the stock price before trading should be a superposition of its various possible states with different prices, a so-called "wave packet". We can consider a trading process, buy or sell at some price, as a physical observation or measurement. So the trading process projects the state of the stock to one of the possible states with a definite price, where |c_n|² denotes its probability. In other words, we can interpret |ψ(℘_0, t)|² as the probability density of the stock price versus time, namely P(a ≤ ℘_0 ≤ b; t) = ∫_a^b |ψ(℘_0, t)|² d℘_0, which shows the probability of the stock price being between a and b at time t. The temporal evolution is generated by the Hamiltonian, which results in the following Schrödinger equation, i ∂ψ(℘_0, t)/∂t = Ĥ(℘_0, T_0, t) ψ(℘_0, t), where the Hamiltonian is a function of price, trend, and time and generates the temporal evolution of the quantum system. The trade of a stock can be considered as the basic process that measures its momentary price. This measurement can only be performed by changing the owner of the stock, which represents the Copenhagen interpretation of a quantum system [11]. Therefore, a measurement may change the outcome of subsequent measurements, so that it cannot be described by ordinary probability theory. Indeed, we can never simultaneously know both the ownership of a stock and its price. The stock price can only be determined at the time of sale when it is between traders. Moreover, the owners decide to sell or buy the stock at higher or lower prices, which determines the trend of the stock price. So, in the quantum domain, the stock price and stock trend operators satisfy the following uncertainty relation, ∆℘_0 ∆T_0 ≥ 1/2, where T_0 = −i∂/∂℘_0. However, as we show in the next section, this relation should be modified when we take into account the discrete nature of the stock price. The Generalized Uncertainty Principle According to the above uncertainty relation, in principle, we can separately measure the price and the trend with arbitrary precision. However, since the price is a discrete variable, there is a genuine lower bound on the uncertainty of its measurement. Thus, the ordinary uncertainty principle should be modified to the so-called generalized uncertainty principle (GUP). Here we consider a GUP which results in a minimum price uncertainty, ∆℘ ∆T ≥ (1/2)[1 + β_0 (∆T)² + ζ], where β_0 and ζ are positive constants which depend on the expectation value of the price and the trend operators. In ordinary quantum mechanics ∆℘ can be made arbitrarily small as ∆T grows correspondingly. However, this is no longer the case if the above relation holds. For instance, if ∆℘ decreases and ∆T increases, the new term β_0(∆T)² will eventually grow faster than the left-hand side and ∆℘ cannot be made arbitrarily small. Now the boundary of the allowed region in the ∆℘∆T plane is given by ∆℘ = (1/2)[(1 + ζ)/∆T + β_0 ∆T], which yields the following minimal price uncertainty: (∆℘)_min = √((1 + ζ)β_0). The above uncertainty relation can be obtained from the deformed commutation relation [℘, T] = i(1 + β_0 T²). Because of the extra term β_0 T², this relation cannot be satisfied by the ordinary price and trend operators, since they obey the canonical commutation relation [℘_0, T_0] = i. However, we can write them in terms of the ordinary operators as ℘ = ℘_0 and T = T_0(1 + (β_0/3)T_0²). It is easy to check that, using this definition, Eq. (8) is satisfied to first order in the GUP parameter, i.e. [℘, T] = i(1 + β_0 T² + O(β_0²)), where ℘ and T are given by Eqs. (9) and (10).
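As a quick numerical sanity check of the bound just written (a minimal sketch, assuming the boundary curve ∆℘ = (1/2)[(1 + ζ)/∆T + β_0 ∆T]; the parameter values are arbitrary illustrations), the minimum of the curve can be compared against √((1 + ζ)β_0):

import numpy as np

# Boundary of the allowed region: dP = 0.5*((1 + zeta)/dT + beta0*dT).
beta0, zeta = 1e-4, 0.0                      # illustrative GUP parameters
dT = np.linspace(0.1, 1.0e3, 200_000)
dP = 0.5 * ((1.0 + zeta) / dT + beta0 * dT)

numeric_min = dP.min()
analytic_min = np.sqrt((1.0 + zeta) * beta0)
print(numeric_min, analytic_min)             # both ~0.01: (dP)_min = sqrt((1+zeta)*beta0)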
Note that since [℘, T] ≠ i, we cannot further consider the generalized trend operator T as the derivative with respect to price. Now using Eqs. (5,8) and ∆℘∆T ≥ (1/2)|⟨[℘, T]⟩| we find ζ = β_0⟨T⟩². So, using Eq. (7), the absolutely smallest uncertainty in price occurs when the expectation value of the trend operator (or ζ) vanishes, namely ⟨T⟩ = 0 = ζ. We can interpret (∆℘)_min as the minimal price length; it indicates that we cannot measure the price of a stock with uncertainty less than (∆℘)_min, which agrees with the discrete nature of the stock price. It is worth mentioning that the generalized uncertainty relation also appears in the context of quantum gravity, where there is a minimal observable length proportional to the Planck length [19][20][21][22][23]. In string theory one can interpret this length as the length of strings. Since ℘ and T do not exactly satisfy Eq. (8), our approach is essentially perturbative. Obviously, this procedure affects all Hamiltonians in quantum financial models. To proceed further, let us consider the following Hamiltonian: Ĥ = T²/(2m_0), which using Eq. (9) can be written as Ĥ = T_0²/(2m_0) + (β_0/3m_0)T_0⁴, where we have neglected terms of order β_0². In most Chinese stock markets there is a price limit rule: the rate of return in a trading day compared with the previous day's closing price cannot be more than ±10%. So the stock price fluctuates between the price limits, or in a one-dimensional infinite well (particle in a box). The size of the box is d_0 = ℘̄ × 20%, where ℘̄ is the previous day's closing price. Now if we use a transformation of coordinate ℘′ = ℘_0 − ℘̄, the infinite square well will be symmetric with width d and we can define the absolute return as r = ℘′/℘̄. So the rate of the return is the new coordinate variable and the well's width becomes d = 20%. Now we can write the GUP corrected Hamiltonian inside the well, in the absence of external factors, approximately as Ĥ = −(1/2m) ∂²/∂r² + (β/3m) ∂⁴/∂r⁴ (14), where m = m_0/℘̄² and β = β_0/℘̄², and it is valid to first order in the GUP parameter. This Hamiltonian has exact eigenvalues and eigenfunctions [22], E_n = (1/2m)(nπ/d)²[1 + (2β/3)(nπ/d)²] (16) and φ_n(r) = √(2/d) sin[(nπ/d)(r + d/2)] (17), where n = 1, 2, 3, . . .. To write the total Hamiltonian we need to add the potential which describes the effects of information on the stock price. The market information usually results either in the increase of the stock price or in the decrease of the stock price. Here, similar to Ref. [18], we consider a periodical idealized model which represents the two types of information. This form of potential also appears for a charged particle moving in an electromagnetic field, with the difference that the information plays the role of the external fields. So, up to the dipole approximation, we can write the GUP corrected Hamiltonian of this coupled system as Ĥ = −(1/2m) ∂²/∂r² + (β/3m) ∂⁴/∂r⁴ + λ r cos(ωt) (18), where ω is the frequency of information and λ denotes the amplitude of the information field. The first two terms of the above equation represent the GUP corrected kinetic energy of the stock return and the last term corresponds to the potential energy due to the presence of information in the stock market. Note that the choice of the Hamiltonian in (18) is not the only one and we can replace cos(ωt) with sin(ωt), as well as with some other periodic functions. To find the temporal evolution of the wave function in the price representation, we need to solve the following Schrödinger equation: i ∂ψ(r, t)/∂t = [−(1/2m) ∂²/∂r² + (β/3m) ∂⁴/∂r⁴ + λ r cos(ωt)] ψ(r, t). To solve this equation, we can use the perturbative procedure that is also used in Ref. [9] in connection with stock markets. Since the exact solutions for λ = 0 are presented in Eqs. (16) and (17), we can expand the solutions in terms of these state vectors [24], ψ(r, t) = Σ_n c_n(t) e^{−iE_n t} φ_n(r), where the coefficients obey i ċ_n(t) = λ cos(ωt) Σ_k ⟨n|r|k⟩ e^{i(E_n − E_k)t} c_k(t). By repeatedly substituting this expression back into the right-hand side, we obtain an iterative solution c_n(t) = c_n^(0) + c_n^(1)(t) + · · ·, where, for instance, c_n^(0) = c_n(0) and the first-order term is c_n^(1)(t) = −iλ Σ_k c_k(0) ⟨n|r|k⟩ ∫_0^t cos(ωt′) e^{i(E_n − E_k)t′} dt′. If we take the initial wave function as the ground state of the unperturbed Hamiltonian (a cosine distribution to simulate the state of the stock price in equilibrium), i.e. ψ(r, 0) = ⟨r|1⟩ = √(2/d) cos(πr/d), we have c_n(0) = δ_{1n}, which results in [9] c_n^(1)(t) = −iλ ⟨n|r|1⟩ ∫_0^t cos(ωt′) e^{i(E_n − E_1)t′} dt′, where ⟨n|r|1⟩ = −8nd/[(n² − 1)²π²] for n even and ⟨n|r|1⟩ = 0 for n odd. By evaluating the time integral we get c_n^(1)(t) for n even, and whenever the information frequency coincides with one of the characteristic frequencies ω_n = E_n − E_1 we observe a large transition probability from the ground state to the (n − 1)th excited state, namely ω_n = ω_n^0 [1 + (2β/3)(π/d)²(n² + 1)], where ω_n^0 = (π²/2md²)(n² − 1) are the characteristic frequencies in the continuous limit. In the Chinese stock market the average stock price is approximately 10 Yuan and (∆℘)_min = 0.01 Yuan. To find m we need to calculate the mean daily volatility from the annual volatility, which then fixes the numerical values of ω_n for large n. Since ω_n > 4 × 10⁻³ s⁻¹, if a single cycle of information fluctuation is larger than 25 minutes there is no large transition probability to other states and the probability density of the rate of stock return approximately maintains its shape over time (see Fig. 2 of Ref. [18] for ω = 10⁻⁴ s⁻¹). Note that, in quantum gravity, it is usually assumed that the minimal length is of the order of the Planck length ℓ_Pl ∼ 10⁻³⁵ m. However, the existence of this infinitesimal length is not yet confirmed by experiment. On the other hand, in quantum finance the minimal trading value is not too small, which makes the effects essentially detectable. In other words, the application of the GUP in the quantum description of finance is more meaningful than in quantum physics. Conclusions We have studied the effects of the discreteness of the stock price on the quantum models for the stock markets. In this formalism, the minimum trading value of every stock is not zero and the stock price and its trend satisfy the generalized uncertainty relation. This modifies all Hamiltonians of the stock markets and adds a term proportional to the fourth power of the trend to the Hamiltonians. For the quantum model proposed by Zhang and Huang, where there is a price limit rule and the information has a periodic fluctuation, we obtained the characteristic frequencies of the quantum system. If the frequency of information fluctuation coincides with ω_n, we have a large transition probability to the (n − 1)th excited state. We also showed that the discrete nature of the stock price results in a positive frequency-dependent shift in the characteristic frequencies, where for the Chinese stock market we have ω_n > 4 × 10⁻³ s⁻¹.
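As a closing numerical illustration of the positive shift derived above, the sketch below assumes the spectrum E_n = (1/2m)(nπ/d)²[1 + (2β/3)(nπ/d)²] and prints the relative correction to the characteristic frequencies; the values of m and β are placeholders, since the text fixes them only implicitly through the volatility and the minimal tick size.

import numpy as np

# Characteristic frequencies omega_n = E_n - E_1 of the GUP-corrected well,
# assuming E_n = (1/2m)(n*pi/d)^2 * (1 + (2*beta/3)*(n*pi/d)^2).
# m and beta are illustrative placeholders, not fitted market values.
m, d, beta = 1.0e6, 0.20, 1.0e-6

def energy(n):
    k = n * np.pi / d
    return k**2 / (2.0 * m) * (1.0 + (2.0 * beta / 3.0) * k**2)

def omega(n):
    return energy(n) - energy(1)

def omega0(n):
    return np.pi**2 / (2.0 * m * d**2) * (n**2 - 1)  # continuous limit

for n in (2, 4, 8):
    rel_shift = omega(n) / omega0(n) - 1.0
    print(n, omega0(n), rel_shift)  # rel_shift = (2*beta/3)*(pi/d)^2*(n^2+1) > 0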
Resonantly enhanced photoionization in correlated three-atomic systems Modifications of photoionization arising from resonant electron-electron correlations between neighbouring atoms in an atomic sample are studied. The sample contains atomic species A and B, with the ionization potential of A being smaller than the energy of a dipole-allowed transition in B. The atoms are subject to an external radiation field which is near-resonant with the dipole transition in B. Photoionization of an atom A may thus proceed via a two-step mechanism: photoexcitation in the subsystem of species B, followed by interatomic Coulombic decay. As a basic atomic configuration, we investigate resonant photoionization in a three-atomic system A-B-B, consisting of an atom A and two neighbouring atoms B. It is found that, under suitable conditions, the influence of the neighbouring atoms can strongly affect the photoionization process, including its total probability, time development and photoelectron spectra. In particular, by comparing our results with those for photoionization of an isolated atom A and a two-atomic system A-B, respectively, we reveal the characteristic impact exerted by the third atom. Introduction Photoionization of atoms and molecules is one of the most fundamental quantum processes. It played a key role in the early days of quantum mechanics and has ever since been paving the way towards an improved understanding of the structure and dynamics of matter on a microscopic scale. Today, kinematically complete photoionization experiments allow for accurate tests of the most sophisticated ab-initio calculations. Besides, photoionization studies in a new frequency domain are currently becoming feasible thanks to the availability of novel xuv and x-ray radiation sources [1,2,3], giving rise to corresponding theoretical developments (see, e.g., [4,5,6]). Various photoionization mechanisms rely crucially on electron-electron correlations. Prominent examples are single-photon double ionization as well as resonant photoionization. The latter proceeds through resonant photoexcitation of an autoionizing state with subsequent Auger decay. In recent years, a similar kind of ionization process has been studied in systems consisting of two (or more) atoms. Here, a resonantly excited atom transfers its excitation energy radiationlessly via interatomic electron-electron correlations to a neighbouring atom leading to its ionization. This Auger-like decay involving two atomic centers is commonly known as interatomic Coulombic decay (ICD) [7,8]. It has been observed, for instance, in noble gas dimers and water molecules [9]. In metal oxides, the closely related process of multi-atom resonant photoemission (MARPE) was also observed [10]. We have recently studied resonant two-center photoionization in heteroatomic systems and shown that this ionization channel can be remarkably strong [11,12,13]. In particular, it can dominate over the usual single-center photoionization by orders of magnitude. Besides, characteristic effects resulting from a strong coupling of the ground and autoionizing states by a relatively intense photon field were identified. Also resonant two-photon ionization in a system of two identical atoms was investigated [14]. We note that photoionization in two-atomic systems was also studied in [15,16] and [17,18]. The inverse of two-center photoionization (in weak external fields) is two-center dielectronic recombination [19].
In the present contribution, we extend our investigations of electron correlation-driven interatomic processes by considering photoionization of an atom A in the presence of two neighbouring atoms B (see figure 1). All atoms are assumed to interact with each other and with an external radiation field. We show that the photoionization of atom A via photoexcitation of the system of two neighbouring atoms B and subsequent ICD can be by far the dominant ionization channel. Moreover, we reveal the characteristic properties of the process with regard to its temporal dependence and photoelectron spectra. In particular, by comparing our results with those for photoionization in a system of two atoms A and B, we demonstrate the influence which the presence of the second atom B may have. Atomic units (a.u.) are used throughout unless otherwise stated. Theoretical Framework Let us consider a system consisting of three atoms, A, B and B′, where B and B′ are atoms of the same element and A is different. We shall assume that all these atoms are separated by sufficiently large distances such that free atomic states represent a reasonable initial basis set to start with. Let the ionization potential I_A of atom A be smaller than the excitation energy ∆E_B of a dipole-allowed transition in atoms B and B′. Under such conditions, if our system is irradiated by an electromagnetic field with frequency ω_0 ≈ ∆E_B, the ionization process of this system (i.e., essentially of the atom A) can be qualitatively different compared to the case when a single, isolated atom A is ionized. Indeed, in such a case A can be ionized not only directly but also via resonant photoexcitation of the subsystem of B and B′, with its consequent deexcitation through energy transfer to A resulting in ionization of the latter. In the following, we consider photoionization in the system of atoms A, B and B′ in more detail. For simplicity, we suppose that the nuclei of all atoms are at rest during photoionization. Denoting the origin of our coordinate system by O, we assume that the nuclei of the atoms B and B′ are located on the Z-axis: R_B = (0, 0, Z_B) and R_B′ = (0, 0, Z_B′). The coordinates of the nucleus of the atom A are given by R_A = (X_A, Y_A, Z_A). The coordinates of the (active) electron of atom λ with respect to its nucleus are denoted by r_λ, where λ ∈ {A, B, B′}. The total Hamiltonian describing the three atoms embedded in an external electromagnetic field reads Ĥ = Ĥ_0 + V̂_AB + V̂_AB′ + V̂_BB′ + Ŵ_A + Ŵ_B + Ŵ_B′ (1), where Ĥ_0 is the sum of the Hamiltonians for the noninteracting atoms A, B and B′. We shall assume that the (typical) distances ∆R between the atoms are not too large, ∆R ≪ c/∆E_B, where c is the speed of light, such that retardation effects in the electromagnetic interactions can be ignored. If transitions of electrons between bound states in atoms B and B′ are of dipole character, then the interaction between each pair of atoms (λ, γ) (with λ, γ ∈ {A, B, B′}) can be written as V̂_λγ = [δ_ij − 3(R̂_λγ)_i(R̂_λγ)_j] (r_λ)_i (r_γ)_j / |R_λγ|³ (2), where R_λγ = R_λ − R_γ, R̂_λγ = R_λγ/|R_λγ|, and δ_ij is the Kronecker symbol. Note that in (2) a summation over the repeated indices i and j is implied. In (1), Ŵ_λ denotes the interaction of the atom λ with the laser electromagnetic field. The latter will be treated as a classical, linearly polarized field, described by the vector potential A(r, t) = A_0 cos(ω_0 t), where A_0 = cF_0/ω_0, ω_0 = ck_0 is the angular frequency and F_0 is the field strength. The interaction Ŵ_λ then reads Ŵ_λ = (1/c) A · p̂_λ = (1/ω_0) F_0 cos(ω_0 t) · p̂_λ (3), where p̂_λ is the momentum operator for the electron in atom λ.
Our treatment of photoionization will be based on the following points: Oscillator strengths for dipole-allowed bound-bound transitions can be very strong. This means that, provided that the distances between all the atoms in our system are of the same order of magnitude, the interaction between atoms B and B ′ is much more effective than the interaction between atoms A and B (or A and B ′ ). Besides, atoms B and B ′ will, in general, couple much more strongly to a resonant laser field than atom A. In what follows, we shall assume that the intensity of the laser field is relatively low such that the interaction between atoms B and B ′ changes the states of the system more substantially than the coupling of these atoms to the laser field. Therefore, we shall begin with building states of the B-B ′ subsystem in the absence of the field. The second step of our treatment will be to include the interaction of the B-B ′ subsystem with the laser field and, in the third step, we complete the treatment of ionization by considering the interaction of atom A with both the laser field and the field-dressed subsystem of atoms B and B ′ . I. We denote the ground and excited states of the undistorted atoms B and B ′ by φ 0 , φ e and φ ′ 0 , φ ′ e , respectively. Let the corresponding energies of these states be ε 0 and ε e . The state ψ BB ′ of the B-B ′ subsystem can be expanded into the "complete" set of undistorted atomic states represented by the configurations In the approximation, which neglects the interatomic interaction, the configurations φ 0 φ ′ e and φ e φ ′ 0 are characterized by exactly the same value of the (undistorted) energy E 0e = ε 0 + ε e . The latter, in turn, strongly differs from the energies E 00 = 2ε 0 and E ee = 2ε e which are characteristic for the configurations φ 0 φ ′ 0 and φ e φ ′ e , respectively. Therefore, provided that the distance between the atoms is not too small, the interaction V BB ′ will strongly mix the configurations (ii) and (iii) only, while the other configurations (i) and (iv) will be affected only very weakly. Taking this into account, it is not difficult to find the states of the subsystem of interacting atoms B and B ′ which read These two-atomic states are normalized and mutually orthogonal. They posses energies given by Note that, for definiteness, v BB ′ has been assumed to be real and negative here, as will always be the case in our examples below (see section 3). II. Let us now consider two interacting atoms B and B ′ embedded in a resonant laser field. One can look for a state of such a system by expanding it into the new set of states given by Eq. (4), Inserting the expansion (5) into the corresponding wave equation, we obtain a set of coupled equations for the unknown time-dependent coefficients g(t), a + (t), a − (t) and b(t): The system of equations (6) can be greatly simplified by noting the following. First, all transition matrix elements of the interaction with the laser field, which involve the asymmetric state ϕ − , are equal to zero and, thus, only the remaining three states can be coupled by the field. Second, if we suppose that the frequency of the laser field is resonant to the transitions ϕ g ←→ ϕ + and that the field is relatively weak such that the non-resonant transitions ϕ + ←→ ϕ e are much less effective than the above resonant ones, the system (6) effectively reduces to which can be readily solved by using the rotating wave approximation. 
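The explicit form of the mixed states ϕ_± introduced in step I is not displayed above; under the stated assumptions (the degenerate configurations φ_0 φ′_e and φ_e φ′_0 mixed by V_BB′, whose matrix element v_BB′ is real and negative), they presumably take the standard symmetric and antisymmetric form:

\varphi_{\pm} = \frac{1}{\sqrt{2}}\left( \phi_{0}\phi'_{e} \pm \phi_{e}\phi'_{0} \right),
\qquad
E_{\pm} = E_{0e} \pm v_{BB'},
\qquad
v_{BB'} = \langle \phi_{0}\phi'_{e} \,|\, \hat{V}_{BB'} \,|\, \phi_{e}\phi'_{0} \rangle .

With this form, the symmetric state ϕ_+ is the one whose dipole coupling to the ground configuration is collectively enhanced, consistent with the matrix element W_{g,+} ∝ (p̂_B + p̂_B′) used below.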
Assuming that the field is switched on suddenly at t = 0, we obtain two solutions. In these solutions we have introduced the Rabi frequency Ω_R = √[(E_+ − E_g − ω_0)² + 4|W_{g,+}|²], where W_{g,+} = ⟨ϕ_g| F_0 · (p̂_B + p̂_B′)/(2ω_0) |ϕ_+⟩ and W_{+,g} = (W_{g,+})*. The two solutions in (9) correspond to two different initial conditions: at t = 0 the system is either in the state ϕ_g or in ϕ_+. They are orthogonal to each other and form a "complete" set of field-dressed states of the subsystem B-B′. Note also that we have neglected the spontaneous radiative decay of the excited state ϕ_+, which, in our case, is justified as long as |W_{g,+}| ≫ Γ_r, where Γ_r is the radiative width of ϕ_+. III. Now, as the last step, we shall add atom A to our consideration. Let χ_0 and χ_p, where p is the electron momentum, be the ground and a continuum state of a single, isolated atom A. The wavefunction of the total system A-B-B′ can be expanded into the following "complete" set of states. Here, the initial conditions are given by α_0(0) = 1, β_0(0) = 0 and α_p(0) = β_p(0) = 0. The coupling of atom A to both the subsystem B-B′ and the laser field involves bound-continuum transitions, which are normally much less effective than the bound-bound ones. For this reason, we may assume that the interactions of A with the laser field and the B-B′ subsystem are weak and consider ionization of atom A in the lowest order of perturbation theory in these two interactions. As a result, by inserting the expansion (11) into the corresponding Schrödinger equation we obtain equations (12), where ε_g^A is the energy of the electron in the initial state χ_0 of atom A and ε_p^A is the electron energy after the emission. The probability for ionization of the three-atomic system, as a function of time, then reads P(t) = Σ_p (|α_p(t)|² + |β_p(t)|²). Note that equations (12) are readily solved analytically. However, the resulting expressions are somewhat lengthy and will not be given here.

Figure 2. Photoionization probability for Li, Li-He and Li-He-He systems in an external electromagnetic field, given as a function of time. The field strength is F_0 = 10⁻⁵ a.u., the field is linearly polarized and its frequency is resonant to the corresponding transition in the He or He-He subsystem. The distance between Li and each of the He atoms is always 14 a.u. The atomic positions are aligned along the field polarization with the Li atom in the middle of the three-atomic system. The solid, dashed and dotted curves display results for the Li-He-He, Li-He and Li systems, respectively. Note that the ionization probability for an isolated Li atom has been multiplied by a factor of 500. For more explanation see the text.

Results and Discussion Based on the results obtained in the previous section, let us now turn to the discussion of some aspects of photoionization in a system consisting of one lithium and two helium atoms. We suppose that in our three-atomic system the positions of the lithium and helium atoms are given by the vectors R_Li = (0, 0, 0), R_He = (0, 0, Z) and R_He′ = (0, 0, −Z), respectively. Our system is initially (at time t = 0) in its ground configuration and is irradiated by a monochromatic laser field. The field is linearly polarized along the Z-axis and its frequency is resonant to the ϕ_g-ϕ_+ transition in the He-He subsystem, i.e., E_+ − E_g − ω_0 = 0. In figure 2, we present the probability for ionization of our system as a function of time.
The probability shows a non-monotonous behaviour in which time intervals, when the ionization probability rapidly increases, are separated by intervals, when the probability remains practically constant, reflecting oscillations of the electron populations with the Rabi frequency Ω R He−He between the ground and excited states of the He-He subsystem in a resonant electromagnetic field. For comparison, we also show in figure 2 results for ionization of a single (separated) Li atom and for ionization in a two-atomic Li-He system. In the latter case, the lithium atom is located at the origin (R Li = (0, 0, 0)) and the coordinates of the helium atom are R He = (0, 0, 14 a.u.). The frequency of the laser field is assumed to be resonant to the 1s 2 1 S-1s2p 1 P transition frequency of the corresponding bound states of a single He atom. In contrast to the single-atom ionization, in both the two-and three-atomic cases the ionization probability demonstrates a step-wise temporal development in which time intervals of rapid probability growth are followed by intervals of almost constant probability. We point out that in the three-atomic case, however, the size of these time intervals is shorter by a factor of √ 2. Compared to ionization of a single Li atom, ionization in the two-atomic system is very strongly enhanced [11,12,13]. When the three-atomic system is irradiated, the enhancement increases even further. In the range of small values of t, where all ionization probabilities still increase monotonously, this additional enhancement is equal to a factor of 4. At larger t, however, when the two ionization probabilities exhibit stepwise behaviours, this additional enhancement due to the presence of the second He atom is reduced to a factor close to 2 on average, as can also be seen in figure 2. All the above features can be understood by noting the following: i) For the chosen set of parameters of our two-and three-center systems, the indirect channels of ionization, which involve two-or three-atomic correlations, are substantially stronger than the direct one. Therefore, these correlations have a dominating effect on the ionization. ii) At small t, ionization in the two-and three-atomic systems is basically a two-step process: the first step is photoexcitation in the He or He-He subsystem and the second step is a consequent energy transfer to Li. In each case, both these steps are described by basically the same dipole transition matrix element of the subsystem. Since, compared to a single He atom, this dipole element in He-He is by √ 2 larger than in He, one obtains a factor of 2 for the enhancement in the ionization amplitude, leading to a factor of 4 in the ionization probability (see also [20]). iii) At larger t, when Rabi oscillations show up, the second step "saturates" in the sense that the averaged probability to find the corresponding subsystem in the excited state becomes equal to 50%. Therefore, the ionization probability in the three-atomic system is now larger (on average) by a factor of 2 only. iv) The origin of the step-wise behaviours of the ionization probabilities for the two-and three-atomic systems lies in the oscillations of the population between the ground and excited states in the He atom (for the two-atomic case) or in the He-He subsystem (for the three-atomic case). The scale of these oscillation is set by the Rabi frequency and, because in the He-He subsystem the latter is larger by a factor of √ 2, the corresponding time intervals are shorter by the same factor. 
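The step-wise growth and the factor-√2 timing discussed above can be reproduced with a minimal toy model (ours, not the authors' full calculation): treat the excited-state population of the resonantly driven subsystem as sin²(Ω_R t/2) and let the ionization probability accumulate at a rate proportional to that population, with the ICD rate doubled for He-He to reflect the enhanced dipole matrix element.

import numpy as np

# Toy model: P_ion(t) = gamma * integral_0^t sin^2(rabi*t'/2) dt'.
# For Li-He-He both the Rabi frequency (sqrt(2)*Omega) and the ICD rate
# (2*gamma) are enhanced, giving a ratio of ~4 at early times and ~2 on
# average once Rabi oscillations set in. Units are arbitrary.
Gamma, Omega = 1.0e-3, 1.0
t = np.linspace(0.0, 40.0, 4001)
dt = t[1] - t[0]

def p_ion(rabi, gamma):
    population = np.sin(rabi * t / 2.0) ** 2        # excited-state population
    return gamma * np.cumsum(population) * dt

p_two = p_ion(Omega, Gamma)                         # Li-He
p_three = p_ion(np.sqrt(2.0) * Omega, 2.0 * Gamma)  # Li-He-He
print(p_three[5] / p_two[5], p_three[-1] / p_two[-1])  # ~4 early, ~2 late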
Additional information about the ionization process can be obtained by considering the energy spectrum of emitted electrons. Such a spectrum is shown in figure 3 for the same systems and parameters as in figure 2 and for a pulse duration of T = 100 ps. In panel (a), we compare the energy spectra of electrons emitted in the process of photoionization of Li-He-He and Li-He systems. In both cases, the main feature is the presence of three pronounced maxima. The origin of these peaks is similar to the splitting into three lines of the energy spectrum of photons emitted during atomic fluorescence in a resonant electromagnetic field [21]. In such a field, the ground and excited levels of the He and He-He subsystems split into two sub-levels, which differ by the corresponding Rabi frequency Ω R . As a result, the resonant electronic correlations between these subsystems and the Li atom lead to an energy transfer to the Li which peaks at ω 0 and ω 0 ± Ω R /2. Since, as was already mentioned, the Rabi frequencies of these subsystems differ by a factor of √ 2, the magnitude of the separation between the corresponding maxima in panel (a) of figure 3 also differs by this factor. Note also that the widths of these main maxima as well as the appearance of additional multiple maxima, seen in the figure, are related to the finiteness of the pulse duration; the distance between the latter is roughly given by 2π/T . The distinct influence, which the interatomic electron-electron correlations exert on the shape of the photoelectron spectra, is further highlighted in panel (b) of figure 3. It compares the energy spectra of photoelectrons emitted from our Li-He-He system and an isolated Li atom. In the latter case, there is only one main maximum, while the two main side peaks are missing, as one would expect (the additional multiple maxima are related again to the finiteness of the pulse duration). Conclusion We have studied resonant photoionization in a system A-B-B ′ consisting of three atoms, with two atoms B of the same element and one different atom A. We have shown that the mutual correlations among the atoms can largely enhance the ionization probability and distinctly modify also other properties of the process in a characteristic manner. In particular, as compared to the case of resonant photoionization in a two-atom system A-B, it has been demonstrated that the presence of a second atom B can (i) further enhance the photoionization process, (ii) change the time dependence of the ionization probability and (iii) move the side peaks in the photoelectron spectrum further apart.
Overspinning Kerr-MOG black holes by test fields and the third law of black hole dynamics We evaluate the validity of the weak form of the cosmic censorship conjecture and the third law of black hole dynamics for Kerr-MOG black holes interacting with test scalar fields. Ignoring backreaction effects, we first show that both extremal and nearly extremal Kerr-MOG black holes can be overspun into naked singularities by test fields with a frequency slightly above the superradiance limit. In addition, nearly extremal Kerr-MOG black holes can be continuously driven to extremality by test fields. Next, we employ backreaction effects based on the argument that the angular velocity of the event horizon increases before the absorption of the test field. Incorporating the backreaction effects, we derive that the weak form of the cosmic censorship and the third law are both valid for Kerr-MOG black holes with a modification parameter α ≲ 0.03, which includes the Kerr case with α = 0. Introduction The singularity theorems developed by Penrose and Hawking imply that the gravitational collapse of a body leads inevitably to the formation of singularities [1]. The presence of these singularities precludes the definition of a well-defined initial value problem and thereby ruins the smooth, deterministic structure of space-times in general relativity. The fact that the formation of singularities cannot be avoided led Penrose to propose the cosmic censorship conjecture, which in its weak form (Wccc) asserts that the gravitational collapse of a body always ends up in a black hole rather than a naked singularity [2]. The singularities should be hidden behind the event horizons of black holes, which disable their causal contact with distant observers. This way, the observers at asymptotically flat spatial infinity do not encounter any effects propagating out of the singularity, and the smooth structure of space-times is preserved, at least locally. As a concrete proof of the cosmic censorship conjecture has been elusive, it has become customary to attack the closely related, though not identical, problem of the stability of event horizons. In these problems one modifies the background geometry of extremal or nearly extremal black holes with test particles and fields, and checks if this modification can lead to the destruction of event horizons, which would imply that the singularities become naked. The first thought experiment in this vein was constructed by Wald [3]. There it was shown that test particles cannot overcharge or overspin an extremal Kerr-Newman black hole into a naked singularity. Following Wald, many similar tests of Wccc were applied to black holes in electro-vacuum spacetimes, involving test particles [4][5][6][7][8][9][10][11][12][13], and fields [14][15][16][17][18][19][20][21][22][23][24][25]. The possibility of violating Wccc by the quantum tunnelling of test particles was discussed in [26][27][28][29][30][31][32].
The case of asymptotically anti-de Sitter black holes was also evaluated in the scattering of test particles and fields [33][34][35][36][37][38]. The evolution of singularities indicate the failure of general relativity at short length scales where quantum effects are expected to dominate. In addition, the fact that one needs to invoke the presence of dark matter and dark energy at large length scales motivated the quest for modified theories of gravity. One of the promising candidates to fill this gap is the Scalar-Tensor-Vector Gravity theory developed by Moffat [39]. This dark matter emulating theory of modified gravity has proved compatible with current observations regarding the rotation curves of galaxies and the dynamics of galactic clusters [40][41][42][43]. It also predicts the existence of gravitational waves which lends credence to its validity as an alternative theory of gravity [44,45]. The scalar-tensor-vector theory of modified gravity has a stationary and axi-symmetric black hole solution which is known as the Kerr-MOG black hole [46]. Kerr-MOG black holes are characterised by the mass parameter M, angular momentum J = Ma and the dimensionless parameter α which determines the modification from the Kerr solution. The thermodynamics of Kerr-MOG black holes, their observable shadows, and the quasi-normal modes have been studied [47][48][49]. Recently, it was also shown that energy can be extracted from Kerr-MOG black holes by a Penrose process [50]. The validity of Wccc was tested for Kerr-MOG black holes in the process of the absorption of a point particle by Liang, Wei, and Liu [51]. It was found that -though the extremal black holes cannot -nearly-extremal black holes can be destroyed by point particles. However, the authors argued that the event horizon will be restored when one considers the effect of the adiabatic process. Another intriguing problem at this stage is to test the validity of Wccc in the case of test fields scattering off Kerr-MOG black holes. In this work we evaluate the stability of the event horizons of Kerr-MOG black holes as they interact with test scalar fields. We consider the cases of both extremal and near-extremal black holes. Our analysis exploits the fact that superradiance occurs when scalar fields scatter off Kerr-MOG black holes, which was recently derived by Wondrak, Nicolini, and Moffat [52]. We also evaluate the validity of the third law of black hole dynamics which states that a nearly extremal black hole cannot be driven to extremality by any continuous process. Kerr-MOG black holes, scalar fields, Wccc In Boyer-Lindquist coordinates, the background geometry of the Kerr-MOG space-time is described by the metric where The MOG parameter α is a dimensionless measure of the difference between the Newtonian gravitational constant G N and the additional gravitational constant G The ADM mass and the angular momentum of the Kerr-MOG black hole are given by [53] The function Δ can be re-written in terms of the ADM mass where we have set G N = 1 without loss of generality. The spatial locations of the horizons are the roots of Δ Notice that the parameters of the Kerr-MOG space-time represent a black hole surrounded by an event horizon provided that where the equality corresponds to the case of an extremal black hole. In this work, we start with a Kerr-MOG black hole satisfying the main criterion (7), and allow it to interact with a test scalar field that is incident on the black hole from infinity. 
In this type of gedanken experiments it is a crucial assumption that the interaction of the black hole with the test scalar field does not alter the structure of the background geometry, but leads to modifications in the ADM mass and angular momentum parameters. At the end of the interaction the field decays away, leaving behind a space-time with modified parameters. If the final parameters of the space-time does not satisfy the inequality (7), one can conclude that the event horizon has been destroyed in the interaction of the scalar field with the black hole; i.e. Wccc is violated. The scattering of test scalar fields by Kerr-MOG black holes has recently been studied by Wondrak, Nicolini, and Moffat [52]. Analogous to the Kerr case, a neutral wave can be separated into variables in the form The contribution of the scattering wave to the mass and angular momentum parameters of the space-time are related by Superradiance occurs for scalar fields scattering off Kerr-MOG black holes as one would naively expect from Kerr analogy. If the frequency of the incoming wave is below the superradiance limit, the wave is reflected back with a larger amplitude, i.e. there is no net absorption of the wave by the black hole. The superradiance limit ω sl for Kerr-MOG black holes is also derived in [52] ω sl = mΩ = ma r 2 + + a 2 (10) where Ω is the angular velocity of the black hole and r + is the spatial location of the event horizon. Overspinning extremal Kerr-MOG black holes By definition, an extremal Kerr-MOG black hole satisfies where we have defined δ in . We send in a scalar field to an extremal black hole from infinity, to check if it is possible to overspin the black hole into a naked singularity. The contribution of the incoming wave to the energy and angular momentum parameters of the black hole are related by (9). A necessary condition for overspinning to occur is that one should be able to adjust the parameters of the incoming wave such that δ fin < 0 at the end of the interaction. To be more precise, we demand that By substituting δ J = (m/ω)δ E, and using (11), the condition (12) can be simplified in the form At this stage we make a choice for the energy of the incoming field. The energy of the incoming field contributes to the ADM mass of the black hole. To ensure that the test field approximation remains valid, the energy of the field should be much smaller than the mass of the black hole. For that reason we choose δ E = M for the incoming field with 1, so that the test field approximation is justified. With this choice, the Kerr-MOG metric (1) will retain its structure after the scattering of this test field, with its mass and angular momentum parameters modified. Now we can derive the maximum frequency of an incoming wave, which can be used to overspin an extremal Kerr-MOG black hole If the frequency of a scalar field is below the maximum value determined in (14), the scalar field can overspin an extremal Kerr-MOG black hole into a naked singularity. However, this condition is not sufficient for overspinning to occur. For that purpose one should also demand that the incoming wave is absorbed by the black hole; i.e. the frequency of the wave is larger than the superradiance limit. These two conditions should be simultaneously satisfied for overspinning to occur. 
The superradiance limit for extremal black holes can be derived by substituting r + = M and and a = M/(1 + α) in (10) For overspinning to occur ω max should be larger than the superradiant limit ω sl , so that the frequencies in the range (ω sl , ω max ) can be used to overspin an extremal Kerr-MOG black hole. It is manifest in equations (14) and (15) that ω max will larger than ω sl , if α > . The extremal Kerr-MOG black holes can be overspun into naked singularities by scalar test fields provided that the deformation parameter α is larger than the small parameter . Overspinning nearly-extremal Kerr-MOG black holes In the last decade it was shown that though extremal Kerr black holes cannot be overspun, nearly extremal Kerr black holes can be overspun into naked singularities by a discrete jump by test particles [5] and fields [16]. Recently we have shown that overspinning becomes generic in the case of neutrino fields which do not satisfy the weak energy condition [25]. In this section we attempt to overspin nearly-extremal Kerr-MOG black holes by test scalar fields, which satisfy the weak energy condition and exhibit superradiant scattering. We parametrise a nearly-extremal Kerr-MOG black hole in the form As in the case of extremal black holes, we send in a test field from infinity and demand that δ fin < 0 at the end of the interaction, so that the final parameters of the space-time represent a naked singularity. where we have used that δ J = (m/ω)δ E. Again we choose δ E = M for the energy of the incident wave, and impose (17) to simplify (18). The condition that δ fin < 0 can be expressed as Using (19), one directly derives the maximum frequency ω max for a scalar field incident on a nearly extremal Kerr-MOG black hole parametrised as (17), that could overspin the black hole into a naked singularity As we mentioned in the case of extremal black holes, the condition (20) is not sufficient for overspinning to occur. We should also demand that the frequency of the incoming wave is larger than the limiting frequency for superradiance. If (ω sl < ω max ), there exists a range of frequencies (ω sl , ω max ) which can be chosen to overspin a nearlyextremal Kerr-MOG black hole. To compare ω sl and ω max , one has to express ω sl for a nearly-extremal black hole in terms of the small parameter . Notice that for the nearlyextremal Kerr-MOG black hole parametrised as (16) and which leads to Though it is not quite manifest in Eqs. (20) and (23), the maximum frequency for an incident wave to overspin a Kerr-MOG black hole is actually larger than the limiting frequency for superradiance. To clarify this we have set ω max = (m/2M) f (α) and ω sl = (m/2M)g(α) and plotted f (α) and g(α) for = = 0.01, in the Fig. 1. The frequencies ω max and ω sl almost coincide for α = 0. However, as α increases the range of frequencies that can be used to overspin a Kerr-MOG black hole enlarges. Backreaction effects In a seminal paper, Will has argued that when test particles approach the event horizon of the black holes, the angular velocity of the event horizon increases due to the dragging of inertial frames [54]. The change in the angular velocity is estimated to be where δ J is the angular momentum of the test particle or field, and M is the mass of the black hole. We should note that, the black hole itself does not acquire angular momentum before the absorption of the test particle or field, in this process. Fig. 1 The graphs of f (α) and g(α) for = = 0.01. ω max is larger than ω sl for α > 0. 
ω max and ω sl deviate from each other as α increases If this were the case, a nearly extremal black hole would be overspun before the absorption of the test particle or field, as the angular momentum parameter increases while the mass is kept constant. However, only the angular velocity of the event horizon increases before the absorption. In particular, one can have a black hole with zero angular momentum with an event horizon with angular velocity given by (24), as stated by Will. The change in the angular velocity leads to a backreaction in the scattering problems. Since the limiting frequency for superradiance increases, the absorption of modes that could lead to the over-spinning of the black hole can be prevented. The backreaction effects in this form were analysed by Hod in [28], who argued that the violation of Wccc due to the tunnelling of scalar particles derived in a previous work [26] could be prevented as the superradiant limit increases. In this section we calculate the backreaction effects for scalar test fields scattering off Kerr-MOG black holes. We start with the extremal case. Backreaction effects for extremal black holes In Sect. (2) we envisaged an extremal black hole interacting with a test field which carries energy δ E = M , and angular momentum δ J = (m/ω)δ E. We derived that there exists a range of frequencies ω sl < ω < ω max that lead to the overspinning of the black hole, where and To calculate the backreaction effects, let us consider an extremal black hole interacting with a test field with frequency ω that is arbitrarily close to but slightly less than ω sl . If α is larger than for the black hole, the test field will be absorbed by the black hole since ω > ω sl , and it will overspin the black hole into a naked singularity. However, as the test field approaches the black hole, the angular velocity of the event horizon will increase by an amount (δ J )/(4M 3 ). The limiting value for superradiance will increase by the same amount. If the modified value of the superradiance limit exceeds the frequency of the incoming field for ω ω max , it will exceed the incoming frequency even further for lower values ω sl < ω < ω max , since Δω will be larger. Therefore it is critical to calculate the backreaction effects for ω ω max , for a certain α. Setting ω ω sl , and δ J = (m/ω)δ E, one finds that Now we demand that the modified value of the superradiance limit exceeds the frequency of the incoming field. Explicitly we demand that For = 0.01 the condition (27) is equivalent to α 0.0299 ∼ 3 . We set m = 1 in (27), since these modes have the highest absorption probability, ignoring the modes with m = 0 which do not contribute to the angular momentum of the black hole [55]. Thus, the backreaction effects prevent the overspinning of extremal Kerr-MOG black holes if α ≤ 0.0299. It would be appropriate to elucidate the subject with a numerical example. Let us envisage an extremal Kerr-MOG black hole with α = 0.029, which is less than the critical value derived in (28). For this black hole we find that Ignoring backreaction effects, this black hole will be overspun into a naked singularity if it interacts with a test field satisfying δ E = M , and ω sl < ω < ω max . For example, if the black hole interacts with a test field of energy δ E = M , and frequency ω = (m/M)0.5, one finds that where we have used δ in = 0 and δ J = (m/ω)δ E. 
Since δ fin is negative, the event horizon is destroyed and the spacetime parameters represent a naked singularity after the absorption of the test field. However, before the absorption of the test field, the angular velocity of the horizon will increase by an amount Due to the dragging of the inertial frames, the superradiance limit will be modified. For m = 1: Since the modified value of the superradiance limit is larger than the frequency of the incident field (ω ω sl ), the field will not be absorbed by the black hole; thus the overspinning of the extremal black hole will be prevented. If we choose a smaller value for ω for the frequency of the incident wave in the range ω sl < ω < ω max , Δω and ω sl will be larger than the values derived in (31) and (32), thus ω sl will exceed the frequency of the incident wave even further. Therefore we conclude that the backreaction effects prevent the overspinning of the extremal Kerr-MOG black holes for α 0.0299. Backreaction effects for nearly-extremal black holes One can proceed the same way to calculate the backreaction effects for nearly extremal black holes. The maximum value for the frequency of a test field to overspin a nearly extremal Kerr-MOG black hole, and the superradiance limit was derived in (20) and (23). Again we demand that the modified value of the superradiance limit exceeds, the frequency of the incoming field, so that the overspinning is prevented. For nearly extremal black holes the increase in the superradiance limit is given by where we use δ E = M , and δ J = (m/ω)δ E for the incoming field. ( is used to parametrise the closeness to extremality.) Again we have substituted the critical value ω ω max , to derive an expression for Δω. As in the case of extremal black holes we demand that the modified value of the superradiance limit exceeds the frequency of the incoming fields for the challenging modes ω sl < ω < ω max . Setting = = 0.01, one can derive that ω sl + Δω will be larger than ω max if Therefore the backreaction effects prevent the overspinning of Kerr-MOG black holes for which α 0.0119. For a numerical example, let us consider a nearly extremal Kerr-MOG black hole with α = 0.011. For this black hole we find that Ignoring backreaction effects, this black hole would be overspun into a naked singularity by a test field with frequency ω sl < ω < ω max . However, the superradiance limit will be modified by an amount For m = 1, the modified value of the superradiance limit will be (1/M)0.49799, which is larger than ω max . Therefore a nearly extremal Kerr-MOG black hole parametrised as (16) and (17) cannot be overspun by a test field, provided that α 0.0119, if one considers the increase in the angular velocity of the event horizon due to the interaction with the field. We would like to note that the calculations in this section are valid for = 0.01. For smaller values of the values derived for Δω will approach the corresponding limit for extremal black holes and the backreactions will work for greater values of α approaching the value derived for extremal black holes. The validity of the third law for Kerr-MOG black holes The laws of black hole dynamics which were proposed by Bardeen, Carter, and Hawking are based on a connection between thermodynamics and black hole dynamics [56]. In this manner the area of the event horizon and the surface gravity are analogous to the entropy and the temperature, respectively. 
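As a numerical cross-check of the backreaction estimates carried out above, the following minimal sketch reproduces the extremal-case example. It uses only relations quoted in the text: the superradiance limit ω sl = ma/(r + 2 + a 2 ) with r + = M and a = M/(1 + α) for an extremal black hole, the test-field relation δJ = (m/ω)δE, and a horizon angular-velocity shift of δJ/(4M 3 ). The variable names, units (G N = c = 1) and the 0.01 M test-field energy are ours, taken from the worked example rather than from any published code.

```python
# Minimal sketch of the backreaction check for the extremal Kerr-MOG example above.
M, m = 1.0, 1.0
alpha = 0.029            # example value used in the text
dE = 0.01 * M            # test-field energy, 0.01 M as in the example
w = 0.5 * m / M          # incident frequency chosen in the example

a = M / (1.0 + alpha)    # extremal spin parameter quoted in the text
r_plus = M               # extremal horizon radius
w_sl = m * a / (r_plus**2 + a**2)     # superradiance limit before scattering

dJ = (m / w) * dE                     # angular momentum carried by the field
shift = m * dJ / (4.0 * M**3)         # shift of m*Omega from frame dragging

print(f"w_sl before scattering : {w_sl:.5f}")          # ~0.49980, below w, so the field is absorbed
print(f"w_sl after the shift   : {w_sl + shift:.5f}")  # ~0.50480, above w, so absorption is blocked
print(f"incident frequency w   : {w:.5f}")
print("overspinning prevented :", w_sl + shift > w)
```

Frequencies lower in the window carry a larger δJ and hence produce a larger shift, so absorption is blocked for those modes as well, which is the argument made in the text.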
The identification of the area of the event horizon with entropy entails that it should not be possible to decrease the area of the event horizon, which had been previously proved by Hawking assuming that no naked singularities exist in the outer region [57]. Accordingly, it should not be possible to drive a black hole to extremality which would be analogous to decreasing the temperature to absolute zero. After a decade Israel proved the third law of black hole dynamics which states a nearly extremal black hole cannot be driven to extremality in any continuous process [58]. An alternative approach by Dadhich and Karayan also justified the validity of the third law. They showed that the range of the allowed energy and angular momentum ratios to drive a Kerr black hole to extremality, pinches off as one gets arbitrarily close to extremality [59]. Currently, the validity of the third law is justified for Kerr, Kerr-Newman and Reissner-Nördstrom black holes. The derivations by Hubeny, Jacobson-Sotiriou, and Düztaş-Semiz that nearly-extremal black holes can be overcharged or overspun into naked singularities [4,5,16] should not be interpreted as counter-examples to the third law. These authors confirm that extremal black holes cannot be overcharged/overspun, which implies that nearly extremal black holes are driven beyond extremality by a discrete jump rather than a continuous process. As one gets arbitrarily close to extremality the allowed ranges of energy, angular momentum, and/or charge for the test particle or field vanishes in accord with the derivations of Dadhich and Karayan. The analysis for the nearly-extremal Kerr-MOG black holes in the previous section can be exploited to test the validity of the third law. Let us consider a Kerr-MOG black hole arbitrarily close to extremality, which corresponds to the case → 0. The maximum value for the frequency of an incoming scalar field to overspin this Kerr-MOG black hole, and the value of the superradiance limit approach their corresponding values for the extremal case as → 0 A Kerr-MOG black hole arbitrarily close to extremality would become extremal if it absorbed a test field with frequency ω = ω max given in (37), while it would be overspun if ω < ω max as discussed in the previous section. Contrary to the case of the Kerr family of solutions, the interval (ω sl , ω max ) does not pinch off as the black hole becomes arbitrarily close to extremality. Therefore it first appears that Kerr-MOG black holes can be continuously driven to extremality by scalar test fields with frequency ω max , which is larger than the superradiance limit ω sl even in the → 0 limit. However we can calculate the increase in the superradiance limit as → 0 which is the corresponding value derived for extremal black holes. The calculations for backreactions imply that if α 0.02999 ∼ 3 for nearly extremal black holes, they cannot be driven to extremality by test fields, since the modified frequency for the superradiance limit will exceed the frequency of the incoming field which prevents its absorption. Therefore the third law of black hole dynamics is valid for Kerr-MOG black holes which are characterised by a deformation parameter α 0.03. Conclusions In this work we applied a test of the weak cosmic censorship conjecture in the interaction of Kerr-MOG black holes with test fields. We restricted ourselves to the case of scalar fields the energy-momentum tensor of which obey the weak energy condition. 
Our analysis also exploits the fact that superradiance occurs for scalar fields scattering off Kerr-MOG black holes [52]. Superradiance is essential in a scattering process as it determines the lower limit for the frequency of a wave to ensure that it is absorbed by the black hole. In the absence of such a limit, the modes carrying low energy and relatively high angular momentum can also be absorbed by the black hole, which reinforces the overspinning of the black hole. We have first shown that both extremal and nearly-extremal Kerr-MOG black holes can be overspun by test scalar fields with a frequency slightly above the superradiance limit. The range of the allowed frequencies for the incoming field is extended as the modification parameter α increases. Next we employed the backreaction effects based on the increase in the limiting frequency for superradiance, which was suggested by Will [54]. We showed that the increase in the superradiance limit prevents the overspinning of extremal black holes for which α 0.03 ∼ 3 . We derived this relation by setting = 0.01 in the inequality (27). The inequality (27) does not allow us to find an analytical solution for α in terms of . However, one can numerically verify that the relation α 3 continues to hold for smaller values of . (A larger value for would disrupt the test field approximation.) The corresponding value for nearly extremal black holes turns out to be lower: α 0.012. However, we noted that it approaches the critical value derived for extremal black holes as the black hole approaches extremality. In a previous work we had found that -though extremal Kerr black holes cannot -nearly extremal Kerr black holes can be overspun by test fields [16]. We would like to note that the derivation of backreaction effects carried out in this work directly applies to Kerr black holes with α = 0. Therefore the backreaction effects based on the argument by Will [54] also reassure the validity of Wccc for Kerr black holes interacting with test fields. One would also expect the third law of black hole dynamics to hold for Kerr-MOG black holes analogous to the Kerr case. Our analysis for the nearly-extremal Kerr-MOG black holes implies that the allowed range of frequencies for overspinning to occur does not pinch off even in the → 0 limit. Thus, neglecting backreaction effects, it first appears that a nearly extremal Kerr-MOG black hole that is arbitrarily close to extremality can be continuously driven to extremality by absorbing a test field with frequency ω max . However, the backreaction effects imply that test fields with frequency ω = ω max will not be absorbed by nearly extremal Kerr-MOG black holes arbitrarily close to extremality, provided that α 0.03. Therefore the third law of black hole dynamics is also valid for Kerr-MOG black holes with α 0.03, which includes the Kerr case with α = 0. Data Availability Statement This manuscript has no associated data or the data will not be deposited. [Authors' comment: This manuscript has no associated data since it is purely theoretical.] Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. 
The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecomm ons.org/licenses/by/4.0/. Funded by SCOAP 3 .
2022-12-01T15:55:31.660Z
2020-01-01T00:00:00.000
{ "year": 2020, "sha1": "d6ee05ed447d3801df40c41db7ca5d2e4d854dee", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1140/epjc/s10052-020-7607-5.pdf", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "d6ee05ed447d3801df40c41db7ca5d2e4d854dee", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [] }
212651695
pes2o/s2orc
v3-fos-license
Low CSF/serum ratio of free T4 is associated with decreased quality of life in mild hypothyroidism – A pilot study Highlights • General health, according to the Likert scale, was considerably affected even in mild hypothyroidism. • The level of T4 in the brain, expressed as the CSF/serum f-T4 ratio, was associated with decreased general health. • Depressive symptoms, according to the MADRS scale, correlated with the CSF/serum f-T4 ratio. • T4 might have a direct effect in the brain, and not only as a storage hormone for the more active T3. • Further studies on the pharmacokinetics of CSF thyroxine might be of benefit, especially in patients not feeling well. Introduction Primary hypothyroidism (PH) is common worldwide, with a prevalence in Sweden of 4.5% for individuals over 50 years of age [1] and a female/male ratio of 5:1. The disease develops insidiously and involves many non-specific symptoms, as thyroid hormones affect virtually all cells in the body. Impaired cognition, deepened voice, depressive symptoms, dry and coarse skin, chilled sensation, muscle weakness, constipation, and weight gain are observed even in mildly hypothyroid patients [2,3]. The first symptoms are subtle, slowly worsening over time. Many patients initially believe that they are suffering from normal age-related symptoms such as general fatigue and impaired vitality. An increased level of thyroid stimulating hormone (TSH) in serum is used as a primary indicator for suspected PH [4]. In addition, the diagnostic criteria of PH include typical symptoms [5] as well as the levels of thyroid hormones: free thyroxine (f-T4), and sometimes also total T4, below or in the lower range, and normal or reduced levels of serum free triiodothyronine (f-T3), and sometimes total T3 [4]. Thyroxine (T4), which is considered to be a less active pro-hormone, is deiodinated to the metabolically more potent triiodothyronine (T3). The deiodination takes place in virtually all tissues [6]. TSH stimulates the deiodination via the type I and type II iodothyronine deiodinase enzymes (DIO1 and DIO2) [7]. In the CNS, DIO2 is more highly expressed than DIO1 [6]. There are also several thyroid hormone receptors on testicular cells, endothelial cells and erythrocytes, and also in the CNS, with a higher, or exclusive, affinity for T4 and with a non-genomic effect [8,9]. Three previous clinical studies have measured CSF levels of thyroid hormones in the hypothyroid state [10][11][12], but, to our knowledge, no report describes the relationship between thyroid hormones in CSF and quality of life (QoL). In transgenic mice lacking the ability to convert T4 to T3 in the brain, as well as in in vitro studies of receptor affinity [13][14][15][16], brain functioning has been found to be dependent on T4. The levels of f-T3 in the rodent CNS are largely dependent on the local deiodination of f-T4, whereas f-T4 can pass the blood-brain barrier (BBB) [13], and several studies have shown that T4 is the main thyroid hormone passing the BBB [16][17][18][19][20]. Therefore, the aim of the present study was to investigate whether T4 levels in CSF are associated with QoL and to elucidate whether the levels of serum TSH, f-T4 and f-T3 reflect those of the CSF in untreated mild primary hypothyroidism. Study population In this pilot study we investigated the relationships between f-T3 and f-T4 in serum and CSF in newly diagnosed, untreated PH and in age- and sex-matched healthy subjects. In addition, we evaluated if thyroid hormone levels were associated with QoL. 
Subjects with hypothyroidism (HYP, n = 25, 20 women) were recruited at the Endocrinology clinic, Halland Central Hospital, Sweden (Table 1). They were asked by their primary physician whether they were interested in being enrolled in the study, and were mainly recruited by the same physician. During a certain period, she asked all patients with newly diagnosed PH, and only 2 persons declined, due to lack of time. The subjects displayed at least two pathologically increased TSH values (> 4.0 mIE/L; reference range: 0.40-4.0 mIE/L) prior to admission. In addition, all subjects had an f-T4 level in the lower half of the normal range of 11-22 pmol/l (all patients < 15, Md = 11.7 pmol/l). Considering the moderate elevation of TSH, we also used the Zulewski scale [5], > 5 points, to ensure that the subjects were clinically hypothyroid. Thus, the cause of hypothyroidism in the 25 subjects was considered to be primary hypothyroidism, with fourteen of them positive for TPO-antibodies. Patients receiving anti-thyroid medication or thyroid hormones, or with a recent use of iodine contrast, were excluded. Pregnant women were also excluded. Recent onset of fatigue, hypersomnia or lethargy was the main reason for 22 of the 25 subjects to consult their primary physician. All 25 subjects did, however, have hypothyroid symptoms. Aside from hypothyroidism, one subject was previously diagnosed with type 1 diabetes, one with pernicious anemia and another with rheumatoid arthritis. The proportion of other autoimmune diseases was consistent with previously reported studies [21]. Four subjects were medicated with estrogen hormone replacement (oral estrogen and patch) and two with hormonal contraceptives (drospirenone/etinylestradiol). One subject received glucocorticoid treatment, i.e., budesonide inhalation td. None of the 25 subjects fulfilled any criteria of a depressive diagnosis. The healthy controls (CON, n = 25, 20 women) were recruited by posters in the staff rooms at the medical clinic, the ambulance service, and the University of Halmstad. Though considering themselves healthy, one had mild asthma, one arthrosis, one hypertension, one diverticulitis, one rheumatoid arthritis, and one migraine. Among them, four were taking hormonal contraceptives, one an ACE-antagonist, one a beta-blocker and a statin, and another received bulk-forming laxatives. All had thyroid hormone levels within the normal range, with a Zulewski score of 0-4 points. There was no significant difference in age or body surface area (BSA) between the groups (Table 1), and the study procedures were identical in both groups. All subjects in both groups were of Caucasian origin. Fourteen subjects in the HYP group, and four subjects in the CON group, had increased levels of thyroid peroxidase (TPO) antibodies (> 15 U/ml). In the HYP group, those with TPO-antibodies were similar in all aspects of clinical data compared to those without (see comparison in Table 1), and the prevalence is in accordance with a previous report on patients with this TSH range [22]. Body weight and area Body weight was measured in the morning to the nearest 0.1 kg, and height was measured barefoot to the nearest 0.01 m. Body surface area (BSA) was calculated according to the Du Bois formula. Table 1 Basic characteristics of Hypothyroid subjects (HYP) and Healthy controls (CON). Negative and positive TPO antibodies are divided into two columns for comparison. Sampling of blood and CSF Lumbar puncture was performed at 08:00-08:30 a.m. after a minimum of 8 h fasting. 
Venous blood sampling was obtained for analyses of TSH, f-T3, f-T4, and TPO antibodies. The lumbar puncture was performed according to a standardized procedure comprising puncture at the L4-5 interspace in a seated position. CSF was drawn with a disposable needle (Becton-Dickinson, Oxford, UK, quincke: 0.70x75 mm, 22 GA), and collected in polypropylene tubes. A total of 12 mL CSF was obtained and divided in six 2 mL aliquots and in case of bleeding the first volume of CSF was not used for analysis. CSF was immediately transported to the local laboratory for centrifugation at 2000g at +4°C for 10 min. The supernatant was pipetted off, gently mixed to avoid possible gradient effects, and stored in polypropylene tubes at −70°C pending biochemical analyses, without being thawed and refrozen. Biochemical procedures All samples were analyzed in the same assay run for each specimen to minimize the analytical inter-assay variation. TSH, f-T3 and f-T4 were analyzed by dissociation-enhanced lanthanide fluoro-immunoassays [23] (Auto DELFIA, Wallac Oy, Turkku, Finland). The intra-assay coefficient of variation (CV%) for TSH was < 4.4% for all levels down to 0.1 mIE/l, and the intra-assay coefficient for f-T4 and f-T3 were < 6.1% for all levels down to 5 and 2 pmol/l, respectively. TPO-antibodies were analyzed by chemiluminescent microparticle immunoassay (CMIA) (Architect i2000, Abbott, Chicago, USA) and the intra-assay coefficient of variation was < 10% for all levels down to 5 U/ml; we considered > 15 IU/mL as a positive result. Quality of life After the other study procedures had been performed, all subjects completed the self-assessment of general health using a Likert scale (GHLS) [24]. The HYP group also completed used the Montgomery-Åsberg Depression Rating Scale (MADRS) questionnaire to evaluate symptoms of depression [25]. Statistical methods For statistical analyses SPSS, Version 24.0, and Matlab R2016b were used. The descriptive statistical results are presented as the mean ± SD, or median with 1st (Q1) and 3rd (Q3) quartile. A non-parametric statistical approach was used in all the statistical analyses. Between-group analyses were performed using the Mann-Whitney U test for continuous variables and chi-square tests for categorical variables. Correlations were investigated using the Spearman rank correlation coefficient. A twotailed P-value < 0.05 was considered significant. Ethical considerations The study was approved by the Regional Ethical Committee in Lund, Sweden (2011/1 and 2012/11). Oral and written informed consent was obtained from all participants. Serum f-T3 (pmol/L) did not differ significantly between the two groups (Table 1). CSF thyroid hormone levels CSF levels of thyroid hormones are given in Table 2. Neither CSF f-T4 nor CSF f-T3 differed significantly between the HYP and CON groups. For CSF f-T4, the Md was 9.13 pmol/L in the HYP group and 9.78 pmol/L in the CON group. There was no significant difference between the HYP (Md: 0.77) and CON (Md: 0.75) groups in the CSF/serum f-T4 ratio. CSF/serum f-T3 ratio was also similar in the two groups ( Table 2). Quality of life and depression Self-assessed health was significantly impaired in HYP group compared to that in the CON group for the Likert scale (median 65 vs 90, p < 0.001) (Fig. 1). In the HYP group, the MADRS score (Md = 10, Q1:5, Q3:16, M = 11.84 ± 7.93) was considerably higher than the normal range for healthy individuals, according to a review article [24] with a mean of 4.0 ± 5.8. 
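As a small illustration of the nonparametric approach described under Statistical methods, the sketch below computes a Spearman rank correlation and a Mann-Whitney U test with SciPy. The study itself used SPSS and Matlab, and the arrays here are placeholder values chosen for the example, not study data.

```python
# Sketch of the nonparametric tests described under Statistical methods; placeholder values only.
from scipy import stats

# Hypothetical CSF/serum f-T4 ratios and GHLS scores for a few subjects
ratio = [0.70, 0.74, 0.78, 0.81, 0.85, 0.88]
ghls = [45, 55, 60, 70, 80, 85]

rho, p = stats.spearmanr(ratio, ghls)          # Spearman rank correlation
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")

# Between-group comparison (e.g., GHLS in HYP vs CON), hypothetical values
hyp = [65, 60, 70, 55, 75]
con = [90, 85, 95, 88, 92]
u, p_u = stats.mannwhitneyu(hyp, con, alternative="two-sided")
print(f"Mann-Whitney U = {u}, p = {p_u:.3f}")
```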
Table 2 Selected basic statistical characteristics of results in Hypothyroid subjects (HYP) and Healthy controls (CON). CSF = Cerebrospinal fluid. f-T3 = free component of triiodothyronine, f-T4 = free component of thyroxine. Correlations between serum and CSF thyroid hormone levels As expected, in the HYP group, CSF f-T4 correlated strongly and positively with serum f-T4 (r = 0.72, p < 0.001), and CSF f-T3 correlated positively with serum f-T3 (r = 0.56, p < 0.01). No correlation was found between f-T3 and f-T4 in either serum or CSF. Correlations between serum TSH and thyroid hormone levels In the HYP group, a higher serum TSH level correlated with lower f-T4 in CSF (r = −0.46, p < 0.05), but only tended to correlate with serum f-T4 (r = −0.39, p = 0.053). Serum TSH did not correlate with f-T3 in serum or in CSF. There was a correlation between serum TSH and the serum f-T3/f-T4 ratio (r = 0.40, p < 0.05). In the CON group there was no correlation between serum TSH and f-T4 or f-T3, but there was a correlation between serum TSH and the serum f-T3/f-T4 ratio (r = 0.42, p < 0.05). Correlations between hormone levels and quality of life The CSF/serum f-T4 ratio correlated positively with the self-assessed general health Likert scale (GHLS) in the HYP group (r = 0.46, p < 0.05), but not in the CON group (Fig. 2a and b). If the three patients in the HYP group with another autoimmune disease were excluded, the positive correlation was even stronger in the remaining 22 subjects (r = 0.71, p < 0.001). The 95% Confidence Interval (CI) for the CSF/serum f-T4 ratio in the HYP group was 0.7856 ± 0.0407. The 95% CI for GHLS in the HYP group was 62.72 ± 7.82. The 95% CI for the CSF/serum f-T4 ratio in the CON group was 0.7206 ± 0.0349, and the 95% CI for GHLS in the CON group was 90.24 ± 3.83. In the CON group, there were no significant correlations between GHLS and any of the thyroid hormone measurements. A decreased CSF/serum f-T4 ratio correlated with an increased number of depressive symptoms according to MADRS (r = −0.56, p < 0.01) in the HYP group (Fig. 3). If we excluded the three patients in the HYP group with another autoimmune disease, the correlation remained (n = 22; r = −0.54, p < 0.01). Multiple linear regression We evaluated if the Zulewski score, an inclusion criterion, affected our results in the HYP or the CON group. The Zulewski score did not correlate with the CSF/serum f-T4 ratio in either of the two groups using bivariate correlation analysis. In multiple regression analysis, using the CSF/serum f-T4 ratio as the dependent variable and GHLS and the Zulewski score as independent variables, there was also no relationship between the Zulewski score and the CSF/serum f-T4 ratio in the HYP or CON groups. Finally, in an additional multiple regression analysis in the HYP group, in which the CSF/serum f-T4 ratio was the dependent variable and MADRS and the Zulewski score were the independent variables, the Zulewski score did not impact the CSF/serum f-T4 ratio. Thus, these analyses suggest that the Zulewski score did not influence the relations between the CSF/serum f-T4 ratio and QoL. Theoretical considerations GHLS showed a significantly lower QoL in the HYP group (p ≤ 0.001; Fig. 1). Furthermore, the MADRS score in this study was much higher than in healthy subjects in previous reports [26][27][28]. These results indicate generally impaired health. 
Our results suggest that f-T4 in CSF is important for general health in PH patients, as exhibited by the positive correlation between the CSF/serum f-T4 ratio and the GHLS (r = 0.46, p < 0.05, or r = 0.71, p < 0.001 when excluding other autoimmune diseases). This is also congruent with the strong negative correlation (r = −0.56, p < 0.01) found between this ratio and MADRS. Although f-T3 is considered the more potent thyroid hormone, we did not detect any significant correlations between serum or CSF levels of f-T3 and the QoL scales. The monocarboxylate transporter 8 (MCT8) [16,29] is a specific transporter for thyroid hormones (TH) into the CNS. Allan-Herndon-Dudley syndrome (AHDS) is a disease specifically associated with an MCT8 mutation [30]. In AHDS, the deficient transport of TH into the CNS is considered to be the cause of the mental retardation [17,19]. In addition, transport of TH into the CNS, in particular T4, is facilitated by organic anion transporter polypeptide (OATP1C1), which is expressed in capillaries throughout the brain [16]. The primary TH that crosses the BBB is T4 [13,18], which is due to its higher binding to these transporters. In our study, the correlation between the CSF/serum f-T4 ratio and QoL might provide some additional support that the passage of T4 into the CNS is of importance for healthy brain function. We found an unambiguous correlation (r = 0.72, p < 0.001) between serum and CSF levels of f-T4 and a less pronounced correlation (r = 0.55, p < 0.01) between f-T3 in serum and CSF. The weaker association between the CSF and serum levels of f-T3 might be in line with the theory that the main part of T3 in the CNS derives from local production, which to a major extent takes place in the hypothalamus. The hypothalamic astrocytes and tanycytes express DIO2, and thereby convert T4 to T3 [14]. In some mouse models with inactivation of DIO2, the animals were grossly physiologically and behaviorally normal and did not show any signs of hypothyroidism in the brain or in the body [31]. However, these results conflict with a recent study in mice, which reported that a DIO2 polymorphism affected brain function [32]. Thyroid hormones exert their effect by binding to the thyroid hormone receptors (TRs). The TRβ2 isoform is mainly expressed in the hypothalamus (HPT axis), cone cells of the retina and auditory cells of the cochlea [33], and hence is of little relevance for the other parts of the brain. TRα1 has a much stronger response to T4 than TRβ1 [15], and TRα1 constitutes 70-80% of all TR expression in the adult brain [34]. T4 has a response to TRα1 comparable to that of T3 [15]. This would support our hypothesis of T4 being of greater importance in the brain than merely being a pro-hormone that is converted into T3. Also, thyroid hormones, and particularly T4, have a non-genomic effect on several cell types, including cells in the CNS [8,9]. Much is known about this regarding testicular function [35,36], erythrocytes and endothelial cells, but more studies are needed on this non-genomic effect in the CNS. Clinical implications The results of the present study suggest that in primary hypothyroidism, a low CSF/serum f-T4 ratio is of importance for some of the most plaguing symptoms, which are supposed to be of central nervous origin. 
Therefore, simultaneously measuring f-T4 in serum and in CSF, to get the ratio of individuals with a persistent severe psychiatric disturbance, despite a normalized serum fT4, may give us a pathophysiological explanation. However, larger studies are required to confirm our results. In addition, the correlations between thyroid hormones in CSF and QoL need to be investigated in more detail before the clinical relevance of our findings can be evaluated. Unfortunately, there are no other therapeutic options available today. We found that high serum TSH in the HYP group correlated with low CSF f-T4 (r = −0.46, p < 0.05), and only tended to correlate with low serum f-T4 (r = −0.39, p = 0.053). These findings seem to be plausible as the concentration of f-T4 in the CSF might influence the thyroid releasing hormone (TRH) production in the hypothalamic nuclei (in parvocellular cells in the wall of the third ventricle belonging mostly to the periventricular hypothalamic area (PVH) and arcuate nucleus (ARC)), which controls the pituitary production of TSH to a greater extent than the circulating level of f-T4. However, this must be confirmed in a larger study. Limitations MADRS was not assessed in CON subjects because of their reported high index of perceived QoL according to SF-36 and general health scales. Thus, they were assumed to be healthy, and there was no reason to believe that our healthy controls would be different from those in other studies. A review article by Zimmerman [26] presented a mean MADRS score in healthy controls of 4.0 ± 5.8, which was considerably different from our HYP-group (Md = 10, Q1:5, Q3: 16). Funding and support This work was granted by Södra Sjukvårdsregionen and Region Hallands forskningsfond. Contribution Anders Funkquist is a primary investigator, data collection, data analysis incl statistical analysis, primary writer and planning of the study. Anders Bengtsson was involved in data collection and planning of the study. PM Johansson was involved in setting up the study, involved in the planning. Johan Svensson was involved in setting up the study, involved in the writing of the article. Per Bjellerup did the analysis of thyroid hormones. Kaj Blennow did additional analysis that mostly is part of next studies. Birger Wandt was involved in setting up the study. Involved in the writing of the article. Stefan Sjöberg is guarantor for the study. Involved in all processes from setting up the study to data analysis and writing the article. Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
2020-02-13T09:20:07.625Z
2020-02-04T00:00:00.000
{ "year": 2020, "sha1": "8e96c8dcd21018a14bb6389a924d21408bb5fdda", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.jcte.2020.100218", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "9de0cb815dd123ba3584724a3ada467a35176428", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
44247095
pes2o/s2orc
v3-fos-license
Measurement and Analysis of Channel Characteristics in Reflective Environments at 3.6 GHz and 14.6 GHz Recently, high frequency bands (above 6 GHz) have attracted more attention for the next generation communication systems due to the limited frequency resources below 6 GHz. To reveal the influence of frequency on propagation channels, channel characterization results at the 14.6 and 3.6 GHz bands based on measurements in an indoor scenario and in a reverberation chamber are presented. The measurement results indicate minimal differences in path loss exponents, shadow fading standard deviation, root-mean-square (RMS) delay spread and coherence bandwidth for the two frequency bands, while the path loss at the 14.6 GHz band is clearly larger than that at the 3.6 GHz band. Furthermore, the underlying factors that influence the channel characteristics are investigated. It is found that the RMS delay spread is independent of the frequency in the scenario where free space propagation and/or reflection are the main mechanisms. Measurements in the reverberation chamber verify this inference. Introduction Achieving a high data rate is one of the key challenges for the next-generation wireless communication systems. Hence, large frequency bandwidth is required. Such large bandwidth cannot be achieved on radio links operating at low frequency bands (below 6 GHz). Therefore, high frequency bands (above 6 GHz) have been widely considered for future applications [1]. It is well known that the design of a wireless system requires a deep understanding of the propagation channels. However, most available channel models, e.g., the WINNER II/+ and International Mobile Telecommunications-Advanced models, have been designed for the frequency range below 6 GHz. Thus, Mobile and wireless communications Enablers for the Twenty-twenty Information Society (METIS) indicates that measurement data above 6 GHz is crucial for the needed extensions/modifications of available channel modeling [2]. In this paper, measurement-based characterization for the corridor scenario at 14.6 GHz is presented. To reveal the influence of frequency on the channel characteristics, characterization for the same scenario at 3.6 GHz is also presented. As one of the candidate frequency bands for the fifth generation (5G) system, the 14 GHz band has more than 0.7 GHz of bandwidth to be allocated [3]. Some 5G tests using this frequency band demonstrate data rates up to 4.5 Gbps [4]. The 3.6 GHz band is also suitable for providing capacity to fulfill the increasing traffic requirements, especially for small-cell coverage with denser network deployment [3]. 
Stochastic channel models reproduce the statistics of the channel parameters based on realistic measurements and have been widely used [5][6][7]. Based on measurements, several papers have provided comparisons between different frequency bands [8][9][10]. In [8], a multi-frequency outdoor-to-indoor path loss model for different frequency bands ranging from 0.8 GHz to 28 GHz was presented. In [9], the path loss and wideband characteristics were measured at 1.7 GHz and 60 GHz for small cell systems. Omnidirectional and directional antennas were used for 1.7 GHz and 60 GHz, respectively. The difference between the antennas limited the effectiveness of the comparison. In [10], measured data and empirical models for 2-10 GHz and 57-66 GHz were presented. In that work, the measurements were performed in a laboratory and the distance between the transmitter and receiver was less than 4 m. The scenario is different from our work and leads to different results on the frequency dependency for some channel characteristics (for details, see the Comparative Analysis section). This paper focuses on reflective environments where free space propagation and/or reflection are the main mechanisms. A comparative analysis of the channel characteristics between the low and high frequency bands is presented. The main contributions of our work are summarized as follows: • A detailed insight into the channel characteristics of the path loss, RMS (root-mean-square) delay spread and coherence bandwidth in the corridor scenario with directional antennas at the 3.6 GHz and 14.6 GHz frequency bands is presented. • A comparative analysis is made for statistical metrics of the propagation channel between the two frequency bands. • The underlying factors that lead to the influence of frequency on the channel characteristics are investigated. It is found that, in the corridor, the differences between the two frequency bands in RMS delay spread are minimal (less than 1 ns). Measurements in the reverberation chamber (RC) verify this inference. The remainder of this paper is organized as follows: Section 2 describes our measurement system and measurement scenario. Section 3 presents the measurement results. Section 4 focuses on the comparative analysis. In Section 5, some auxiliary experiments in an RC are described to support the analysis. Section 6 concludes this paper. Measurement System The wideband frequency responses of the propagation channel were measured with a measurement system based on a vector network analyzer (VNA) (MS2038C, Anritsu, Morgan Hill, CA, USA). The scattering parameter S 21 was measured over a 0.8 GHz bandwidth centered at 3.6 GHz and 14.6 GHz, respectively. The VNA was set to transmit 1024 continuous wave tones uniformly distributed over each frequency band, which results in a frequency step of about 0.78 MHz. The frequency resolution further yields a maximum excess delay of 1280 ns, i.e., a maximum distance range of approximately 384 m. The intermediate-frequency (IF) averaging bandwidth was set to 300 Hz. To reduce noise, six sweeps were averaged. 
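The frequency-plan figures quoted above can be reproduced with a few lines of arithmetic. The sketch below (variable names are ours) also relates the 0.17 m spacing used later for spatial averaging to the two carrier wavelengths.

```python
# Consistency check of the sampling figures quoted in the measurement description.
c = 3e8                          # speed of light, m/s

bandwidth, n_tones = 0.8e9, 1024
df = bandwidth / n_tones         # frequency step, ~0.78 MHz
tau_max = 1.0 / df               # unambiguous (maximum) excess delay
print(f"frequency step   : {df / 1e6:.2f} MHz")
print(f"max excess delay : {tau_max * 1e9:.0f} ns ({c * tau_max:.0f} m)")  # 1280 ns, 384 m

spacing = 0.17                   # m, separation of the spatial-averaging positions
for fc in (3.6e9, 14.6e9):
    print(f"{fc / 1e9:.1f} GHz: 0.17 m is about {spacing / (c / fc):.1f} wavelengths")
# roughly 2 wavelengths at 3.6 GHz and 8 wavelengths at 14.6 GHz, as stated in the text
```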
The antennas used in the measurements are directional horn antennas. The radiation patterns of the antennas were measured in an anechoic chamber at the National Institute of Metrology, Beijing, China, and are given in Figure 1 and Table 1. Although the beamwidths at the two frequency bands are not exactly the same, given the fact that the corridor is a very narrow and closed space, the angles of departure and angles of arrival for all major multipath components are expected inside the main beamwidth at both frequencies, which means that the difference of the beamwidths will not effectively influence the channel properties at the two frequencies under study. The use of directional antennas is common at high frequency bands, e.g., [11,12]. The antennas were mounted on wooden tripods with a small section of metal behind the directional antennas, which does not influence the pattern. Calibration was performed in an anechoic chamber to compensate for the frequency response of the antennas and cables, as suggested in [13]. These two measured bands were free from interference during the measurement campaign, as confirmed by measurements with a spectrum analyzer (PSA E4445A, Agilent, Santa Clara, CA, USA). A summary of the measurement parameters is listed in Table 1. It is worth noting that the channel characteristics are extremely sensitive to antenna placements [14]. To extract the influence of frequency on the channel characteristics, the measurements for the two frequency bands should be carried out at the exact same antenna placement and in the exact same scenario. To this end, the measurement process is designed as follows. Firstly, the VNA is calibrated by using a "response" calibration [12]. The calibration is carried out for the two frequency bands, respectively. In addition, the calibration data for the two frequency bands are stored in the internal memory of the VNA. For each measurement position, the calibration data for the 3.6 GHz band are recalled and the measurement for the 3.6 GHz band is executed. Then, immediately, without moving the antennas, the calibration data for the 14.6 GHz band are recalled and the measurement for the 14.6 GHz band is executed. In this way, the measurements for the two frequency bands are carried out at the exact same antenna placement, in the exact same scenario, and almost at the exact same time. This design effectively reduces the uncertainty in the comparative analysis of channel characteristics between the two frequency bands. Measurement Scenario The measurements were carried out in the second-floor corridor of the 18th building, National Institute of Metrology, Beijing, China. The building is constructed of concrete with faculty offices and laboratories, as shown in Figure 2. The corridor is 20 m in length, 2.4 m in width and 2.8 m in height. The side walls, floor and ceiling of the corridor are constructed of concrete. The floor is covered with ceramic tiles and the ceiling is covered with polystyrene tiles. The corridor has a number of wooden doors and some windows. In the measurements, the doors and the windows were closed and no people were moving around. In order to analyze the influence of antenna location (whether near the walls) on the channel characteristics, we conducted another measurement in the same scenario. All the measurement settings were the same as in the previous measurements, but the TX and RX antennas were placed 0.5 m from the wall. As shown in Figure 3, the TX and RX are denoted by the hollow star and hollow circles, respectively. 
In order to achieve spatial averaging, five spatially separated positions were measured at each receiver location, as suggested in [15]. One receiver position is at the center of one specific location, surrounded by the remaining four positions as shown in Figure 4 (top view); these positions are separated by about 0.17 m, which is about two wavelengths for 3.6 GHz and eight wavelengths for 14.6 GHz. Assuming that the phase of the received signal is uniformly distributed, it is reasonable to average out the small-scale fading effect via these five positions [15] in the Ricean channel. Path Loss Exponent and Shadow Fading In the widely accepted power-law path loss model, the change of the path loss with the TX-RX distance is depicted by the path loss exponent. The exponent and the shadow fading are extracted from the measured results by using the following expression: PL(d) = PL(d_0) + 10 n log_10(d/d_0) + X, (1) where d is the distance between the TX and RX, and PL(d) is the path loss (small-scale fading has already been filtered out) at d. d_0 denotes the reference distance and n is the path loss exponent. X is the shadow fading, which is well fitted by the log-normal distribution by passing Kolmogorov-Smirnov, Anderson-Darling, and chi-squared tests [16]. By using the least-square criterion, the path loss exponent n can be obtained. Delay Spread The RMS delay spread is widely used to characterize the delay dispersion of the channel. By taking the inverse Fourier transform of the measured transfer function, the power delay profile (PDP) and RMS delay spread of the channel are obtained. The RMS delay spread is defined as the square root of the second central moment of the PDP [17] and expressed as τ_RMS(d) = sqrt( Σ_i PDP(d, τ_i) τ_i^2 / Σ_i PDP(d, τ_i) − ( Σ_i PDP(d, τ_i) τ_i / Σ_i PDP(d, τ_i) )^2 ), (2) where τ_i and PDP(d, τ_i) represent the delay and corresponding delay power of the i-th multipath component (MPC) measured at the distance d, respectively. When computing the RMS delay spread, we set the power of all the MPCs below a threshold to zero to reduce the impact of the noise. The threshold is set to be 6 dB above the noise floor [18]. Coherence Bandwidth The coherence bandwidth is a statistical measure of the range of frequencies over which two frequency components have a strong potential for amplitude correlation. The coherence bandwidth B_ρ is calculated as the smallest frequency separation at which the frequency correlation function falls to the correlation level [19], B_ρ = min{ δf : |R_H(δf)| ≤ ρ }, (3) where R_H is the frequency correlation function, δf is the frequency separation, and ρ is the correlation level. 
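To make the three definitions above concrete, the sketch below shows how they could be computed from measured data, following the conventions stated in the text (a least-squares fit of the power-law model, a threshold 6 dB above the noise floor for the PDP, and a correlation-level threshold for the coherence bandwidth). The function and variable names are ours, not taken from the measurement software.

```python
# Sketch of the three channel metrics defined above; illustrative only.
import numpy as np

def path_loss_exponent(d, pl_db, d0=1.0):
    """Least-squares fit of PL(d) = PL(d0) + 10*n*log10(d/d0) + X; returns (n, std of X)."""
    x = 10.0 * np.log10(np.asarray(d) / d0)
    A = np.column_stack([np.ones_like(x), x])
    (pl_d0, n), *_ = np.linalg.lstsq(A, np.asarray(pl_db), rcond=None)
    shadow = np.asarray(pl_db) - (pl_d0 + n * x)       # shadow-fading residuals
    return n, shadow.std()

def rms_delay_spread(pdp, tau, noise_floor_db, threshold_db=6.0):
    """Square root of the second central moment of the PDP; bins below the threshold are zeroed."""
    floor = 10.0 ** ((noise_floor_db + threshold_db) / 10.0)
    p = np.where(np.asarray(pdp) > floor, pdp, 0.0)
    tau = np.asarray(tau)
    mean_tau = np.sum(p * tau) / np.sum(p)
    return np.sqrt(np.sum(p * tau**2) / np.sum(p) - mean_tau**2)

def coherence_bandwidth(H, df, rho=0.5):
    """Smallest frequency separation at which |R_H| falls below the correlation level rho."""
    R = np.correlate(H, H, mode="full")[len(H) - 1:]   # frequency correlation function
    R = np.abs(R) / np.abs(R[0])
    below = np.nonzero(R < rho)[0]
    return below[0] * df if below.size else len(H) * df
```

Passing rho = 0.5, 0.7 or 0.9 to the last function corresponds to the B_0.5, B_0.7 and B_0.9 conventions used in the results below.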
Measurement Results

Table 2 shows the measured channel characterization results for the two frequency bands with the directional antennas in the corridor scenario. The following findings can be summarized: (1) the differences in the path loss exponent, shadow fading standard deviation, RMS delay spread and coherence bandwidth between the two frequency bands are minimal; (2) the path loss at 14.6 GHz is larger than that at 3.6 GHz; as expected, the difference is about 20log10(14.6 GHz/3.6 GHz) = 12.16 dB between the two carrier frequencies; and (3) there is no significant difference in any of the extracted propagation characteristics between the different antenna locations (near the wall or in the middle of the corridor).

The path loss exponent in the corridor ranges from 1.51 to 1.69 (see Table 2), smaller than two (which corresponds to free-space propagation). This implies a waveguide phenomenon due to the confined structure of the corridor. In addition, the minimal difference between the antenna locations (near the wall or in the middle) implies that the waveguide effect is not very sensitive to the antenna-wall distance in the corridor scenario with the directional antennas. The standard deviation of the shadow fading ranges from 1.79 to 2.27 dB, which agrees with measurements in corridors at 2.4 GHz, 5.3 GHz and 60 GHz [20][21][22].

No single definitive value of correlation has emerged for the specification of coherence bandwidth. Hence, coherence bandwidths for the generally accepted correlation coefficients of 0.5, 0.7 and 0.9 are evaluated; these are referred to as B_0.5, B_0.7 and B_0.9. Furthermore, the coherence bandwidth is highly variable with the location of the receiver. By convention, the cumulative distribution function (CDF) of the coherence bandwidth is computed, and the level below which the coherence bandwidth stays for a given percentage of locations is used to interpret the coherence bandwidth results [13]. As shown in Table 2, the levels of B_0.5, B_0.7 and B_0.9 for 90% of the receiver positions are about 320 MHz, 120 MHz and 40 MHz, respectively. Communication system designers sometimes rely on the lowest value of the coherence bandwidth; thus, the lowest value, B_rho,min, is also listed in Table 2.

Example APDPs

We denote the PDP averaged over the five spatial points (see Figure 4) as the averaged PDP (APDP) [23]. Figure 5 shows the APDPs for both frequency bands. For the example APDPs, the TX and RX are placed in the middle of the corridor and the TX-RX distance is 1.2 m. For visual comparison, we have normalized the line-of-sight (LOS) contribution to 0 dB. It is observed that the main components occur at the same delays. For both frequency bands, the delay of the largest component is about 4 ns, which, multiplied by the velocity of light, equals the LOS distance (1.2 m). Figure 5 clearly illustrates multipath propagation. From basic trigonometry, the excess distances (compared with the LOS path) for the ground-reflected path, the ceiling-reflected path and the two wall-reflected paths are 1.85 m, 1.85 m, 1.43 m and 1.43 m, respectively. These distance differences divided by the velocity of light range from 4.77 to 6.16 ns. The components in the area highlighted by the red box in Figure 5 are likely the combination of all four reflected paths, since their relative excess delay is also between 4.77 and 6.16 ns.
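The excess-delay bookkeeping above can be reproduced with a simple image-method calculation. In the sketch below, the antenna height (1.4 m, mid-height of the corridor) and the lateral placement in the middle of the corridor are assumptions made for illustration; they are not stated explicitly in the measurement description.

```python
import numpy as np

# Rough image-method check of the excess delays quoted for Figure 5.
c = 3e8                      # m/s
d_los = 1.2                  # m, TX-RX distance
h_ant = 1.4                  # m, assumed antenna height (mid-height of the 2.8 m corridor)
corridor_w, corridor_h = 2.4, 2.8

def excess(reflector_offset):
    """Single-bounce excess distance via the image of the TX across a surface at
    perpendicular distance reflector_offset from the antennas."""
    return np.hypot(d_los, 2.0 * reflector_offset) - d_los

paths = {
    "ground":  excess(h_ant),
    "ceiling": excess(corridor_h - h_ant),
    "wall":    excess(corridor_w / 2.0),   # antennas assumed in the middle of the corridor
}
for name, dx in paths.items():
    print(f"{name:8s} excess {dx:.2f} m  ->  {dx / c * 1e9:.2f} ns")
# Ground/ceiling come out near 1.85 m (about 6.2 ns) and the walls near 1.5 m (about
# 4.9 ns), consistent with the 4.77-6.16 ns range discussed in the text; the exact
# wall value depends on the assumed lateral antenna offset.
```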
Figure 5 also indicates that the normalized amplitudes of the reflected paths are similar for the two bands. The received power of the LOS component at the RX is

P_LOS = P_t G_t A_e / (4 pi d_LOS^2), (4)

where P_t is the transmitted power, G_t is the transmitter antenna gain, A_e is the RX antenna effective area, and d_LOS is the distance of the LOS path as shown in Figure 6. For a reflected component, the received power at the RX is [24]

P_ref = P_t G_t A_e Gamma^2 / (4 pi (d_r1 + d_r2)^2), (5)

where Gamma is the reflection coefficient, d_r1 is the distance between the TX and the reflecting point, and d_r2 is the distance between the RX and the reflecting point. The power ratio between the LOS component and the reflected component is

P_LOS / P_ref = (d_r1 + d_r2)^2 / (Gamma^2 d_LOS^2). (6)

The reflection coefficient Gamma is determined by the complex dielectric constant delta, which is defined as [19]

delta = eps_r - j sigma_e / (2 pi f eps_0), (7)

where j is the imaginary unit, eps_r is the dielectric constant, and sigma_e is the conductivity. As sigma_e is very small [25], the imaginary part in Equation (7) becomes negligible. In addition, eps_r stays essentially constant over a large frequency range (1-100 GHz) for many typical building materials (e.g., concrete, wood, glass and chipboard) [25]. Thus, the reflection coefficient is almost independent of the frequency. Then, via Equation (6), the power ratio between the LOS component and a reflected component (i.e., the normalized power of the reflected path) is also almost independent of the frequency. The similar delays and similar normalized amplitudes should therefore yield similar RMS delay spreads.

RMS Delay Spread

Table 2 indicates a minimal difference in the RMS delay spread between the two frequency bands. The statistical results are consistent with the above theoretical prediction. This phenomenon has also been observed in the measurements of [26,27], where the RMS delay spreads from 2.4 GHz to 28 GHz are similar for indoor LOS and non-LOS (NLOS) channels. However, some papers report different results. For example, in [10], for LOS, the RMS delay spread is around 8 ns and 4 ns for 2-10 GHz and 57-66 GHz, respectively. Those measurements were performed in a laboratory furnished with several closets, shelves, desks and chairs. In such a scenario, there is not only free-space propagation and reflection but also diffraction, and the diffraction gain is frequency-dependent [24]. This may be the reason for the difference of the RMS delay spread in that scenario.

Because the reflection coefficient is insensitive to the frequency over a large frequency range for many typical building materials, it is reasonable to hypothesize that the RMS delay spread is also independent of the frequency in scenarios with typical building materials where free-space propagation and/or reflection are the main mechanisms. To verify this hypothesis, we conducted measurements in the National Institute of Metrology (NIM) RC (see the Verification in Reverberation Chamber section).

Path Loss

Figure 7 and the "Fit intercept" row in Table 2 indicate that the path loss at 14.6 GHz is larger than that at 3.6 GHz. As expected, the higher frequency leads to the larger propagation loss. The "path loss exponent" row in Table 2 indicates a minimal difference in the path loss exponent between the two frequency bands. This observation is in line with what was experienced in [8,10,28]. In [8], a multi-frequency (0.8 GHz to 28 GHz) path loss model for external wall attenuation and indoor propagation was presented; for indoor propagation, the measurement results show that the linear attenuation factor (similar to the path loss exponent) is not very sensitive to the frequency. In [10], the measured path loss exponents are 1.31 and 1.29 for centimeter- and millimeter-wave ultra-wideband LOS channels, respectively. In [28], the exponents are similar at 1.33 and 1.31 for 5 GHz and 60 GHz indoor channels.
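The claim that the reflection coefficient, and hence the normalized reflected-path power in Equation (6), is almost frequency-independent can be checked numerically with the Fresnel formula and the complex dielectric constant of Equation (7). The material parameters below (eps_r = 5.3 and sigma_e = 0.1 S/m, roughly concrete-like) are assumed representative values, not values taken from the paper.

```python
import numpy as np

EPS0 = 8.854e-12  # F/m

def reflection_coefficient(f_hz, eps_r=5.3, sigma_e=0.1, theta_i_deg=0.0):
    """Fresnel reflection coefficient (TE polarization) against a half-space with
    complex relative permittivity delta = eps_r - j*sigma_e/(2*pi*f*eps0)."""
    delta = eps_r - 1j * sigma_e / (2 * np.pi * f_hz * EPS0)
    cos_i = np.cos(np.deg2rad(theta_i_deg))
    sin_i2 = np.sin(np.deg2rad(theta_i_deg)) ** 2
    root = np.sqrt(delta - sin_i2)
    return (cos_i - root) / (cos_i + root)

for f in (3.6e9, 14.6e9):
    print(f"{f / 1e9:5.1f} GHz  |Gamma| = {abs(reflection_coefficient(f)):.3f}")
# Both magnitudes come out around 0.39-0.40, so the normalized power of a reflected
# path in Equation (6) is essentially the same at the two carrier frequencies.
```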
Coherence Bandwidth

The measurement results in Table 2 indicate that the coherence bandwidths at the two frequency bands are similar. For simplicity, our analysis begins with the two-ray model. The antenna patterns are assumed to be similar at the different frequencies.

For the LOS path, the electric far field at a point u in space can be expressed as [29]

E(f, t; u) = alpha_s(theta, psi, f) cos(2 pi f (t - d/c)) / d, (8)

where alpha_s(theta, psi, f) is the radiation pattern of the TX antenna at frequency f in the direction (theta, psi), d is the distance from the TX to the point u, and the constant c is the speed of light. Because the phase difference between the two rays determines whether the two waves add constructively or destructively, we focus on the phase of the signal. Equation (8) is then simplified as

E(f, t; u) = A cos(2 pi f t + theta), (9)

where A equals alpha_s(theta, psi, f)/d, and theta equals -2 pi f d / c. For the reflected path, the electric far field at the point u is expressed as

E_r(f, t; u) = A_r cos(2 pi f (t - delta_t) + theta), (10)

where A_r is the amplitude of the reflected signal, and delta_t is determined by the distance difference between the two rays. The phase difference between the two rays is

phi = 2 pi f delta_t. (11)

For another frequency f', the phase difference is

phi' = 2 pi f' delta_t. (12)

The phase difference between the two rays determines the constructive or destructive interference pattern, so the difference of this phase between two frequencies, |phi - phi'|, should be insignificant within the coherence bandwidth. This difference is

|phi - phi'| = 2 pi |f - f'| delta_t. (13)

Equation (13) indicates that the coherence bandwidth is related to the relative frequency difference |f - f'| and not to the absolute frequency, under the assumption of similar antenna patterns over the frequency band. Therefore, the coherence bandwidths at 14.6 GHz and 3.6 GHz should be similar. The measurement results verify this analysis.

Verification in Reverberation Chamber

Based on the comparative analysis in Section 4.2, it is hypothesized that the RMS delay spread is independent of the frequency in scenarios where free-space propagation and/or reflection are the main mechanisms. We used the RC to verify this hypothesis. The RC provides a reliable, controllable and repeatable multipath environment where reflection is the main mechanism [30][31][32][33].
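A small two-ray computation illustrates the conclusion of Equation (13): the frequency correlation of the channel depends on the delay difference between the rays and on the frequency separation, not on the carrier frequency. The delay difference (5 ns) and the reflected-ray amplitude (0.5) below are assumed values chosen only for illustration.

```python
import numpy as np

def two_ray_response(freqs_hz, dt=5e-9, a_r=0.5):
    """H(f) = 1 + a_r * exp(-j 2 pi f dt) for an assumed 5 ns excess delay."""
    return 1.0 + a_r * np.exp(-1j * 2 * np.pi * freqs_hz * dt)

def corr_at_separation(fc_hz, delta_f_hz, span_hz=1e9, n=2001, **kw):
    """Normalized frequency correlation |R_H(delta_f)| around carrier fc."""
    f = np.linspace(fc_hz - span_hz / 2, fc_hz + span_hz / 2, n)
    h = two_ray_response(f, **kw)
    k = int(round(delta_f_hz / (f[1] - f[0])))
    return np.abs(np.vdot(h[:n - k], h[k:])) / np.abs(np.vdot(h[:n - k], h[:n - k]))

for fc in (3.6e9, 14.6e9):
    print(fc / 1e9, "GHz:", round(corr_at_separation(fc, 50e6), 3))
# The correlation at a 50 MHz separation comes out the same at both carriers, i.e.,
# it is set by |f - f'| and dt, not by the absolute carrier frequency.
```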
The RC is essentially an electrically large metal box, with dimensions of 5.09 m x 6.43 m x 5.57 m, as shown in Figure 8. In the RC, reflection is the main propagation mechanism [34]. Inside the chamber, mechanical stirring was performed by two metal paddles that were moved stepwise. The measurement of the RMS delay spread in the RC was repeated for 100 different fixed paddle positions spaced by 360/100 = 3.6 degrees. For the RMS delay spread estimation, the 100 measurement results were averaged to obtain the mean values. The RMS delay spread in the chamber can be changed by placing different amounts of RF-absorbing material inside the RC; we repeated the measurements with three and four absorbers, respectively. Each piece of absorber has 81 cones in a 9 x 9 array, with a cone width of 6.8 cm and a cone height of 17.8 cm. The transmitting antenna was directed into a corner of the chamber, and the receiving antenna was placed in the middle of the RC. We measured the channel over the whole frequency range from 3.1 GHz to 15.1 GHz using the same VNA-based measurement system, as shown in Figure 9. Because the sweep points of the VNA are limited to 60,001, sectional measurement was performed over the whole frequency range: we divided the whole range (3.1 GHz to 15.1 GHz) into 12 frequency bands, and for each band the S21 was measured over a 1 GHz bandwidth with 60,001 sweep points.

Figure 10a shows the measurement results and the linear fitting lines. The slopes of these fitting lines are -0.0020 ns/MHz and -0.0026 ns/MHz for the 3 and 4 absorbers, respectively. This confirms that the difference in the RMS delay spread is minimal over the whole sweep frequency range. In order to analyze the behavior of the RMS delay spread for a given frequency band with higher frequency resolution, we apply a 200 MHz bandpass filter in post-processing. This filter is applied to the measured S21 frequency-domain data, and the RMS delay spread is calculated for different filter center frequencies. As shown in Figure 10b, the slopes of the fitting lines are -0.0023 ns/MHz and -0.0032 ns/MHz for the 3 and 4 absorbers, respectively. The coherence bandwidths in the RC are calculated via Equation (3) based on the measurement data. Figure 11 shows that the coherence bandwidth is also similar across the frequency range. The slopes for the RMS delay spread and coherence bandwidth are summarized in Table 3. The measurement results imply that the RMS delay spread and coherence bandwidth are not sensitive to the frequency in the RC.
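The post-processing of the RC data described above (windowing the measured S21, computing the RMS delay spread per window and fitting a line versus the window center frequency) can be sketched as follows. The window handling and data layout are assumptions of this sketch, not the exact processing chain used for Figure 10.

```python
import numpy as np

def pdp_from_s21(s21_band):
    """PDP of one frequency window via the inverse FFT of the transfer function."""
    h_t = np.fft.ifft(s21_band)
    return np.abs(h_t) ** 2

def rms_ds(pdp, dt):
    tau = np.arange(len(pdp)) * dt
    m1 = np.sum(pdp * tau) / np.sum(pdp)
    m2 = np.sum(pdp * tau ** 2) / np.sum(pdp)
    return np.sqrt(m2 - m1 ** 2)

def delay_spread_vs_frequency(freqs, s21, bw=200e6):
    """Slide a bw-wide window over the measured band; return window centers and
    the RMS delay spread computed from each window."""
    df = freqs[1] - freqs[0]
    win = int(round(bw / df))
    dt = 1.0 / (win * df)                                # delay resolution of one window
    centers, spreads = [], []
    for start in range(0, len(freqs) - win, win // 2):   # half-overlapping windows
        centers.append(freqs[start + win // 2])
        spreads.append(rms_ds(pdp_from_s21(s21[start:start + win]), dt))
    return np.array(centers), np.array(spreads)

# Slope of a first-order fit (in ns per MHz), comparable to the values quoted for Figure 10:
# slope = np.polyfit(centers / 1e6, spreads * 1e9, 1)[0]
```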
Conclusions

Based on measurements, we presented an analysis of indoor channels in the corridor scenario at the 3.6 GHz and 14.6 GHz frequency bands with directional antennas. From the comparative analysis for the corridor, we show that the path loss is larger at 14.6 GHz. The difference between the path loss exponents of the two frequency bands is less than 0.18 in the corridor; the main reason is that the path loss exponent depends on the relative power in the distance domain. The RMS delay spread and coherence bandwidth, in turn, depend on the relative power in the time domain, and the near-constant reflection coefficient makes them not very sensitive to the frequency in the corridor. The differences between the two frequency bands are less than 1 ns and 6 MHz for the RMS delay spread and coherence bandwidth, respectively. Based on the comparative analysis, it is hypothesized that the RMS delay spread and coherence bandwidth are independent of frequency in typical indoor scenarios where free-space propagation and/or reflection are the main mechanisms. The measurement results in the highly reflective environment (RC) and the common reflective environment (corridor) verify this hypothesis.

This analysis helps readers to understand the essence of the propagation. Both the quantitative results and the qualitative analysis are useful for the needed extensions and modifications of channel modeling from the low frequency band to the high frequency band.

Figure 4. Positions of RX (receiver side) at each location.
Figure 6. Illustration of the LOS (line-of-sight) path and a reflected path.
Figure 9. RMS (root-mean-square) delay spread measurement setup in the NIM RC.
Figure 10. RMS delay spread in the NIM RC calculated over (a) 1 GHz bandwidth and (b) 200 MHz bandwidth.
Figure 11. Coherence bandwidth in the NIM RC calculated over a 1 GHz bandwidth as a function of center frequency.
Table 2. Measurement results at the two frequency bands.
2017-10-09T01:50:50.195Z
2017-02-13T00:00:00.000
{ "year": 2017, "sha1": "327c52f2ce8ad50393a166117f7569040472c179", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2076-3417/7/2/165/pdf?version=1486959344", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "327c52f2ce8ad50393a166117f7569040472c179", "s2fieldsofstudy": [ "Business", "Physics" ], "extfieldsofstudy": [ "Physics" ] }
229616527
pes2o/s2orc
v3-fos-license
Information technology in financial sector Russian Federation - driver of the formation of the Russian economy . The digital changes taking place in all areas of society are most evident in certain sectors, especially in financial sector. One of the drivers of the modern development of a sustainable economy is the digitalization of society associated with the rapid development of information technologies. Informatization of this sector in Russia is associated with the synthesis of changes in the legal, economic, social, political nature. The article is devoted to key trends in the informatization of the financial sector of Russia. Analysis of the main suppliers of information technologies in the financial sphere and their products was made. Priority directions of development of information technologies in the financial sector and ways of their implementation are identified. The main approaches to strengthening information security and reliability of financial technologies in Russia have been studied. Synthesis of materials on information technologies researches in the financial sector of Russia was carried out. Introduction Digital transformation of any sector of economy is formed on the basis of such components as state policy, information technologies and business processes. At the same time, creative competition comes to the fore [1]. These components can ensure sustainable development and increase the potential of the economy to meet the needs of society. The digital transformation of the financial sector increases the quality and speed of interaction between consumers of financial services and financial organizations, but at the same time creates additional risks. Digital channels are becoming the main competitive advantage [2]. The landscape of information banking security is constantly changing, which gives rise to new forms of circumvention of security measures. Factors affecting banking security include political, social, technological and legislative components [3]. This relationship demonstrates the need to address these aspects together, given the importance of each in achieving the planned outcome. Materials and Methods The object of this study is the financial sector of the Russian Federation. The subject of the study are processes in Russian financial sector as a result of its digital transformation. These processes are associated with the strengthening of the information component, an increase in the risks of cyber-attacks and the growth of cybercrime in this sector. The main goal of this study is to identify key trends in the development of information technologies in the financial sector of the Russian Federation. The aim of the study is also to consider ways to optimize the existing and projected risks of the financial sector by analyzing the existing information technologies and strengthening its information security Analysis, synthesis, induction, deduction was used as research methods. The main distinguishing feature of this study is its multidisciplinary nature, there is a clear interconnection of sectors such as information technology, financial technologies, economic aspects. Results and Discussion It must be borne in mind that almost a fifth part of all cyber attacks in the world (17%) are in the financial sector. The United States of America, Canada, Singapore, Australia, Malaysia, New Zealand, Japan, Great Britain, Austria are most attractive to the financial sector today, so they have greater protection against cybercriminals than other countries. 
The main risks in the financial and credit sector associated with the informatization of this sector include: -financial losses of financial services consumers caused by the growth of cybercrime; -financial losses from cybercrime of some financial and credit institutions critical to their financial situation; -decrease of operational reliability and impossibility of continuity of financial services provision to clients, which leads to decrease of reputation and increase of social tension in society; -the possibility of a systemic crisis in the presence of serious information security problems due to cyber-attacks in banks significant for the national market. The current development of information security and information technologies in the credit and financial sphere of the Russian Federation is based on the experience of the US National Institute of Standards and Technology, the Monetary Authority of Singapore, the Committee on Payment and Market Infrastructures at the Bank for International Settlements, the Basel Committee on Banking Supervision and other important organizations [4]. Priority areas for information technology development in the financial sector should be highlighted. These are big data management, artificial intelligence-based technologies, financial sector robotization, digital customer service channels, the formation of a deployed cyber threat protection system, Open source platforms, Web solutions that optimize the bank's internal processes, outstaffing, and banking insights. These technologies provide for a new format of financial business, qualitatively different business models, as well as a serious change in the national regulator information technologies [5]. The use of artificial intelligence in the banking sector is quite active. The rating agency Expert RA and RAEX (RAEX-Analytics) presented the classification of Russian banks on their use of information technologies based on artificial intelligence. The results are shown in Table 1. The study covered 50 banks from among the top 100 in assets. The final assessment sums up two factors: the use of artificial intelligence as part of credit analysis and the use of artificial intelligence as part of the bank's activities as a whole. The first factor is given a share of 45%, the second factor -55%. The informatization of Russian financial sector is at a fairly high level. The leaders of informatization in all activity areas are the largest banks. Other banks focus on front office and client back office digital technologies. The most important projects in the field of informatization of Russian financial sector are associated with the introduction of state initiatives in this direction, projects of Bank of Russia. This is the Unified biometric system, Faster payment system, creation of a marketplace [7]. The coronavirus pandemic has revealed new financial sector challenges and opportunities. The transition to remote service has shown the need and attractiveness of many digital services [8]. The Markswebb Internet Banking Rank 2020 provided for the formation of final values based on the fulfillment of day-to-day tasks that were set for banks in the pandemic. These values are shown in Table 2. According to the results of the rating, were allocated banks the most clearly adapted to consumer requests in the conditions of priority of remote banking (the first ten positions). The data are shown in Table 3. Table 3. 
Rating of Internet banks for individuals, year 2020 [9]

Position  Bank                        Rating
1         Tinkoff Bank                68.0
2         Otkritie Bank               67.3
3         Ak Bars Bank                61.1
4         Bank Levoberezhny           60.9
5         VTB                         57.9
6         Post Bank                   56.5
7         Promsvyazbank               54.2
8         Raiffeisenbank              53.4
9         Russian Agricultural Bank   53.0
10        SKB-Bank                    52.9

Innovative changes in the financial sector associated with digital transformation require significant investment. The total revenue of the largest suppliers of information products and technologies for the banking sector in 2018 increased by 8.9% compared with 2017. Detailed information on the five leading suppliers is provided in Table 4. The most important products and technologies sold by the leading information technology providers, the Center of Financial Technologies and Sberteh, are shown in Table 5.

The main approaches to enhancing information security and the reliability of financial technologies include the following four components:
- legal regulation established by federal laws;
- creation and development of a secure and cyber-resilient financial infrastructure that includes remote identification platforms, a marketplace, the Faster Payments System, a platform for recording financial transactions, the Bank of Russia payment system, the national payment card system Mir, a financial messaging system, a cloud services platform and a distributed registry technology platform;
- application of the latest developments, such as RegTech (regulatory technology) and SupTech (supervision technology), Big Data and Smart Data, mobile technologies, technologies based on artificial intelligence, developments in robotics and machine learning, biometrics, distributed registry technologies and active use of open interfaces;
- examination of innovative financial technologies, products and services within the regulatory platform of the Bank of Russia, taking into account possible cyber risks and modelling possible threats and ways to minimize them [10].

The change in approaches to banking security in the information sphere is driven by the predominance of mobile technologies for conducting financial transactions and the declining use of desktop computers. Telephone banking is also losing ground. These changes reflect the preferences of millennials and Generation Z. The growing consumer preference for mobile devices increases the number of vulnerabilities, threats and risks. All of this means that security must be designed in from the very beginning rather than merely recording cybercrimes that have already been committed. Fraud often targets the registration and activation of a mobile device when an online account is created or a transaction is conducted. The quality of banking applications directly affects the reputation of the brand, which requires constant strengthening of their security. Open banking also has a number of serious vulnerabilities, the main one being the risk of data leakage, despite clear legislative support in a number of countries. It should also be noted that vulnerabilities in the information infrastructure of third-party suppliers are expected to cause an increase in fraudulent payments in the coming year. Artificial intelligence is increasingly being used in the financial sector: it optimizes the activities of a financial institution, increasing efficiency and competitiveness and reducing the risk of fraud.
A huge but disparate customer data bank means limited use of artificial intelligence; however, combining technology with human experience to ensure customer safety is currently the best solution. New threats of a technological and political nature are coming to the fore, and new information technologies are developed on the basis of an assessment of these risks. They provide modern methods of identity verification in order to minimize suspicious transactions and fraudulent account openings. Information technology is aimed at analysing data from various sources and accelerating real-time security decisions while taking into account the requirements of regulators [3]. A particular priority is the reliable operation of the payment system. For operators of payment systems, money transfer operators, payment infrastructure service providers and banks, information and economic security is strengthened by the Fid-Antifrod automated system. This system is a centralized database of completed and attempted money transfers made without the client's consent, and it allows the risks of money transfers to be assessed [11].
2020-11-26T09:06:36.000Z
2020-01-01T00:00:00.000
{ "year": 2020, "sha1": "3525724cacd32b5d27f2d24595b0236be3c56406", "oa_license": "CCBY", "oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2020/68/e3sconf_ift2020_03017.pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "d4893893662acb9e1ba54aa83f8c745e70fec135", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [ "Business" ] }
258465281
pes2o/s2orc
v3-fos-license
Electrically driven amplified spontaneous emission from colloidal quantum dots

Colloidal quantum dots (QDs) are attractive materials for realizing solution-processable laser diodes that could benefit from size-controlled emission wavelengths, low optical-gain thresholds and ease of integration with photonic and electronic circuits [1][2][3][4][5][6][7]. However, the implementation of such devices has been hampered by fast Auger recombination of gain-active multicarrier states [1,8], poor stability of QD films at high current densities [9,10] and the difficulty of obtaining net optical gain in a complex device stack wherein a thin electroluminescent QD layer is combined with optically lossy charge-conducting layers [11][12][13]. Here we resolve these challenges and achieve amplified spontaneous emission (ASE) from electrically pumped colloidal QDs. The developed devices use compact, continuously graded QDs with suppressed Auger recombination incorporated into a pulsed, high-current-density charge-injection structure supplemented by a low-loss photonic waveguide. These colloidal QD ASE diodes exhibit strong, broadband optical gain and demonstrate bright edge emission with instantaneous power of up to 170 μW.

Electrically pumped lasers or laser diodes based on solution-processable materials have long been desired devices for their compatibility with virtually any substrate, scalability and ease of integration with on-chip photonics and electronics. Such devices have been pursued across a wide range of materials, including polymers [14][15][16], small molecules [17,18], perovskites [19,20] and colloidal QDs [1][2][3][4][5][6][7]. The last materials are especially attractive for implementing laser diodes because, as well as being compatible with inexpensive and easily scalable chemical techniques, they offer several advantages derived from the zero-dimensional character of their electronic states [21,22]. These include a size-tunable emission wavelength, low optical-gain thresholds and high temperature stability of lasing characteristics stemming from the wide separation between their atomic-like energy levels [21][22][23]. Several challenges complicate the realization of colloidal QD laser diodes.
These include extremely fast nonradiative Auger recombination of optical-gain-active multicarrier states 1,8 , poor stability of QD solids under high current densities required to achieve lasing 9,10 and unfavourable balance between optical gain and optical losses in electroluminescent devices wherein a gain-active QD medium is a small fraction of the overall device stack comprising several optically lossy charge-transport layers [11][12][13] . Here we resolve these challenges using engineered QDs with suppressed Auger recombination and a special electroluminescent-device architecture, which features a photonic waveguide consisting of a bottom distributed Bragg reflector (DBR) and a top silver (Ag) electrode. The transverse optical cavity formed by the DBR and the Ag mirror improves field confinement in the QD gain medium and simultaneously reduces optical losses in charge-conducting layers. It also facilitates the build-up of ASE owing to improved collection of spontaneous seed photons and the increased propagation path in the QD medium. As a result, we achieve large net optical gain with electrical pumping and demonstrate room-temperature ASE at the band-edge (1S) and excited-state (1P) transitions. In this study, we use an optical gain medium based on a revised version of continuously graded QDs (cg-QDs), which are similar to our previously introduced CdSe/Cd 1−x Zn x Se cg-QDs 9 but feature a reduced thickness of the graded layer. These 'compact' cg-QDs (abbreviated as ccg-QDs) 13 comprise a CdSe core of 2.5 nm radius, a 2.4-nm-thick graded Cd 1−x Zn x Se layer and a final protective shell made of ZnSe 0.5 S 0.5 and ZnS layers with 0.9 nm and 0.2 nm thicknesses, respectively (Fig. 1a, top-right inset and Supplementary Fig. 1). Despite its reduced thickness, the compact graded shell allows for highly effective suppression of Auger decay 24 , which leads to a long biexciton Auger lifetime (τ XX,A = 1.9 ns) and a correspondingly high biexciton emission quantum yield of 38% ( Supplementary Fig. 2). The compact graded shell also produces strong asymmetric compression of the emitting core, which increases the light-heavy hole splitting (Δ lh-hh ) to about 56 meV (ref. 25) (Fig. 1a). This impedes thermal depopulation of the band-edge heavy-hole state and thereby reduces the optical gain threshold 7 . Notably, the reduced shell thickness allows for an increased QD packing density in film samples and, as a result, leads to enhanced optical gain, which spans across the 1S and 1P transitions and exhibits a wide bandwidth of about 420 meV (Fig. 1b). These properties facilitate the development of ASE, which is readily observed for optically excited ccg-QD films (Fig. 1c). The ASE occurs at both the 1S and 1P transitions and exhibits low excitation thresholds ⟨N th,ASE ⟩ ≈ 1 (1S) and 3 (1P) excitons per dot on average. On the basis of the variable stripe length (VSL) ASE measurements of a 300-nm-thick ccg-QD film, the 1S and 1P optical gain coefficients are 780 cm −1 and 890 cm −1 , respectively ( Supplementary Fig. 3). Owing to a near-unity mode confinement factor of the measured Article film, we will refer to the derived values as 'material gain' coefficients (G mat,1S and G mat,1P , respectively). Initially, we incorporate ccg-QDs into 'reference' light-emitting diodes (LEDs) whose architecture is similar to that in refs. 12,13. These devices (Fig. 
1d) are assembled on top of a glass substrate and comprise a ccg-QD active layer (approximately three monolayers thick) sandwiched between a bottom electrode (cathode) made of low-index indium tin oxide (L-ITO) and an organic hole-transport layer (HTL) of poly[(9,9-dioctylfluorenyl-2,7-diyl)-alt(4,4′-(N-(4-butylphenyl)))] (TFB). The L-ITO electrode is made of a mixture of standard ITO and SiO 2 , which reduces optical losses and enhances refractive-index contrast at the QD-cathode interface, thereby improving optical-mode confinement in the QD layer 11 . The TFB HTL is separated from the organic hole-injection layer (HIL) made of dipyrazino[2,3-f:2′,3′-h]quinoxaline-2, 3,6,7,10,11-hexacarbonitrile (HAT-CN) by an insulating 50-nm-thick LiF spacer containing a 'current-focusing' 30-μm-wide slit 10,12,13 . The device is completed with a silver electrode (anode) prepared as a 300-μm-wide strip orthogonal to the slit in the LiF interlayer. This approach leads to two-dimensional current focusing and allows us to limit the injection area to 30 × 300 μm 2 . The fabricated LEDs, as well as other devices studied in this work, were characterized at room temperature in air. In Fig. 1e,f, we show electroluminescence (EL) measurements of one of the reference LEDs excited using pulsed bias (1-μs pulse duration, 1-kHz repetition rate) with a voltage amplitude (V) up to 67 V. At the maximal voltage, the current density (j) reaches 1,019 A cm −2 (Fig. 1e), which is comparable with values realized with previous current-focusing, pulsed LEDs 10 . The device emission turns on at about 3 V, after which the EL intensity exhibits fast growth. The EL spectra measured at lower j peaked at 1.96 eV (1S feature), which corresponds to the band-edge 1S e -1S hh transition (Fig. 1f). As j is increased, the EL exhibits a pronounced broadening owing to increasing intensities of the higher energy bands associated with the 1S e -1S lh (2.02 eV) and the 1P e -1P hh ( The TA signal is presented as α(hv,t) = α 0 (hv) + Δα(hv,t), where α 0 and α are the absorption coefficients of the unexcited and excited sample, respectively, and Δα is the pump-induced absorption change. The solid black line (α = 0) separates the regions of absorption (α > 0, brown) and optical gain (α < 0; green). The dashed black line is the second derivative of α 0 (panel a). c, Pump-intensity-dependent spectra of edge-emitted photoluminescence (PL) of a 300-nm-thick ccg-QD film on a glass substrate under excitation with 110-fs, 3.6-eV pump pulses. The pump spot is shaped as a narrow 1.7-mm-long stripe orthogonal to the sample edge. The emergence of narrow peaks at 1.93 eV and 2.08 eV (full width at half maximum 35 meV and 40 meV, respectively) at higher ⟨N⟩ indicates the transition to the ASE regime. On the basis of the onset of sharp intensity growth (inset), the 1S and 1P ASE thresholds are, respectively, about 1 and about 3 excitons per dot on average. d, A device stack of the reference LED comprises an L-ITO cathode, a ccg-QD layer and TFB/HAT-CN hole transport/injection layers separated by a LiF spacer with a current-focusing aperture. The device is completed with a Ag anode prepared as a narrow strip. e, The j-V (solid black line) and EL intensity-V (dashed blue line) dependences of the reference device. f, The j-dependent EL spectra of front (surface) emission of the reference device. The EL spectrum recorded at 1,019 A cm −2 is deconvolved into three Lorentzian bands that correspond to the three ccg-QD transitions shown in a. AU, arbitrary units. 
transitions ( Fig. 1f and Extended Data Fig. 1a). At the highest j, the EL spectrum peaks at the position of the 1P band, which is indicative of a high per-dot excitonic number realized in these devices. In particular, on the basis of the ratio of the 1P-band and 1S-band amplitudes, the average QD excitonic occupancy ⟨N⟩ reaches roughly 7.4 (Extended Data Fig. 1b), which is higher than the optical gain threshold for both the 1S and 1P transitions (Fig. 1c). Despite achieving population inversion, the reference devices do not exhibit ASE under electrical pumping in either front (surface) or edge emission. This indicates that the overall optical loss overwhelms optical gain generated in a thin QD medium. Photonic modelling of the reference LEDs using a finite element method confirms this assessment (Supplementary Note 1). In these devices, light amplification occurs because of optical modes guided by total internal reflection (TIR) at the L-ITO-glass interface and by the reflection at the silver mirror (Fig. 2a). Because of strong quenching by the metal layer, transverse magnetic (TM) modes experience strong attenuation, therefore, the modes preferred by ASE are of transverse electric (TE) character 12,13 . In Fig. 2a, left, we show the computed electric-field distribution of the TE 0 TIR mode. The mode confinement factor for the QD layer (Γ QD ) is 0.23, which yields the maximal 1S modal gain (G mod,1S = Γ QD G mat,1S ) of about 180 cm −1 . Notably, a considerable fraction of the optical mode resides in the optically lossy L-ITO electrode. This leads to a large optical loss coefficient (α loss ) of about 140 cm −1 (refs. 12,13). Although it is slightly lower than G mod,1S , light absorption in the top Ag electrode and unaccounted light scattering at imperfections within the waveguide increase the overall optical loss such that it becomes greater than modal gain, which suppresses ASE. Because of high propagation losses, the reference device exhibits very weak edge emission and radiates light primarily from the glass-cladded bottom surface such that the ratio of the surface-to-edge emission intensities is about 50 (Fig. 2a, right). Owing to the lack of light amplification, the spectrum of edge emission replicates that of surface EL at all j (Extended Data Fig. 2a). To tackle the problem of excessive losses, we use a transverse Bragg reflector approach 26 previously explored in the context of traditional laser diodes 27,28 . In this approach, an optical gain medium is flanked with a DBR stack on one or both sides 26 (Fig. 2b, left). The resulting Bragg reflection waveguide (BRW) supports low-loss modes (Extended Data Figs. 3 and 4) that develop owing to coherent superposition of several reflections produced by the DBR structure (Fig. 2b, left). The BRW mode is favoured over the TIR modes in the case of ASE as they offer improved mode confinement within the gain-active medium and, as a result, feature reduced optical losses and enhanced net modal gain 27,28 . Furthermore, the BRW mode is characterized by an increased effective amplification length, as the corresponding angle of incidence (θ BRW ) can be considerably sharper than that in the TIR case ( Fig. 2a, To implement a BRW waveguide, we incorporate a DBR stack made of ten pairs of Nb 2 O 5 and SiO 2 layers below the cathode ( Fig. 3a and Supplementary Fig. 4). To reduce serial resistance and thereby lessen overheating at high j, we make the cathode of standard ITO rather than higher-resistivity L-ITO used in refs. 12,13. 
As a result, we can push the In the reference device, this mode is supported by TIR, whose critical angle (θ c ) is controlled by the refractive-index contrast at the L-ITO-glass interface (sinθ c = n glass /n L-ITO ). In the BRW device, the mode angle (θ m = θ BRW ) is defined by the condition of constructive interference (Bragg condition) of reflections from different layers of the DBR. As a result, the optical-field profile exhibits an oscillatory pattern linked to the periodic structure of the DBR. Right, dependence of front-emitted and edge-emitted light intensities (yellow and red symbols, respectively) on current density for the reference (a) and BRW (b) devices. Owing to large propagation losses, the reference device radiates primarily from its front glass-cladded surface (the front-to-edge intensity ratio is about 50). By contrast, owing to reduced optical losses (inset in b, right) and strong amplification of guided light, the BRW emits more strongly from its edge (the edge-to-front intensity ratio is about 2 to 3). AU, arbitrary units. Article current density up to 1,933 A cm −2 (V = 53 V) without causing device breakdown ( Supplementary Fig. 5). To further improve charge flow in the device, we deposit an n-type ZnO electron-transport layer (ETL) on top of the ITO cathode (Fig. 3a). The ZnO ETL is followed by the QD layer and a series of hole transport/injection layers that are similar to those of the reference LED (Fig. 3a). As well as improving charge transport, the ZnO layer also allows us to achieve n-doping of the active medium, as ZnO is known to facilitate electron injection into the QDs and thereby helps keep them negatively charged 29,30 . As shown previously, the use of charged (doped) QDs benefits lasing performance by lowering optical gain thresholds owing to partial or complete bleaching of ground-state absorption 31-35 . A potential problem of this approach is quenching of QD emission resulting from Auger recombination of charged excitonic species 32,33 . However, it is less of a problem with our ccg-QDs because, owing to impeded Auger decay, these QDs show high emission efficiencies for both singly and doubly negatively charged excitons ( Supplementary Fig. 2). In the fabricated structures, the bottom DBR and the top Ag mirror form a BRW. The computed electric-field distribution for the BRW mode is depicted in Fig. 2b, left. It exhibits an oscillatory pattern that reflects the periodic structure of the DBR. The main peak is centred within the QD optical gain medium, which leads to a high mode confinement factor (Γ QD = 0.2), despite the small thickness of the gain medium (approximately three ccg-QD monolayers). Notably, the BRW mode profile also features a diminished field intensity in the optical lossy ITO and ZnO layers. As a result, the overall loss coefficient is only 16 cm −1 (Extended Data Fig. 4d). The favourable changes in the optical-field distribution have a profound effect on device EL performance. In particular, we observe a marked boost in edge emission, whose intensity becomes greater than that of surface emission by a factor of around 2 to 3 (Fig. 2b, right). This is a direct consequence of the reduced propagation losses and the emergence of the regime of ASE. The effect of ASE is pronounced in the spectra of edge-emitted EL (Fig. 3b). At low injection levels (j < 8 A cm −2 ), they show a weak, single-band 1S emission at 1.98 eV with an 82-meV linewidth (full width at half maximum, FWHM). 
At higher current densities, we observe the emergence of new narrow features whose spectral energies (1.94 and 2.09 eV) are identical to those of the 1S and 1P ASE Photon energy (eV) Fig. 3 | Electrically driven ASE in the BRW device. a, A BRW device is built on top of a DBR made of ten pairs of Nb 2 O 5 and SiO 2 layers. The device contains an ITO cathode, a ZnO ETL, a ccg-QD gain medium (three QD monolayers), a TFB HTL, a LiF interlayer with a current-focusing slit, a HAT-CN HIL and a strip-like Ag anode. b, Edge-emitted EL spectra of the BRW device as a function of current density tuned from 0.8 to 1,933 A cm −2 . The device was excited using pulsed bias with τ p = 1 μs and pulse-to-pulse separation T = 1 ms. The EL spectra show a transition from broad spontaneous emission observed at low j to sharp 1S and 1P ASE bands at high j. c, Top, the j-dependent EL intensities at the peaks of the 1S spontaneous (black) and ASE (red) bands indicate the ASE threshold j th,ASE ≈ 13 A cm −2 . Bottom, the dependence of 1S emission linewidth on j indicates progressive line narrowing from 82 to 39 meV. d, Polarization characteristics of edge-emitted light of the BRW device in the case of electrical (left, j = 650 A cm −2 ) and optical (right, 110-fs, 3.6-eV pulses, w p = 85 μJ cm −2 ) excitation. Owing to strong damping of TM modes, the 1S and 1P ASE bands are not present in TM-polarized emission (blue) and exhibit nearly perfect TE polarization (red). The spontaneous 1S band is not polarized (black) and, as a result, is present in both TE-polarized and TM-polarized emission. e, The VSL measurements of the optically excited BRW device (inset) indicate the development of the 1S and 1P ASE features with increasing stripe length. These measurements used 110-fs, 3.6-eV pump pulses with w p = 90 μJ cm −2 . The sharp ASE bands are similar to those observed in the EL spectra (panel b). AU, arbitrary units. bands in the optically excited ccg-QD film (Fig. 1c). The new bands exhibit fast superlinear growth with increasing injection level (Supplementary Fig. 6) and eventually (at j ≥ 13 A cm −2 ) overtake the broad 1S band (Fig. 3c, top). This is accompanied by the pronounced narrowing of the band-edge emission from 82 to 39 meV (or 23 to 13 nm; Fig. 3c, bottom). The observed j-dependent evolution of the EL spectra is very different from that for the reference LED ( Fig. 1f) but very similar to the evolution of photoluminescence (PL) during the transition to ASE for the optically excited ccg-QD/glass sample (Fig. 1c). This suggests that the narrow 1S and 1P features in the edge-emitted EL are also linked to ASE. To infer the ASE threshold, we compare the j-dependent EL signals at 1.98 eV and 1.94 eV (Fig. 3c, top), which correspond to peak energies of the spontaneous emission and ASE, respectively. Although initially the two signals grow synchronously with increasing injection level (approximately linear), they start to diverge at j > 13 A cm −2 owing to the onset of faster (superlinear) increase of the 1.94-eV EL intensity (Supplementary Fig. 6). We ascribe this behaviour to the onset of ASE and the corresponding current density to the ASE threshold (j th,ASE = 13 A cm −2 ). The value of j th,ASE , determined in this way, is consistent with the onset of line narrowing, characteristic of the ASE process (Fig. 3c, bottom). The calculated ASE thresholds for our ccg-QD films depend on a charging level 33 (Supplementary Note 2). 
For neutral QDs, j th,ASE is about 28 A cm −2 and it drops to about 26 A cm −2 and then about 15 A cm −2 for singly and doubly negatively charged QDs, respectively. The comparison of these values with j th,ASE observed experimentally suggests that, in our devices, QDs are populated with two electrons on average, which is consistent with previous studies of high-brightness cg-QD LEDs containing a ZnO ETL 29 . Next, we describe evidence that the sharp 1S and 1P EL features are indeed because of photon amplification during light propagation in the BRW and not because of spectral filtering effects arising, for example, from the DBR-Ag cavity. The first piece of evidence is the close correspondence between spectral positions of the EL peaks with the optically excited 1S and 1P ASE features observed for cavity-free ccg-QD/ glass samples (Fig. 1c). Second, the comparison of surface-emitted and edge-emitted EL spectra (Extended Data Fig. 2b) shows that the ASE features are spectrally distinct from the vertical cavity mode. Furthermore, the edge-emitted and surface-emitted bands show distinct behaviours as a function of j (Extended Data Fig. 5). In particular, owing to the onset of ASE, edge-emitted EL shows spectrally non-uniform intensity growth, whereas such spectral non-uniformity is absent in the surface emission. Polarization-dependent measurements provide further evidence for the ASE regime. In particular, both sharp EL peaks observed at high j (post ASE threshold) are TE polarized and not present in TM-polarized emission ( Fig. 3d and Extended Data Fig. 6). The detailed polarization-dependent measurements of the 1S and 1P EL features, ascribed to ASE, show a nearly perfect sin 2 α pattern, as expected for TE-polarized light (Extended Data Fig. 7; α is the angle between the polarization direction of the analyser and the vertical direction) 18 . This type of polarization is expected for amplified guided BRW modes, as propagation of TM modes is strongly inhibited owing to quenching by the Ag electrode 12,13 . Notably, the observed polarization trends are identical between the regimes of electrical and optical pumping (Fig. 3d; left and right subpanels, respectively). This is strong evidence for the ASE character of edge-emitted EL, as the ASE effect is unambiguous in optically excited edge-emitted PL spectra, as discussed below. In Fig. 3e, we show VSL measurements of BRW structures conducted with optical excitation (see Methods). For these measurements, we prepare devices without a LiF spacer, which allows us to avoid parasitic signals from the parts of the QD layer outside the current-focusing aperture. In the VSL experiment, the pump laser beam is focused into a narrow stripe of a varied length (l), which is orthogonal to the cleaved device edge. For short stripe lengths, the edge-emitted PL is characterized by a broad spectral profile that is similar to that of EL at low injection levels (Fig. 3b,e; green lines). As l is increased, the emission intensity experiences quick growth ( Supplementary Fig. 7), which is accompanied by the development of sharp peaks (Fig. 3e) whose spectral energies are in close agreement with the narrow EL features emerging at high j in electrically pumped devices (Fig. 3b, solid lines), as well as the 1S and 1P ASE bands observed for the optically excited ccg-QD/glass sample (Fig. 1c). 
First, these results exclude that the narrow 1S and 1P features arise from spontaneous emission of higher-order multiexcitons, as the increase in l does not affect per-pulse fluence, the quantity that controls the excitonic occupancy of the QDs. Second, these observations confirm the connection of the sharp 1S and 1P peaks to the process of stimulated emission, as the build-up of ASE does require a sufficiently long light propagation path in the gain medium approximately defined by the condition G net l > 1. On the basis of the analysis of the l-dependent emission intensities, the 1S and 1P gain coefficients are 45 and 55 cm −1 , respectively (Supplementary Fig. 7). These values are close to the calculated maximal net optical gain for charged QDs (G net = 0.5G mod,max − α loss ≈ 64 cm −1 ; Supplementary Note 2), in agreement with our earlier analysis of ASE thresholds, according to which the observed gain is because of charged excitons. The effect of ASE is also evident in the measurements of temporal coherence conducted using a Michelson interferometer. In particular, under conditions similar to those in Fig. 3d, left, the coherence time (τ c ) observed for the TE-polarized light is appreciably longer (by a factor of about three) than that for the TM-polarized EL (Extended Data Fig. 8). The lengthening of τ c indicates a considerable contribution of ASE to the TE-polarized EL, as photon replication during light amplification enhances temporal coherence. These results are consistent with the measurements of spectrally resolved EL that indicate the dominance of ASE in the TE-polarized emission (Fig. 3d, left and Extended Data Fig. 6). As pointed out earlier, another indication of the ASE in the BRW structures is the high brightness of edge-emitted EL (Fig. 2b, right). In the reference device, the edge signal is undetectable by the naked eye, even in the dark. By contrast, as illustrated in Fig. 4a, the light radiating from the edge of the BRW device is clearly seen even in room light, despite a very small edge-emitting area (its nominal size is approximately 9 μm 2 ). In fact, the emission from the BRW structure can be detected and characterized with a standard power meter used to evaluate the output of commercial lasers. On the basis of such characterization, the instantaneous edge-emitted power (P out ) during the voltage pulse reaches 170 μW (j = 1,933 A cm −2 ); Fig. 4b (dashed blue line). A substantial role in the development of strong edge-emitted ASE is played by the BRW structure, which increases the effective amplification length and improves the collection of 'seed' photons produced by spontaneous emission (Supplementary Fig. 8). The edge-emitted light exhibits a fairly tight angular distribution for out-of-plane angles ( Supplementary Fig. 9a,b). It features a sharp spike (from approximately −0.2° to 0.2°), which appears on top of an asymmetric profile extending to the DBR device side. Such asymmetry is consistent with the calculated BRW mode structure (Fig. 2b, left). The angular distribution for in-plane angles is fairly flat ( Supplementary Fig. 9c,d), as expected for our devices that lack angle-selection elements in the device plane. The fabricated devices exhibit good operational stability under ambient environment. Even when the driving voltage is well above the ASE threshold, they operate for hours in the ASE regime without considerable losses in the output power. 
In particular, a stability test conducted at j = 120 A cm −2 (at the beginning of the test) shows that, after 2 h of continuous operation, the device still preserves around 90% of its original power (Extended Data Fig. 9). It operates in the stable ASE mode for two more hours, at which point the device finally fails. Overall, we have fabricated 15 chips, each of which contained eight devices (120 devices in total). We observed excellent reproducibility Article of performance characteristics between devices on the same chip and those on different chips prepared through separate fabrication cycles. In particular, high-j EL measurements were conducted on 11 devices from different chips. All of them showed the ASE effect. As illustrated in Extended Data Fig. 10, the tested devices exhibited good consistency between their j-V (P out -V) dependences, EL spectra, ASE thresholds and the characteristic line narrowing accompanying the transition to the ASE regime. It is instructive to examine the external quantum efficiency (EQE) of the BRW device versus the reference LED. Because our devices lack lateral optical confinement within the QD layer and do not use any schemes for improved light outcoupling, the collected edge-emitted light is only a small fraction of the total ASE. Therefore, we will focus on the analysis of normalized EQEs as a function of current density (Fig. 4b). For the reference device, the EQE reaches its peak value at j of about 130 A cm −2 , after which it shows a fast decline and drops to half of the maximal value at j = j ½ = 500 A cm −2 (Fig. 4b, black triangles). This is the manifestation of a droop effect typically attributed to processes such as nonradiative Auger recombination and/or thermally induced emission quenching 10,29 . The BRW device also exhibits the EQE droop. However, its onset is shifted to about 300 A cm −2 and j ½ is increased to about 1,930 A cm −2 (Fig. 4b, red circles). These are expected consequences of the ASE regime, which accelerates radiative recombination and thus allows it to compete more favourably with nonradiative processes. In conclusion, we demonstrate 1S and 1P ASE with an electrically excited gain medium made of solution-cast colloidal QDs. This advance has been enabled by excellent optical-gain properties of ccg-QDs and a specially engineered device stack, which contains a low-loss photonic waveguide. This waveguide is formed by the bottom DBR and the top Ag mirror that flank the QD medium and the adjacent charge transport/ injection layers. The use of the BRW allows us to shape the optical-field profile so as to reduce optical losses in charge-conducting layers and enhance mode confinement in the QD medium. These ASE diodes exhibit strong edge emission with instantaneous output power of up to 170 μW, even though they lack lateral optical confinement within the gain-active region and do not use engineered light outcoupling. The next important milestone-the realization of a QD laser oscillator-can be accomplished by supplementing the developed structures with an optical resonator implemented, for example, as either an in-plane distributed feedback grating or a Fabry-Pérot cavity formed by edge reflectors. 
Online content Any methods, additional references, Nature Portfolio reporting summaries, source data, extended data, supplementary information, acknowledgements, peer review information; details of author contributions and competing interests; and statements of data and code availability are available at https://doi.org/10.1038/s41586-023-05855-6. , P out reaches 170 μW. On the basis of the measured output power, we determine the EQE (red circles), which is compared with that of the reference device (black triangles). Owing to the efficient ASE, which leads to the increased QD emission rate and enhanced power extraction from the inverted QD medium, the EQE droop is much less pronounced in the BRW device. In particular, j ½ is about four times higher than that for the reference device (1,933 versus 500 A cm −2 ). Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. Synthesis of ccg-QDs In the next step, the preformed core particles were overcoated with a compositionally graded Cd x Zn 1−x Se layer. For this purpose, 2 ml Zn-oleate (0. Purification. The synthesized ccg-QDs were purified with ethanol by centrifuging at 7,000 rpm for 5 min and then redispersing in 10 ml toluene. These solutions were used in spectroscopic measurements. For device fabrication, ccg-QDs were further washed with acetonitrile. In this procedure, 2 ml of ccg-QDs in toluene were mixed with 20 ml acetonitrile and centrifuged at 9,000 rpm for 15 min. The washing step was repeated two more times. The precipitate was fully dried and redispersed in octane to obtain a desired concentration (typically, 20 mg ml −1 ). Fabrication of reference LEDs Glass substrates coated with L-ITO were purchased from Thin Film Devices, Inc. The glass/L-ITO substrate was washed using sequential 10-min sonication steps in isopropyl alcohol, acetone and ethanol. After the cleaning step, the substrate was dried using a N 2 gas blower. Afterwards, 20 μl of ccg-QD solution (20 mg ml −1 ) were spin-coated onto the L-ITO substrate at 2,000 rpm for 30 s to form one monolayer of the ccg-QDs. This procedure was repeated two more times to prepare a film that nominally contained three ccg-QD monolayers. Following deposition, the ccg-QD film was annealed at 100 °C for 10 min. To fabricate a HTL, 10 mg of TFB were dissolved in 1 ml of chlorobenzene and spin-coated onto the ccg-QD layer at 4,000 rpm for 30 s, which was followed by annealing at 120 °C for 20 min. Then, a 50-nm-thick LiF interlayer was deposited by thermal evaporation using a shadow mask, which defined a 'current-focusing' aperture in the form of the 30-μm-wide slit. 
After that, a 100-nm-thick HIL of HAT-CN was deposited using thermal evaporation with a deposition rate of 0.2-0.3 Å s −1 . The device was completed with a 100-nm-thick Ag electrode deposited by means of thermal evaporation (at a rate of 1 Å s −1 ) through a shadow mask with a 300-μm-wide slit orthogonal to the slit in the LiF interlayer. This allowed us to obtain two-dimensional current focusing and limit the injection area to 30 × 300 μm 2 . We would like to point out that the hole-injection part of our devices is distinct from that of traditional QD LEDs that usually use a combination of MoO x HIL and an organic HTL. However, the standard HIL/HTL combination leads to large optical losses that are mitigated here using the new design of the hole-injection device part 12,13 . Fabrication of devices with a BRW BRW devices were assembled on top of ITO-coated DBR substrates purchased from Thin Film Devices, Inc. The substrates were custom made to match their stopband to the emission spectra of the ccg-QDs. In particular, their reflection coefficient was >95% (normal incidence) across the wavelength window of 490-690 nm (Supplementary Fig. 4), which covered both the 1S and 1P emission bands (Fig. 1c). The DBR was made of ten pairs of Nb 2 O 5 and SiO 2 layers (60 nm and 100 nm thickness, respectively) prepared on a glass substrate. A 50-nm-thick ITO film was deposited on top of the Nb 2 O 5 layer of the DBR. The resulting multilayered stack is depicted in Supplementary Fig. 4. The acquired ITO/DBR/glass substrates were cleaned using the same procedure as in the case of reference devices. Then, a ZnO ETL with a thickness of 50 nm was deposited through a sol-gel method. A sol-gel solution was prepared by dissolving 0.2 g of zinc acetate dihydrate (Zn(CH 3 COO) 2 ·2H 2 O) and 56 mg of ethanolamine in 10 ml of 2-methoxyethanol (CH 3 OCH 3 CH 3 OH). The solution was stirred overnight before use. 300 μl of a sol-gel precursor was spun at 3,000 rpm for 50 s and annealed at 200 °C for 2 h in ambient air. Afterwards, the active ccg-QD layer and the rest of the device were prepared using the same steps as in the case of reference LEDs (see previous section). Device characterization All fabricated devices were tested at room temperature in air. For edge-emission measurements, devices were cleaved across the emitting area using a diamond tip. In the regime of electrical excitation, the devices were driven using square-shaped voltage pulses generated by a function generator (Tektronix AFG320; pulse amplitude up to 3.5 V), followed by a high-speed bipolar amplifier (HSA4101, NF Corporation) with 20 times voltage gain. The voltage applied to a device was measured using a Tektronix oscilloscope (TDS2024B) connected to the monitoring port of the amplifier. The generated transient current was measured by monitoring the voltage drop across a 10-Ω-load resistor on the current return ( Supplementary Fig. 5). Both edge-emission and front-emission spectra were collected using a Czerny-Turner spectrograph (Acton SpectraPro 300i) dispersing light in the focal plane of a liquid-nitrogen-cooled charge-coupled device (CCD) camera (Roper Scientific) or a fibre-coupled Ocean Optics USB 2000 spectrometer (Fig. 1e,f, Fig. 3b and Extended Data Fig. 10). The spectral resolutions were 0.1 nm and 0.4 nm, respectively. The optical power of edge emission was measured using a standard photodiode-based power meter (Thorlabs S120VC with an active area of 73 mm 2 ). 
The power meter head was positioned 1 cm away from the cleaved edge of the device (Fig. 4a). The EQE was obtained on the basis of the instantaneous output power emitted during the voltage pulse (P_out) and the driving current (I) using the following expression: EQE = (P_out/hν)/(I/e), in which hν is the averaged energy of the edge-emitted photons calculated from the measured EL spectra and e is the elementary charge. Optical measurements Optical absorption and PL measurements. Optical absorption and PL measurements were conducted on ccg-QD/toluene solutions loaded into 1-mm-thick quartz cuvettes. The absorption spectra were collected with an ultraviolet-visible scanning spectrometer (Lambda 950, Perkin Elmer). In the PL lifetime studies, a ccg-QD sample was excited with 3.1-eV, 40-fs pulses at a 250-kHz repetition rate derived from a frequency-doubled Ti:sapphire laser (Mira oscillator and RegA amplifier, Coherent). The laser pulses were focused onto the sample into a 100-μm-diameter spot. The emitted PL was collected in the direction normal to the sample plane, spectrally selected with a Czerny-Turner spectrograph (Acton SpectraPro 300i) equipped with an exit slit and detected with a fibre-coupled superconducting nanowire single-photon detector (Opus One, Quantum Opus), followed by a time-correlated single-photon counting apparatus (PicoQuant PicoHarp). The PL was measured at the maximum of the 1S PL peak with a 2-nm bandwidth; the temporal resolution of the setup was 70 ps. Fig. 2 | Comparison of edge-emitted and surface-emitted EL spectra of the reference and the BRW devices. a, The spectra of edge-emitted and surface-emitted EL (top and bottom subpanels, respectively) of the reference device (insets illustrate the configuration of the measurements) do not show any apparent qualitative distinctions (the device is operated at 920 A cm −2 ). Furthermore, the edge emission is about 50 times weaker than surface emission. b, By contrast, the BRW device shows a substantial difference in the spectral shape of edge-emitted and surface-emitted EL (top and bottom subpanels, respectively); j = 730 A cm −2 . Furthermore, edge emission is about two times more intense than surface emission. The spectrum of edge-emitted EL is dominated by 1S and 1P ASE features, whereas the surface emission comprises a single narrow peak at 2.05 eV owing to a vertical Fabry-Pérot cavity formed by the bottom DBR and the top silver mirror. a.u., arbitrary units. Fig. 3 | Photonic modelling of the BRW devices developed in the present study. a, Contour maps of a TE field of the fundamental TIR mode (top) and the BRW mode (bottom) supported by the structure with the transverse DBR-Ag cavity. The optical field of the BRW mode is confined primarily in the device layer (waveguide core), whereas the TIR mode concentrates at the device-DBR interface and leaks into the DBR. b, The calculated ω-β dispersion (ω is the photon angular frequency and β is the modulus of the wavevector) of the TE modes allowed in the BRW structure (depicted in the inset). In this case, the highest (n 2 ) and lowest (n 1 ) index materials of the waveguide are Nb 2 O 5 and SiO 2 , respectively. There are no waveguided modes for n eff (= βc/ω) > n 2 , which corresponds to the 'cut-off' regime. In the range n 1 < n eff < n 2 (red-shaded area), several TIR modes are supported by the waveguide owing to reflections from various layers of the thick DBR stack.
The range n eff < n 1 corresponds to a photonic bandgap or a stopband defined by the reflection spectrum of the DBR (purple line). A BRW mode (blue line) is located in the stopband of the photonic structure. c, A comparison of guided mode parameters between the TE 0 TIR (pink) and BRW (orange) modes of the DBR-based structure (Fig. 2b) and the TE 0 TIR mode (red) of the reference device (Fig. 2a). The calculated parameters include the effective refractive indices (n eff ), the modal angles (θ m ), the mode confinement factors for the ccg-QD layer (Γ QD ) and the optical-loss coefficients (α loss ). Fig. 4 | The calculated mode parameters as a function of photon energy. The spectral dependences of the effective refractive index (a), the modal angle (b), the mode confinement factor for the ccg-QD layer (c) and the optical-loss coefficient (d) for the reference device (denoted 'Ref') and the TIR and BRW modes (denoted 'TE 0 ' and 'BRW', respectively) of the BRW device. Fig. 5 | Evolution of edge-emitted and surface-emitted EL spectra with increasing current density. a, Edge-emitted EL spectra of the BRW device as a function of j. The bottom subpanel shows 'raw' (not normalized) experimental spectra. The top subpanel shows the normalized spectra scaled so as to match the amplitude of the 1S spontaneous emission feature (owing to a high noise level below the ASE threshold, we present the measured spectra using two-band Gaussian fits). The normalized spectra clearly show the emergence of a sharp 1S ASE band. b, Similar sets of data for surface-emitted EL of the same device. The EL is dominated by a vertical cavity mode at 1.84 eV. It is red-shifted versus that of the device shown in Extended Data Fig. 2b, because of a larger thickness of the device tested in the present measurements. The feature around 1.77 eV results from light leakage through the DBR (see Supplementary Fig. 4). Unlike edge-emitted EL, the surface-emitted signal shows spectrally uniform growth with increasing j. a.u., arbitrary units. Fig. 10 | Reproducibility of characteristics of BRW ASE devices. a-d, Characteristics of four devices from four different chips (each chip contains eight devices). The data are organized into four columns, one column per device. The top row shows the j-V curves (black lines) and the V-dependent edge emission intensity (right axis, blue lines). The middle row shows EL spectra that feature prominent 1S and 1P ASE bands. The bottom row shows the dependence of the 1S bandwidth on j, which exhibits 'line narrowing' typical of the transition to the ASE regime. There is excellent consistency between all shown datasets. e, The analysis of device-to-device variability of a turn-on voltage (left), a 1S ASE threshold (middle) and a 1S ASE linewidth. The histograms were obtained on the basis of the measurements of 11 devices. The average values of the measured parameters and the standard deviations are indicated in the figure. In the case of the turn-on voltage and the ASE linewidth, the standard deviations are 22% and 25% of the average value, respectively. The larger deviation observed for the ASE threshold (about 52%) can be attributed to high sensitivity of j th,ASE to device-to-device variations in propagation losses and a varied degree of charging of an active QD layer. a.u., arbitrary units.
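As a rough consistency check on the numbers quoted above, the following sketch evaluates the drive current from the voltage drop across the 10-Ω sense resistor, the current density over the 30 × 300 μm² injection aperture, and the EQE expression given in the Methods. The sense-resistor voltage drop and the mean edge-emitted photon energy (taken here as ≈2.0 eV, near the 1S band) are assumed values for illustration; P_out = 170 μW and j ≈ 1,933 A cm⁻² are the figures reported in the text, and the resulting EQE refers only to the collected edge emission.

```python
# Illustrative consistency check; not the authors' analysis code.
E_CHARGE = 1.602176634e-19            # elementary charge, C
R_SENSE_OHM = 10.0                    # sense resistor described in the Methods
AREA_CM2 = 30e-4 * 300e-4             # 30 um x 300 um injection aperture, cm^2

def drive_current(v_drop_volts: float) -> float:
    """Current inferred from the voltage drop across the sense resistor."""
    return v_drop_volts / R_SENSE_OHM

def eqe(p_out_watts: float, photon_energy_ev: float, current_amps: float) -> float:
    """EQE = (P_out / h*nu) / (I / e), as defined in the Methods."""
    photon_rate = p_out_watts / (photon_energy_ev * E_CHARGE)
    electron_rate = current_amps / E_CHARGE
    return photon_rate / electron_rate

v_drop = 1.74                          # hypothetical oscilloscope reading, V
i = drive_current(v_drop)              # ~0.17 A
print(f"j = {i / AREA_CM2:.0f} A cm^-2")
print(f"EQE (collected edge emission only) = {eqe(170e-6, 2.0, i) * 100:.3f} %")
```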
2023-05-04T13:52:58.607Z
2023-05-01T00:00:00.000
{ "year": 2023, "sha1": "fa0834a5a1113bf5372e497681d45a1b4e1762b5", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Springer", "pdf_hash": "fa0834a5a1113bf5372e497681d45a1b4e1762b5", "s2fieldsofstudy": [ "Physics", "Materials Science" ], "extfieldsofstudy": [ "Medicine" ] }
54522559
pes2o/s2orc
v3-fos-license
Audiological Screening in People with Diabetes. First Results Introduction The relationship between diabetes mellitus (DM) and hypoacusia has been discussed since the work of Jordao in 1857. Type II diabetes is considered a prevalent age-related medical condition, resulting in subclinical pathological changes; it was estimated to affect about 9.6% of people in the USA (Cowie et al. 2006). Some studies have shown that the magnitude of hearing loss in patients with DM is related to the duration of the disease and age, and affects the auditory threshold at high frequencies (Frisina et al. 2006). A study by Tay HL et al. (1995) found a possible correlation between the duration of diabetes and hearing loss. The auditory system requires glucose and high-energy utilization for its complex signal processing. This suggests that the cochlea may also be a target organ for the ill effects of hyperglycemia (Cullen and Cinnamond, 1993). Increased glucose exposure, even for short periods, initiates a metabolic cascade that could disrupt the cochlea both anatomically and physiologically (Jorgensen, 1961). Hearing depends on small blood vessels and nerves of the inner ear that are affected by high blood sugar levels in diabetic patients. Outer hair cells modulate auditory reception in the inner ear; consequently, OAEs are commonly considered a useful index of cochlear function (Martin et al., 1990). Well-established complications of diabetes, such as retinopathy, nephropathy, and peripheral neuropathy, involve pathogenic changes to the microvasculature and sensory nerves (Acuña García, 1997). These conditions lead to common symptoms in diabetic people, namely tinnitus, dizziness, and sensorineural hearing impairment, typically bilateral and progressive.
Moreover, the specific pathologic effects of hyperglycemia and the complications associated with diabetes, such as microvascular and neuropathic damage, also affect the ear: sclerosis of the internal auditory artery, thickened capillaries of the stria vascularis, atrophy of the spiral ganglion, and demyelination of the eighth cranial nerve have been described among autopsied patients with diabetes (Lisowska et al., 2001). Several studies are present in the international literature, and their results are not unequivocal. Compromised cochlear function has been measured using evoked otoacoustic emissions, a non-invasive method to assess damage to the outer hair cells of the cochlea, among patients with diabetes relative to healthy controls (Lisowska et al., 2002). The aim of our study is to evaluate the topography of sensorineural hearing loss induced by diabetes, checking the sensitivity of audiological investigations in probing the damage. Methods Selected subjects were divided into two groups: 40 patients with diabetes mellitus type 2 and 20 healthy controls. Otoscopic examination was normal and the tympanogram was type A (i.e., without signs of ongoing inflammation) in both groups. We ruled out all subjects with a history of drugs able to influence vascular reactivity, hearing loss, any middle/inner ear pathology, or acoustic and cranial trauma; in addition, any medical diseases which affect or are suspected to affect hearing (e.g., untreated hypertension, noise exposure, hypercholesterolemia, or use of ototoxic drug therapy) were grounds for exclusion. The audiometric tests, performed with an Amplaid A 321 audiometer, conformed to the relevant specifications (acoustic test methods; basic pure tone and bone conduction threshold audiometry, International Organization for Standardization, Geneva, Switzerland). Impedance audiometry was performed for each tested ear. The tympanograms obtained were analysed for middle ear pressure and compliance values. The average threshold across the tested frequencies for each ear was evaluated. Otoacoustic emissions (OAE) are sounds recorded in the external acoustic meatus that derive from inner ear activity, specifically the movement of the outer hair cells. Testing of CEOAEs (Click Evoked OtoAcoustic Emissions) was accomplished using the ILO96 Otodynamics analyzer (V6 ILO OAE Research). Brainstem Auditory Evoked Potentials (BAEP) were recorded using the OtoAccess program. The electrode impedance for the ear canal electrode, as well as the surface electrodes, was typically less than 5 kΩ. Results Compared with healthy subjects, diabetic patients showed an increase of the perception threshold at high frequencies such as 4000-8000 Hz (P<0.01) (Table 1). We observed normal hearing in about 16% of subjects and hearing loss in 83%; in 10% of subjects aged 40 to 50 years, we already found some degree of hearing impairment (+25 dB). The results from stapedial reflexometry showed that P2 and P4 were the parameters that most frequently increased in patients with diabetes: P2 was certainly increased in 78% and probably increased in 8%, while P4 was certainly increased in 43% and probably increased in 11%. TEOAE reproducibility in both ears was significantly lower than in control subjects, observed at the mean frequencies of 2-4 kHz (Figure 1). DPOAE intensity was reduced in diabetic patients, as shown in Figure 2.
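The group difference in high-frequency thresholds reported above can be illustrated with a simple nonparametric comparison. The article does not state which test produced the P<0.01 value, so the sketch below, which uses synthetic threshold values rather than the study's measurements, is only an example of how such a comparison might be run.

```python
# Sketch of a high-frequency threshold comparison between groups.
# The threshold values below are synthetic placeholders, not study data.
from scipy.stats import mannwhitneyu

# Pure-tone thresholds at 4000-8000 Hz (dB HL), hypothetical values
controls  = [10, 15, 10, 20, 15, 10, 25, 15, 20, 10]
diabetics = [30, 40, 25, 45, 35, 30, 50, 40, 35, 45]

stat, p = mannwhitneyu(diabetics, controls, alternative="greater")
print(f"Mann-Whitney U = {stat}, one-sided P = {p:.4f}")
```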
The ABR showed the average latency values of wave V at stimulus intensities of 60 dB and 100 dB, and a significant increase in the I-V interpeak intervals, as shown in Table 2. In accordance with the data in the literature, although many studies have not arrived at an unequivocal conclusion, it is acknowledged that diabetes causes a change in the conduction speed of acoustic stimuli along the auditory pathways, regardless of the type of diabetes and blood glucose level. Analyzing the results collected in our study, a high incidence of impaired cochlear responses was found in diabetic patients. Conclusions Our report evaluates the association between diabetes and auditory dysfunction in patients, suggesting that diabetes could represent a risk factor for the auditory pathway. To better understand the impact of auditory alterations in diabetes, we studied both cochlear function, by recording TEOAEs and DPOAEs, and neural transmission along the auditory pathway, by recording ABRs. Adults with diabetes have a higher occurrence of hearing impairment than those without diabetes. Screening for this problem would allow early hearing damage to be prevented. From our preliminary results on the functional study of the brainstem acoustic pathways of diabetic patients, we believe that the first diagnostic approach should be impedance testing, which is easy and quick, whereas the ABR, which in our opinion should be recorded at no fewer than two stimulus rates, is a second-level test for subjects with normal responses in the other audiological tests. The TEOAEs and DPOAEs have allowed a better understanding of the microvascular cochlear properties. These findings indicate a central disturbance in the auditory pathway and stress the idea of microvascular complications associated with diabetes (Makishima and Tanaka, 1971). In conclusion, the combined use of different procedures for monitoring the central and peripheral portions of the auditory pathway in diabetic patients showed the existence of alterations in the cochlear micro-mechanisms and in the retrocochlear auditory pathway. Diabetes-related hearing loss has been described as a progressive, bilateral, sensorineural impairment with gradual onset, predominantly affecting the higher frequencies. We observed a stronger association between diabetes and high-frequency than low/mid-frequency hearing impairment. Indeed, the damage to cochlear cells is greatest at the basal turn, in accordance with the theory of tonotopicity. Compromised cochlear function has been demonstrated by a reduction of OAE values, which could be attributed to the vulnerability of the hair cells to blood glucose levels (Sha et al., 2001). Through the use of TEOAEs and DPOAEs, non-invasive techniques that give direct and objective information about outer hair cell activity (Friedman et al., 1975), our results have contributed to highlighting the presence of subclinical alterations of cochlear function in DM patients. Our study may contribute to focusing attention on the association between diabetes and hearing function, identifying an important public health problem that can be addressed. With the high prevalence of hearing impairment occurring among diabetic patients, screening for this condition may be justified. New studies could clarify whether alterations of the auditory system could represent a useful means of staging and an early marker of ear dysfunction. The severity and duration of the disease can contribute to the decline of the neuronal and vascular function of the auditory pathway.
Larger studies will further help confirm the association and elucidate the auditory benefit of diabetes therapy.
2018-04-03T01:43:58.821Z
2011-03-23T00:00:00.000
{ "year": 2011, "sha1": "caf0ff81e2a44debabc5ba887b5e27f65c585ffd", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.4081/audiores.2011.e8", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "caf0ff81e2a44debabc5ba887b5e27f65c585ffd", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
14991410
pes2o/s2orc
v3-fos-license
Comparison of Gastric Microbiota Between Gastric Juice and Mucosa by Next Generation Sequencing Method Background: Not much is known about the role of gastric microbiota except for Helicobacter pylori in human health and disease. In this study, we aimed to detect human gastric microbiota in both gastric mucosa and gastric juice by barcoded 454-pyrosequencing of the 16S rRNA gene and to compare the results from mucosa and juice. Methods: Gastric biopsies and stomach juices were collected from 4 subjects who underwent standard endoscopy at Seoul National University Bundang Hospital. Gastric microbiota of antral mucosa, corpus mucosa samples, and gastric fluids were analyzed by barcoded 454-pyrosequencing of the 16S rRNA gene. The analysis focused on bacteria, such as H. pylori and nitrosating or nitrate-reducing bacteria. Results: Gastric fluid samples showed higher diversity compared to that of gastric mucosa samples. The mean of operational taxonomic units was higher in gastric fluid than in gastric mucosa. The samples of gastric fluid and gastric mucosa showed different composition of phyla. The composition of H. pylori and Proteobacteria was higher in mucosa samples compared to gastric fluid samples (H. pylori, 66.5% vs. 3.3%, P = 0.033; Proteobacteria, 75.4% vs. 26.3%, P = 0.041), while Actinobacteria, Bacteroidetes, and Firmicutes were proportioned relatively less in mucosa samples than gastric fluid. However there was no significant difference. (Actinobacteria, 3.5% vs. 20.2%, P = 0.312; Bacteroidetes, 6.0% vs. 14.8%, P = 0.329; Firmicutes, 12.8% vs. 33.4%, P = 0.246). Conclusions: Even though these samples were small, gastric mucosa could be more effective than gastric fluid in the detection of meaningful gastric microbiota by pyrosequencing. INTRODUCTION Human gut, colonized by complex communities of microorganisms, plays essential roles in digestion, absorption of nutrients, 1 stimulation of intestinal epithelial regeneration, 2 and immune reactions. 3 Keeping these microbial communities in balance with host is important for health maintenance and disease prevention. 4 Before the discovery of Helicobacter pylori, human stomach environment was considered to be sterile for its acidic gastric environment suppressing the microorganisms from the oral cavity. The detection of H. pylori made a critical change in the existing perspectives that stomach is a sterile organ. After that, more attention was brought to microbial ecosystem of the stomach, along with the development of culture-independent analysis methods such as next-generation sequencing. 1,5,6 H. pylori infection is a risk factor for gastric cancer, which causes mucosal atrophy, intestinal metaplasia, and dysplasia. 7 Bacteria other than H. pylori alone or simultaneously with H. pylori may also influence atrophic gastritis regulating inflammatory response or N-nitroso compounds (NOC) production. 8,9 NOC can be produced from nitrite and secondary amines by nitrosating bacteria of stomach, which have nitrosating enzyme such as cytochrome cd1 nitrite reductase. 10 The product of NOC has been suggested to increase the risk of cancers. 9 With the development of uncultivated methods, studies focused on non-H. pylori microbiota in human stomach. 11 We have conducted a research on an appropriate cutoff value for determining the colonization of H. pylori by the pyrosequencing. We further investigated gastric microbiota and the differences in microbiota according to H. 
pylori infection status in the presence or absence of gastric cancer using a pyrosequencing method. 12,13 We assumed that gastric microbiota could be detected in gastric mucosa and gastric juice as well. However, bacteria recently swallowed through mouth and throat can influence stomach microbiota. Microbiota from oral cavity and esophagus can make it difficult to detect true pathogen in stomach. So we decided to get some information about gastric microbiota in both gastric mucosa and gastric juice. This study aimed to characterize the microbiota of gastric fluid compared with microbiota of gastric mucosa using a pyrosequencing method. This is a sub-group analysis of our previous study that evaluated the composition of human stomach microbiota according to the presence of stomach cancer and H. pylori. 12 Gastric and blood samples This study was approved by the ethics committee of Seoul National University Bundang Hospital (B-1112/141-007). Gastric biopsies and fluid samples were collected from 4 subjects who underwent standard endoscopy to screen for premalignant or malignant gastric mucosal lesions or received endoscopy due to dyspepsia. Gastric mucosal (antrum and corpus) biopsies and blood samples were obtained from each patient during endoscopy from October 2008 to March 2013 at Seoul National University Bundang Hospital. Ten biopsy specimens per subjects were obtained to perform H. pylori tests and pyrosequencing as our previous study. 12,13 The biopsy specimens were assessed for the presence of H. pylori and for the degree of inflammatory cell infiltration, atrophic gastritis, and intestinal metaplasia (hematoxylin and eosin staining). Histological features of gastric mucosa were recorded as the updated Sydney scoring system (i.e., 0 = none, 1 = slight, 2 = moderate, 3 = marked). 14 To avoid contamination, the endoscopes were washed and disinfected by immersing in a detergent solution containing 7% proteolytic enzymes and 2% glutaraldehyde. Sterilized gastroscopy forceps were used while gaining another biopsy from the same patient. The biopsies were stored at −80 o C. In patients who had clear gastric fluid, the gastric fluid was gained through a catheter connected to 5 mL tube during endoscopy. The positivity of H. pylori was confirmed by conventional tests for H. pylori infection: 1) Rapid urease test (Campylobacter-like organism test; Delta West, Bentley, WA, Australia), 2) Histologic examination (modified Giemsa staining), 3) Culture for H. pylori. Current H. pylori infection was positive from any of the former three tests. In order to distinguish if the infection is an existing one, the following two methods were used: Serum H. pylori immunoglobulin G (Genedia H. pylori ELISA; Green Cross Medical Science Co., Eumsung, Korea), and a history of H. pylori infection eradication treatment. If all the 5 tests were negative, we regarded the subject as H. pylori-negative. Using a Latex-enhanced Turbidimetric Immunoassay (Shima Laboratories, Tokyo, Japan), serum concentrations of pepsinogen I and II were evaluated, which are known to be associated with the severity of gastric atrophy. 15 Barcoded 454-pyrosequencing of the 16S rRNA gene The mucosal and gastric fluid samples from 4 subjects were subjected to pyrosequencing. Total genomic DNA was separated using a commercial kit (iNtRON Biotechnology, Seongnam, Korea). PCR amplification was done by taking primers targeting the V1 to V3 regions of the 16S rRNA gene with extracted DNA. 
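The H. pylori status assignment described above amounts to a simple decision rule: current infection if any of the rapid urease test, histology, or culture is positive, and H. pylori-negative only if all five tests (including serum IgG and eradication history) are negative. A minimal sketch of that logic is given below; the intermediate "past infection" label used for serology- or history-only positives is an assumption for illustration, not a category named in the text.

```python
# Sketch of the H. pylori status rule described in the Methods above.

def h_pylori_status(rapid_urease: bool, histology: bool, culture: bool,
                    serum_igg: bool, eradication_history: bool) -> str:
    # Current infection: any of the three direct tests positive
    if rapid_urease or histology or culture:
        return "current infection"
    # Negative: all five tests negative
    if not any([rapid_urease, histology, culture, serum_igg, eradication_history]):
        return "negative"
    # Otherwise only serology or eradication history is positive
    # (treated here as past exposure; hypothetical label)
    return "past infection"

print(h_pylori_status(False, True, False, True, False))    # -> current infection
print(h_pylori_status(False, False, False, False, False))  # -> negative
```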
For bacterial amplification, barcoded primer of 9F (5'-CCTATCCCC-TGTGTGCCTTGGCAGTC-TCAG-AC-AGAGTTTGATCMTGGCTCA G-3'; underlined sequence indicates the target region primer) and 541R (5'-CCATCTCATCCCTGCGTGTCTCCGAC-TCAG-X-AC-ATTA-CCGCGGCTGCTGG-3'; 'X' presents the unique barcode for each subject) (http://oklbb.ezbiocloud.net/content/1001) as previous study shows. The sequencing was performed at Chunlab (Seoul, Korea) with GS Junior Sequencing system, the modified laboratory benchtop form of 454 sequencing systems (Roche, Branford, CT, USA) as stated in the manufacturer's directions. Pyrosequencing data analysis The primary analysis was conducted as described above. Reads taken from different samples were classified by unique barcodes of each PCR product. After identifying the target region in barcoded primers (9F or 541R), all of the linked sequences including adapter, barcode, and linker were eliminated. Low quality sequences such as reads containing two or more indefinite nucleotides, reads with a low quality score (average score < 25), or reads shorter than 300 bp, were eliminated. Potential chimeric sequences were confirmed by the Bellerophon formula, which compares the BLASTN search conclusions between the forward half and reverse half sequences. 16 After removing the chimeric sequences, the taxonomic sorting of each read was assigned against the EzTaxon-e database (http://eztaxone.ezbiocloud.net), 17 which has the 16S rRNA gene sequence of type strains that have valid published names and representative species level phylotypes of either cultured or uncultured entries in the GenBank database with complete hierarchical taxonomic classification from the phylum to the species. Phylogenetic trees were not created as we assigned reads into operational taxonomic units (OTUs) according to BLAST results. The raw 16S rRNA gene sequence originated from our study was deposited in National Center for Biotechnology Information's Sequence Read Archive (GSE61493). Evaluation of species richness and diversity To compare species richness between samples of different sizes, rarefaction curve, and diversity indices such as abundancebased coverage estimator, Chao1 estimator, and Jackknife estimator. Simpson diversity index and Shannon diversity index were estimated in the CLcommunity program (Chunlab). Random subsampling was conducted to equalize the read size of samples to compare the different read size within samples. To compare the OTUs between samples, shared OTUs were obtained with the XOR analysis of the CLcommunity program. Statistical analysis Descriptive statistics were reported as mean ± SD, and confidence intervals were computed as two-tailed using 95% coverage. Categorical variables were described as frequencies and proportions. Comparisons between continuous parameters were performed by the t-test and Mann-Whitney test. Statistical analyses were done by PASW ver. 18.0 (IBM Co., Armonk, NY, USA) and P-values < 0.05 were accepted as statistically significant. Patients Baseline characteristics of clinical and pyrosequencing results of gastric antral mucosal and fluid samples are shown in Table 1. Four subjects (2 gastric cancer, 1 gastritis, and 1 control) were included in this study. The mean age of subjects was 48.7 years (38-59 years). Subject 1 and 3 were male and subject 2 and 4 were female. Sample pH values varied between 1.0 and 7.4 (mean: 2.79). Three samples were found to contain H. pylori, showing positive results. 
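The read-filtering criteria described in the Methods above (two or more ambiguous nucleotides, mean quality score below 25, or length under 300 bp) can be expressed compactly. The sketch below uses made-up read records and is only an illustration of those criteria, not the CLcommunity/EzTaxon pipeline itself.

```python
# Sketch of the read quality-control criteria described above.
# Read records are assumed to be (sequence, per-base quality scores) tuples.

def passes_qc(seq: str, quals: list[int]) -> bool:
    if seq.count("N") >= 2:                 # two or more ambiguous bases
        return False
    if sum(quals) / len(quals) < 25:        # low average quality score
        return False
    if len(seq) < 300:                      # shorter than 300 bp
        return False
    return True

reads = [
    ("ACGT" * 80, [30] * 320),              # 320 bp, mean Q30 -> kept
    ("ACGTN" * 80, [30] * 400),             # many ambiguous bases -> discarded
    ("ACGT" * 60, [30] * 240),              # 240 bp -> discarded
]
kept = [r for r in reads if passes_qc(*r)]
print(f"{len(kept)} of {len(reads)} reads retained")
```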
We checked neutrophil infiltration, monocyte infiltration, and intestinal metaplasia to evaluate the degree of gastritis. We also evaluated the pepsinogen I/II ratio, which reflects gastric atrophy. The mean pepsinogen I/II ratio was 3.3. Gastric fluid vs. gastric mucosa Though the difference was not statistically significant, the mean total number of reads was lower in gastric fluid samples than in gastric mucosa samples. However, the mean number of OTUs was higher in gastric fluid (Fig. 1A). Generally, gastric fluid samples showed higher diversity than gastric mucosa samples (Fig. 1B). At the phylum level, members of Firmicutes, Proteobacteria, Actinobacteria, Fusobacteria, and Bacteroidetes were identified. The difference in the composition of phyla between gastric mucosa and fluid samples is shown in Fig. 1B. DISCUSSION The stomach plays an important role in maintaining gastrointestinal (GI) health, as a barrier against ingested infectious disease agents of the lower GI tract. 18 In healthy subjects, swallowed pathogens are inactivated by gastric fluid, which contains both hydrochloric acid and proteolytic enzymes. 19 Atrophic gastritis, gastric surgery, or drugs that inhibit acid secretion can cause hypochlorhydria. 18 Decreased gastric acid secretion is responsible for an increased risk of infection. 20 There have been few studies related to gastric juice. von Rosenvinge et al. 21 reported the microbiota composition of gastric fluid in relation to various human host parameters, including immune status, gastric fluid pH, use of proton pump inhibitors, and antibiotic medications. Previous research has primarily focused on the microbiota of gastric mucosal biopsies and applied only DNA-based methodologies, which are unable to distinguish between transcriptionally active, inactive, or dead bacteria. Analysis of the 16S rRNA gene content of microbial samples after amplification by PCR has changed the characterization of microbial communities. 5,11,22 Using a 16S rRNA transcript amplicon sequencing strategy, transcriptionally active RNA microbiota can be differentiated from the total DNA microbiota composition. 23 Our results reveal that human gastric fluid harbors a diverse microbiota dominated by Fusobacteria, Actinobacteria, Bacteroidetes, Firmicutes, and Proteobacteria, including H. pylori, demonstrating a similar overall composition at the phylum level to that previously found in other studies. 5,20,24 Higher microbial diversity of gastric fluid compared to gastric mucosa was observed in H. pylori (+) patients; however, this pattern was not seen in the H. pylori (−) patient. The number of sequencing reads was lower in the fluid samples than in the mucosa. That is, the microorganisms in the gastric fluid were diverse but few in number compared with the mucosa. In addition, because some bacteria in the gastric fluid come from the oral cavity and esophagus, it can be concluded that such bacteria simply pass through the stomach rather than actually inhabiting it. Therefore, pyrosequencing of mucosa could reflect more accurate information about the gastric microbiota. This conclusion is also supported by a number of studies that used pyrosequencing to examine the microbiota of gastric mucosa. 1,5 However, this study has a limitation due to its small sample size, and further research with more samples is needed.
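The Shannon and Simpson indices underlying the diversity comparison above can be computed directly from OTU read counts. The sketch below uses hypothetical counts chosen only to illustrate why an evenly spread community (as observed for gastric fluid) scores higher than one dominated by a single taxon (as for H. pylori-rich mucosa); it is not the CLcommunity implementation.

```python
# Sketch of Shannon and Simpson diversity computed from OTU read counts
# (hypothetical counts, for illustration only).
import math

def shannon(counts):
    n = sum(counts)
    return -sum((c / n) * math.log(c / n) for c in counts if c > 0)

def simpson(counts):
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts)

gastric_fluid  = [120, 95, 80, 60, 45, 30, 20, 10, 5, 5]   # many, evenly spread OTUs
gastric_mucosa = [400, 30, 20, 10, 5]                       # dominated by one OTU

for name, counts in [("fluid", gastric_fluid), ("mucosa", gastric_mucosa)]:
    print(f"{name}: Shannon = {shannon(counts):.2f}, Simpson = {simpson(counts):.2f}")
```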
2016-05-15T16:24:49.158Z
2016-03-01T00:00:00.000
{ "year": 2016, "sha1": "47acf4532ca0b04ed28128a1db13f17f5ffb6001", "oa_license": "CCBYNC", "oa_url": "http://www.jcpjournal.org/journal/download_pdf.php?doi=10.15430/JCP.2016.21.1.60", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "47acf4532ca0b04ed28128a1db13f17f5ffb6001", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
236319650
pes2o/s2orc
v3-fos-license
TGR5 Expression Is Associated with Changes in the Heart and Urinary Bladder of Rats with Metabolic Syndrome Adipose-derived cytokines may contribute to the inflammation that occurs in metabolic syndrome (MetS). The Takeda G protein-coupled receptor (TGR5) regulates energy expenditure and affects the production of pro-inflammatory biomarkers in metabolic diseases. Etanercept, which acts as a tumor necrosis factor (TNF)-α antagonist, can also block the inflammatory response. Therefore, the interaction between TNF-α and TGR5 expression was investigated in rats with high-fat diet (HFD)-induced obesity. Heart tissues isolated from the HFD-induced MetS rats were analyzed. Changes in TGR5 expression were investigated with lithocholic acid (LCA) as the agonist. Betulinic acid (BA) was used to activate TGR5 in urinary bladders. LCA was more effective in the heart tissues of HFD-fed rats, although etanercept alleviated the function of LCA. STAT3 activation and higher TGR5 expression were observed in the heart tissues collected from HFD-fed rats. Thus, cardiac TGR5 expression is promoted by HFD through STAT3 activation in rats. Moreover, the urinary bladders of female rats fed a HFD showed a low response, which was reversed by etanercept. Relaxation by BA in the bladders was more marked in HFD-fed rats. The high TGR5 expression in HFD-fed rats was characterized using an mRNA assay, and the increased cAMP levels were found to be stimulated by BA in the isolated bladders. Therefore, TGR5 expression increases with a HFD in both the hearts and urinary bladders. Collectively, cytokine-mediated TGR5 activation was observed in the hearts and urinary bladders of rats. Introduction Takeda G protein-coupled receptor (TGR5) belongs to the G protein-coupled receptor (GPCR) superfamily [1]. In addition to the heart [2], TGR5 is expressed in other organs and is amenable to being targeted by bile acids in both healthy and diseased states [3]. TGR5 is a metabolic regulator, which is also involved in inflammatory responses [4]. TGR5 activation induces cytoprotective changes in the heart [5,6]. At toxic concentrations, bile acid may stimulate cholinergic M2 receptors, which cause negative effects on myocardial contractility and heart rate [7]. Therefore, TGR5 activation is introduced to provide benefits to cardiac function [8]. Recently, it has been documented that cardiac TGR5 expression is promoted in type-1 diabetic rats [9], mainly due to hyperglycemia. In the present study, male rats were assessed for cardiac performance, and female rats received catheter insertion from the urethra into the bladder to obtain a cystometrogram. For each experiment, the rats were divided into three groups (n = 12 in each group): (i) Rats fed normal chow as the normal control; (ii) HFD-fed rats treated with vehicle as the model control; and (iii) etanercept-treated HFD-fed rats. HFD-fed rats were allowed ad libitum intake of a diet containing 60% fat (wt/wt) (#58Y1; TestDiet, Richmond, IN, USA). The control group rats were fed standard rat chow (5% fat, #5001; LabDiet, St. Louis, MO, USA). After 6 weeks, the body weight of the rats was measured to confirm obesity: The HFD-induced group was markedly different from the normal group (p < 0.05). In the etanercept treatment group, 0.8 mg/kg of etanercept (Enbrel®; Wyeth Europa, Maidenhead, UK) was subcutaneously administered per week, divided into six or seven equal daily injections [27], for 4 weeks.
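For orientation, the weekly etanercept dose described above translates into small daily injections. The sketch below performs that arithmetic for hypothetical body weights; individual weights at dosing are not reported in the text.

```python
# Sketch of the etanercept dosing described above: 0.8 mg/kg per week given
# subcutaneously, divided into six or seven equal daily injections, for 4 weeks.
# The body weights below are hypothetical examples, not study values.

def daily_dose_mg(body_weight_kg: float, weekly_dose_mg_per_kg: float = 0.8,
                  injections_per_week: int = 7) -> float:
    return body_weight_kg * weekly_dose_mg_per_kg / injections_per_week

for bw in (0.45, 0.60):   # hypothetical rat body weights in kg
    print(f"{bw * 1000:.0f} g rat: {daily_dose_mg(bw) * 1000:.1f} ug per injection")
```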
Vehicle treatment was performed in the same manner as that performed with sterile water, the solvent for etanercept. After the end of treatment, the fasting rats were anesthetized with 2% isoflurane, and blood samples were collected from the femoral artery of eight rats in each group. Under 4% isoflurane, the male rats were then sacrificed. The hearts of rats were rapidly excised and rinsed by immersion in ice-cold Krebs-Henseleit buffer (KHB) for the Langendorff assay. Plasma prepared from whole blood after centrifugation was stored at -80 • C until the analyses. The heart tissues of the other male rats (n = 4) in each group were dissected immediately, washed with ice-cold saline, dried, and weighed. The isolated tissues were stored at −80 • C until further analysis. Measurement of Blood Biomarkers The levels of plasma pro-inflammatory cytokines TNF-α and IL-6 were estimated using enzyme-linked immunosorbent assay kits (Sigma-Aldrich, St. Louis, MO, USA). The plasma biomarkers of hepatic function, including aspartate aminotransferase (AST) and alanine aminotransferase (ALT), were evaluated using commercial kits (BioVision, Milpitas, CA, USA), according to the manufacturer's protocol. Cardiac Performance in Langendorff Apparatus Measurements of cardiac performance were carried out using our previous method [28]. The rats were sacrificed under anesthesia induction with 3% isoflurane, and their hearts were excised rapidly and rinsed by immersion in ice-cold KHB. The hearts were mounted in the Langendorff apparatus and continuously perfused with warm (37 • C) and oxygenated (5% CO 2 in O 2 ) KHB at a constant pressure of 70 mmHg. The organ chamber temperature was maintained at 37 • C during the experiment. A water-filled latex balloon was inserted through an incision made in the left atrium into the left ventricle through the mitral valve and adjusted to a left ventricular end-diastolic pressure (LVEDP) of 5-7 mmHg during the initial equilibrium. The distal end of the catheter was connected to an iWorx 214 TM data acquisition system (LabScribe 2.0 software, iWorx Systems, Inc., Dover, NH, USA) through a pressure transducer for continuous recording. In each experiment, after allowing stabilization for 30 min through perfusion, the test agents were added to the KHB for further analysis. The female rats in another set received the same treatment as that of the male rats described above, and eight rats from each of the three groups were used for studying the changes in the urinary bladder using a cystometrogram, as described below. Cystometrogram of Urinary Bladder After the bladders were emptied in anesthetized rats, a urethral catheter was placed to fill the bladder, and saline was infused at a steady rate (0.08 mL/min) to measure the bladder pressure, as described previously [29]. Pressure and force signals were detected by connecting to an iWorx 214 TM data acquisition system, as mentioned above, through a pressure transducer for continuous recording. The cystometrogram parameters, including peak micturition pressure and duration of contractions, were recorded according to a previous method [30]. The peak micturition pressure was defined as the maximum pressure (cmH 2 O), and duration was defined as the time (s) of the intervals during micturition. Betulinic acid (Sigma-Aldrich, St. 
Louis, MO, USA) at a dose of 50 mg/kg administered through an intraperitoneal injection (ip) was identified to be effective on pressure and duration in preliminary experiments; it was then applied for comparing the parameters in the three groups. The activity of betulinic acid was calculated as the ratio (%) of decreased micturition pressure over non-treated pressure for performing a comparison among the three groups. The urinary bladders in another four female rats from each group were isolated under 4% isoflurane, and the isolated fresh tissues were used for the measurement of cAMP, as described below. The other tissues were washed in ice-cold saline and stored at −20 • C until further analysis. Measurement of Intracellular cAMP Levels in Isolated Urinary Bladders Urinary bladder tissues were incubated with phosphodiesterase inhibitors (IBMX 5 µM, Sigma-Aldrich, St. Louis, MO, USA) for 30 min and treated with betulinic acid (5 µM) for another 1 h. Sample lysates were collected, and intracellular cAMP levels were measured using a cAMP Assay Kit (Abcam, Cambridge, MA, USA). Differences between treatment with betulinic acid or no treatment were indicated as the cAMP levels increased in each group. Real-Time Quantitative PCR According to our previous report [31], the mRNA levels of the signal transducer were determined. In brief, total RNA was extracted using TRIzol reagent (Thermo Fisher, Carlsbad, CA, USA) from cell lysates. Total RNA (200 ng) was reverse-transcribed into cDNA with random hexamer primers (Roche Diagnostics GmbH, Mannheim, Germany). PCR experiments were performed using a LightCycler (Roche Diagnostics GmbH, Mannheim, Germany). The concentration of each product was calculated from a corresponding standard curve. The relative gene expression was subsequently indicated as the ratio of the target gene level to that of β-actin. The primers for each factor were as follows: Statistical Analysis The results are indicated as the mean ± SEM of each group. The results were analyzed by two-way analysis of variance, followed by Dunnett's post-hoc analysis, using SPSS analysis software (SPSS Inc., Chicago, IL, USA). A p-value of <0.05 was considered statistically significant. Role of TNFα in MetS Induced in HFD-Fed Rats Rats fed a 60% HFD for 6 weeks were compared with rats that received normal chow. As shown in Figure 1, the body weight was significantly increased in the HFD-fed rats. Additionally, the plasma lipids, including total cholesterol and triglyceride, were also increased in the HFD-fed rats. However, the plasma high-density lipoprotein cholesterol level was reduced in the HFD-fed rats; otherwise, the levels of biomarkers of hepatic function, including AST and ALT, were higher in the HFD-fed rats. These changes were widely observed in the rats with MetS. Etanercept (Enbrel) was then administered to block TNF-α activity, as described previously [27], in another group of HFD-fed rats. The changes in metabolic disorders were markedly alleviated by etanercept, as shown in Figure 1. Additionally, MetS was also confirmed in the female rats fed a HFD in the same manner. Similarly, blockade of TNF-α using etanercept (Enbrel) reversed the changes in the HFD-fed female rats. Therefore, they were used to assay the functions of the urinary bladder. Changes in Cardiac Performance in HFD-Fed Rats Spontaneous contractility in the Langendroff apparatus was markedly reduced in the hearts of the HFD-fed rats compared to that in the hearts of the normal rats. LCA (Sigma-Aldrich, St. 
Louis, MO, USA) stimulated contractile responses ( Figure 2a) and attenuated beating rates (Figure 2b) in the hearts isolated from the normal rats. Notably, the effect of LCA on cardiac performance was more significant in the hearts isolated from the HFD-fed rats than in those isolated from the control rats. However, the cardiac performance in the hearts isolated from the HFD-fed rats treated with etanercept exhibited less response than that in the hearts of the vehicle-treated group. The levels of plasma cytokines, including TNF-α ( Figure 2c) and IL-6 (Figure 2d), were also increased in the HFD-fed rats compared to those in the normal rats. This effect was also reduced in the HFD-fed rats treated with etanercept. Changes in TGR5 Expression in the Hearts As shown in Figure 3, TGR5 expression at either the protein (Figure 3a) or the mRNA level (Figure 3d) was increased in the hearts of the HFD-fed rats compared to that in the hearts of the normal rats. Notably, changes in cardiac TGR5 expression were less marked in the HFD-fed rats treated with etanercept than in those treated with the vehicle only.
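The mRNA comparisons in Figure 3d follow the quantification scheme given in the Methods: each product is read off its own standard curve and then expressed relative to β-actin. A minimal sketch of that calculation is shown below; the standard-curve points and Ct values are hypothetical placeholders, not study data.

```python
# Sketch of the relative-expression calculation described in the Methods:
# quantify each product against a standard curve (Ct vs. log10 concentration),
# then report the target level as a ratio to beta-actin.
import numpy as np

def conc_from_ct(ct, std_log10_conc, std_ct):
    slope, intercept = np.polyfit(std_log10_conc, std_ct, 1)  # Ct = a*log10(C) + b
    return 10 ** ((ct - intercept) / slope)

std_log10 = np.array([1, 2, 3, 4, 5])                 # log10 copies of standard
std_ct    = np.array([30.1, 26.8, 23.4, 20.1, 16.7])  # hypothetical Ct values

tgr5_level  = conc_from_ct(24.0, std_log10, std_ct)   # hypothetical sample Ct
actin_level = conc_from_ct(18.5, std_log10, std_ct)
print(f"TGR5 / beta-actin = {tgr5_level / actin_level:.3f}")
```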
This finding suggests that etanercept may alleviate HFD-induced changes in terms of TGR5 expression. Additionally, as shown in Figure 3c, similar changes were also observed in the expression of cardiac STAT3. Changes in TGR5 expression seem to be associated with STAT3 activation (Figure 3b) [8]. However, the mRNA levels of the genes related to cardiac hypertrophy, including brain/B-type natriuretic peptides (BNPs) (Figure 3e) and β-myosin heavy chain (β-MHC) (Figure 3f), remained unchanged in the hearts of the
As shown in Figure 4e, the mRNA levels of TGR5 were markedly higher in the urinary bladder of the HFD-fed rats than in that of the normal rats, while etanercept reversed these changes in TGR5 expression. Moreover, an increase in cAMP levels caused by betulinic acid (5 µM) through TGR5 activation was similarly produced in the three groups (Figure 4f). Collectively, TGR5 expression is promoted in the urinary bladder of HFD-fed rats. Life 2021, 11, x FOR PEER REVIEW 9 of 14 are shown. (e) The mRNA levels of genes related to cardiac hypertrophy, including BNPs and (f) β-MHC, were also compared. The data in each column are shown as the mean ± SEM (n = 4). * p < 0.05 vs. the control group; # p < 0.05 vs. the vehicle-treated group. Changes of TGR5 Expression in Urinary Bladder of HFD-Fed Rats To understand whether or not the increased TGR5 expression is specific to the heart, we used female rats to investigate the changes in the urinary bladder, as described in a previous report [32]. Generally, a higher bladder pressure indicates lower bladder compliance. The maximum pressure significantly decreased in the HFD-fed rats, as shown in Figure 4a. Additionally, the micturition intervals in the HFD-fed rats were also significantly longer than those in the control rats. Notably, the blockade of TNF-α with etanercept reversed the urinary dysfunction in the HFD-fed female rats. Because LCA was less effective in urinary function [26], we applied another agonist of TGR5, betulinic acid, as described previously [33]. In the preliminary experiments, betulinic acid could induce the relaxation of the urinary bladder in normal rats that was abolished by pretreatment with triamterene at the dose effective to block TGR5 [34]. Therefore, the effects induced by betulinic acid were considered as the results of TGR5 activation. Betulinic acid (50 mg/kg, ip) may reduce voiding contraction and micturition frequency in HFD-fed rats. Notably, the relaxation induced by betulinic acid in the urinary bladder was more marked in the HFDfed rats than in the normal rats (Figure 4b). Similarly, a delay of micturition intervals by betulinic acid was also observed in the same manner (Figure 4c). Blockade of TNF-α with etanercept markedly alleviated these changes in the urinary bladder. However, the relaxation induced by betulinic acid was more significant in the HFD-fed rats (Figure 4d). Additionally, the mRNA levels of TGR5 in the isolated urinary bladder were also determined. As shown in Figure 4e, the mRNA levels of TGR5 were markedly higher in the urinary bladder of the HFD-fed rats than in that of the normal rats, while etanercept reversed these changes in TGR5 expression. Moreover, an increase in cAMP levels caused by betulinic acid (5 µM) through TGR5 activation was similarly produced in the three groups ( Figure 4f). Collectively, TGR5 expression is promoted in the urinary bladder of HFD-fed rats. . Basic contractility in each group, as seen from the cystometrogram, was compared (a). TGR5 activated by betulinic acid (50 mg/kg, ip) or not (vehicle) for changes in voiding contraction (b) and micturition frequency (c) among the three groups, as seen from the cystometrogram, were also compared. The voiding contraction decreased by betulinic acid, calculated as the ratio (%) of the non-treated contraction, was indicated as the activity of betulinic acid for a comparison among the three groups (d). The results in each column are indicated as the mean ± SEM (n = 8 per group). 
Additionally, changes in the mRNA levels of TGR5 expression in isolated urinary bladders of each group were compared (e). Increased cAMP levels by betulinic acid (5 µM) through TGR5 activation were also compared among the three groups (f). The results in each column are indicated as the mean ± SEM (n = 4 per group). * p < 0.05 vs. the control group; # p < 0.05 vs. the vehicletreated group. Discussion In the present study, TGR5 expression in the heart or urinary bladder was noted to be increased in HFD-fed rats. This finding is consistent with the hyperglycemia-induced changes in the hearts of diabetic rats [9]. Insulin resistance is exacerbated by an increase in inflammation, along with a parallel increase in the activation of TGR5 [35], which may play a protective role in obese rats. TGR5 was identified in vivo, which may be targeted by bile acids [5] in both the healthy and diseased states [9]. TGR5 activation is beneficial to cardiac function [8] and MetS. TGR5 activation can ameliorate insulin resistance through the cAMP/PKA pathway in skeletal muscles [36]. However, at high concentrations, bile acids may stimulate cholinergic M2 receptors, which cause negative effects on myocardial contractility and heart rate [7]. In the present study, we demonstrated a novel view that TGR5 expression in the heart or urinary bladder is increased in HFD-fed rats. Basic contractility in each group, as seen from the cystometrogram, was compared (a). TGR5 activated by betulinic acid (50 mg/kg, ip) or not (vehicle) for changes in voiding contraction (b) and micturition frequency (c) among the three groups, as seen from the cystometrogram, were also compared. The voiding contraction decreased by betulinic acid, calculated as the ratio (%) of the non-treated contraction, was indicated as the activity of betulinic acid for a comparison among the three groups (d). The results in each column are indicated as the mean ± SEM (n = 8 per group). Additionally, changes in the mRNA levels of TGR5 expression in isolated urinary bladders of each group were compared (e). Increased cAMP levels by betulinic acid (5 µM) through TGR5 activation were also compared among the three groups (f). The results in each column are indicated as the mean ± SEM (n = 4 per group). * p < 0.05 vs. the control group; # p < 0.05 vs. the vehicle-treated group. Discussion In the present study, TGR5 expression in the heart or urinary bladder was noted to be increased in HFD-fed rats. This finding is consistent with the hyperglycemia-induced changes in the hearts of diabetic rats [9]. Insulin resistance is exacerbated by an increase in inflammation, along with a parallel increase in the activation of TGR5 [35], which may play a protective role in obese rats. TGR5 was identified in vivo, which may be targeted by bile acids [5] in both the healthy and diseased states [9]. TGR5 activation is beneficial to cardiac function [8] and MetS. TGR5 activation can ameliorate insulin resistance through the cAMP/PKA pathway in skeletal muscles [36]. However, at high concentrations, bile acids may stimulate cholinergic M2 receptors, which cause negative effects on myocardial contractility and heart rate [7]. In the present study, we demonstrated a novel view that TGR5 expression in the heart or urinary bladder is increased in HFD-fed rats. First, we confirmed the cardiac functional response to TGR5 using the Langendorff apparatus. In the hearts isolated from normal rats, LCA enhanced cardiac contractility and decreased the heart rate due to TGR5 activation. 
TGR5 transduces signals through Gs protein-mediated cAMP accumulation and can modulate cardiac functions [1]. Moreover, the LCA-induced increase in contractility was more marked in the hearts isolated from HFD-fed rats than in hearts isolated from normal rats, indicating an increased sensitivity of TGR5 in the hearts of HFD-fed rats. However, spontaneous contractility was found to be reduced in the hearts isolated from HFD-fed rats compared to those isolated from normal rats; this reduction is likely a consequence of HFD-induced cardiac damage, as discussed below. Moreover, we found that cardiac TGR5 expression was indeed increased in the hearts of HFD-fed rats at the protein and mRNA levels, using Western blotting analysis and qPCR, respectively. Therefore, cardiac TGR5 expression was identified to be promoted in HFD-fed rats. High-fat consumption is specifically known to be a causal factor in the development of cardiac damage [37]. The reductions in spontaneous contractility in HFD-fed rat hearts are consistent with this view. Cardiac damage is an inflammatory injury dependent on oxidative stress [38]. Additionally, a HFD induces insulin resistance and increases TNF-α expression. TNF-α promotes neutrophil-mediated tissue injury and amplifies inflammatory cascades by activating macrophages and other types of cells [39]. Functionally, TNF-α exerts a negative inotropic effect to inhibit myocardial contractility and lower blood pressure [40]. In the present study, the plasma levels of TNF-α and other cytokines markedly increased in HFD-fed rats. Moreover, we used etanercept at an effective dose to inhibit TNF-α in rats [23] as the negative control. The different changes in HFD-fed rats receiving etanercept may indicate the role of TNF-α. Notably, the changes in HFD-fed rat hearts were reversed by etanercept, as determined by a Langendorff assay. Therefore, the cardiac injury induced in HFD-fed rats seems to be associated with TNF-α, which is consistent with the findings of a previous report [41]. Moreover, in the current study, etanercept reversed cardiac TGR5 expression at both the protein and mRNA levels in HFD-fed rats. STAT3 is a cytoplasmic transcription factor that transmits extracellular signals to the nucleus [42]. Activated STAT3 in the nucleus binds to specific DNA promoter sequences to regulate gene expression [43]. In the current study, cardiac TGR5 expression was promoted in parallel with STAT3 activation. Interestingly, etanercept also inhibited the activation of STAT3 in HFD-fed rat hearts. TNF-α inhibitors such as etanercept and adalimumab can downregulate p-STAT3 expression in human Th17-polarized cells [37]. STAT3 activation provides an important link between inflammation and cardiac fibrosis [38]. STAT3 accumulation in the nucleus can increase the expression of the pro-inflammatory cytokine IL-6, which is involved in the pathogenesis of various chronic inflammatory diseases [39]. In the current study, the plasma TNF-α and IL-6 levels that increased in HFD-fed rats were also found to be reduced by etanercept. Therefore, etanercept-mediated inhibition of TNF-α may result in downregulation of the IL-6/JAK/STAT3 pathway in HFD-fed rats. Additionally, STAT3 accumulation in the nucleus can also induce the expression of IL-6 and other pro-inflammatory genes [44]. Moreover, TNF-α can induce cardiac apoptosis, which is also involved in ventricular remodeling [41]. Therefore, the changes in TGR5 expression need to be investigated further.
Inflammation increases STAT3 activation, which contributes to the pathophysiology of tissue injury [45]. STAT3 activation and an increase in the ratio of phosphorylated STAT3 (p-STAT3) to STAT3 may promote nuclear translocation. Moreover, STAT3 is phosphorylated at Y705 and S727 during cytokine-induced STAT3 activation [46]. Therefore, in the current study, we focused on changes in the ratio of p-STAT3 to STAT3, which is indicative of STAT3 activation. Interestingly, STAT3 activation was enhanced along with the promotion of TGR5 expression in the heart. Our data also demonstrated that the increased ratio of p-STAT3 to STAT3 was reversed by etanercept in the hearts of HFD-fed rats. Mediation of STAT3 activation in the increased expression of cardiac TGR5 in HFD-fed rats can thus be considered. Additionally, to understand whether or not the increased TGR5 expression was specific to the heart, we used female rats to investigate the changes in the urinary bladder, as described previously [29]. The changes were the same as those observed in the hearts. Bladder function was impaired in HFD-fed rats, as shown in the cystometrogram. This effect was reversed by etanercept at an effective dose to inhibit TNF-α in rats [22], indicating the role of cytokines in the changes in the urinary bladder of HFD-fed rats. The high expression of TGR5 in the urinary bladder was also a characteristic feature of these rats. We used betulinic acid in place of LCA for the activation of TGR5 in the urinary bladder. Betulinic acid is a natural triterpene that has been demonstrated to activate TGR5 [33]. Notably, relaxation of the urinary bladder by betulinic acid was more marked in HFD-fed rats than in normal rats, as determined from the cystometrogram. This view was supported by the increased mRNA levels of TGR5 in urinary bladders isolated from HFD-fed rats. Moreover, TGR5 is a member of the family of GPCRs that may increase cAMP levels [1]. We found that betulinic acid induced an increase in cAMP more markedly in the urinary bladders isolated from HFD-fed rats than in those isolated from normal rats. Therefore, TGR5 expression is increased in the urinary bladder during metabolic disorders. An increase in TGR5 expression could be a compensatory response against the lipotoxicity observed in HFD-mediated damage. However, this hypothesis needs further investigation in the future. The main limitation of this study is that the effect of etanercept in normal rats was not compared. The relationships between etanercept and TGR5 expression in a MetS model require further investigation.

Conclusions

We found that TGR5 expression is elevated in the heart and urinary bladder of HFD-fed rats. Notably, etanercept is effective in ameliorating inflammation and decreases TGR5 expression in HFD-induced obese rats. These results have implications for dysfunction in the heart or urinary bladder, particularly the association between inflammatory cytokines and TGR5 activation, which provides the benefit of reversing the dysfunction. The development of tissue-specific drugs that target TGR5 expression could provide benefits for interventions in metabolic disease.
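The activation index used in this study, the p-STAT3/STAT3 ratio summarised as mean ± SEM and tested against the control and vehicle groups (the * and # of the figure legends), can be illustrated with a short numerical sketch. The paper's exact statistical test is not reproduced here, so the one-way ANOVA with pairwise Welch t-tests below, and all the densitometry values, are illustrative assumptions only:

```python
import numpy as np
from scipy import stats

# Hypothetical densitometry readouts (arbitrary units), n = 4 rats per group,
# mirroring the study's three groups: control, HFD + vehicle, HFD + etanercept.
p_stat3 = {"control": np.array([1.0, 1.1, 0.9, 1.0]),
           "hfd_vehicle": np.array([2.1, 1.9, 2.3, 2.0]),
           "hfd_etanercept": np.array([1.2, 1.3, 1.1, 1.4])}
stat3 = {"control": np.array([1.0, 1.0, 1.1, 0.9]),
         "hfd_vehicle": np.array([1.0, 1.1, 1.0, 0.9]),
         "hfd_etanercept": np.array([1.0, 0.9, 1.1, 1.0])}

# Per-animal activation index: p-STAT3 / total STAT3.
ratio = {g: p_stat3[g] / stat3[g] for g in p_stat3}

for g, r in ratio.items():
    print(f"{g}: mean = {r.mean():.2f}, SEM = {stats.sem(r):.2f}")

# Overall three-group test, then the two pairwise contrasts used in the legends.
f, p_anova = stats.f_oneway(*ratio.values())
print(f"one-way ANOVA: p = {p_anova:.4f}")

_, p_star = stats.ttest_ind(ratio["hfd_vehicle"], ratio["control"], equal_var=False)
_, p_hash = stats.ttest_ind(ratio["hfd_etanercept"], ratio["hfd_vehicle"], equal_var=False)
print(f"* HFD+vehicle vs control: p = {p_star:.4f}")
print(f"# HFD+etanercept vs HFD+vehicle: p = {p_hash:.4f}")
```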
2021-07-26T05:21:40.318Z
2021-07-01T00:00:00.000
{ "year": 2021, "sha1": "1c953e2f24b9366c9e1411a347a5ecc22f96b26a", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2075-1729/11/7/695/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "1c953e2f24b9366c9e1411a347a5ecc22f96b26a", "s2fieldsofstudy": [ "Medicine", "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
59251323
pes2o/s2orc
v3-fos-license
Predicting the limit of intramolecular H-Bonding with classical molecular dynamics

The energetics of intramolecular recognition processes are governed by the balance of pre-organization and flexibility that is often difficult to measure and hard to predict. Here, by using state-of-the-art classical molecular dynamics simulations, we predict and quantify the effective strength of intramolecular interactions between H-bond donor and acceptor sites separated by a variable alkyl linker, in a variety of solvents and including crowded solutions. The fine balance of entropic and enthalpic contributions posits a solvent-dependent limit to the occurrence of intramolecular H-bonding. Nevertheless, H-bond free energies are rigidly shifted among different solvents with, for example, a systematic ~13 kJ/mol gap between water and chloroform. Molecular crowding shows little effect on the thermodynamic equilibrium, but it induces pronounced variations in H-bond kinetics. The results are in quantitative agreement with available experimental measurements (in chloroform) and showcase a general strategy to interrogate molecular interactions in different environments, extending the limits of current experiments towards the prospective prediction of H-bond interactions in pharmaceutical, agrochemical, and technological contexts.

Hydrogen bonds are ubiquitous interactions that drive molecular recognition, 1,2 determine the properties of water and other polar solvents, 3,4 play critical roles in enzyme catalysis [5][6][7] and contribute to the structural stability 8 as well as to the specificity of drug-target complexes. 9,10 While H-bonds are typically conceived as interactions between pairs of molecules, intramolecular hydrogen bonds are widespread in biological molecules, 11,12 and are crucial in the design of new drugs and materials, [13][14][15][16][17] including supramolecular machines. [18][19][20][21][22][23] Unfortunately, the characterization of intramolecular H-bonds (and exploitation thereof) is still partial, most likely as a consequence of complexities that cloud the interpretation of experimental data obtained in large and flexible entities. 11 Thus, it has long been known, for instance, that conformational flexibility posits a limit to the occurrence of intramolecular H-bonds, 24,25 but only recently have Hubbard et al. 26 quantified such a limit using competition experiments in CDCl3 within a controlled molecular context. As a matter of fact, these measurements in deuterated chloroform 26 set an unprecedented experimental reference in the field. Yet, in aqueous environments, the tiny population of H-bonded species still challenges the limit of their experimental quantification, and it remains unclear to what extent H-bond measurements performed in noncompetitive solvents can be extended to polar solvents, such as water. 27,28 As an additional layer of complexity, the behavior of intramolecular H-bonds in diluted water solutions could be altered by concentrated, or crowded, conditions that are likely to represent, better than diluted ones, the biological environment. 29,30 In this context, computer simulations could bring about a major productivity leap, providing quantitative information on interaction processes that escape spectroscopic detection.
Whereas ab initio simulations can provide unparalleled details on H-bond properties, 31 [...]. Here, building on recent experimental data, 26 we present a computational investigation of the influence of conformational flexibility on H-bonding in a strictly intramolecular context, using a series of model compounds 26 (Figure 1A) immersed in a polar solvent (water), a polar aprotic one (tetrahydrofuran), and an apolar one (chloroform, for which partial experimental data 26 are available).

[Footnote a] Beyond the low solubility of 1-10 (predicted logS in water < -1), the experimental detection of conformational ratios in water seems unfeasible for these compounds even using an indirect competitive binding approach. Indeed, if the ΔG of the competitive binding event (ΔG_bind) is > 1 kJ/mol, no measurable binding is observed. 26 Using a strong acceptor (e.g., phosphine oxide), we estimate a ΔG_bind of at least +4 kJ/mol even for compounds 7-10, which have a fairly accessible H-bond donor.

Thus, the free energy associated with the intramolecular H-bond in a given solvent, ΔG_intra, can be obtained as ΔG_intra = −kB T ln[P(folded)/P(unfolded)], where kB is the Boltzmann constant, T is the temperature, and P(folded) and P(unfolded) are the populations of folded and unfolded conformers, respectively. Chloroform-derived results for compounds 1-10 show a non-trivial dependence between ΔG_intra and the donor-acceptor separation (Figure 3C and Supporting Information, SI). We note that increments of the linker size progressively increase the entropic cost (−TΔS_intra; Figure 3C, blue triangles) of circularization, with an average penalty of ~3 kJ/mol per rotor. As the number of rotors goes beyond 6, the entropic cost becomes larger than the enthalpic gain (ΔH_intra, blue points in Figure 3C), and the intramolecular H-bond in chloroform is disfavored. Overall, for 7 or more rotors, the linker length enables the folded state to retain a conformational freedom that makes the resulting −TΔS_intra values change regime. Contrary to chloroform solutions, the prevalence of circular topologies in water is tiny, as only a small population of intramolecular H-bonds is observed in the series of compounds studied here (Figure 3, red plots). As expected, water molecules compete avidly with the intramolecular H-bond. As a result, the entropic cost of cyclization is higher than the enthalpic gain arising from intramolecular H-bond formation (red plots in Figure 3C). By comparing in-water and in-chloroform ΔG_intra values, we note that, depending on the environment, compounds 2-5 will populate different conformations and present different polarities, thus acting as molecular chameleons 39,40 whose conformation can, for example, modulate membrane permeability. 13,41,b This behavior occurs in a size range relevant for small-molecule pharmaceuticals and should be considered for more accurate in silico prediction of ADME profiles. [14][15][16] Very interestingly, the ΔG_intra vs linker-length profiles in water (red plots) and chloroform (blue plots) are clearly correlated (Figure 3A); the intramolecular interactions are systematically 12-15 kJ/mol weaker in water than in chloroform, which is commensurate with the values proposed by other investigations. 27 The thermodynamic quantities in Figure 3C show that the fine balance between enthalpy and entropy has a clear relationship with the solvent.
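The relation above maps directly onto trajectory analysis: each frame is classified as folded or unfolded, and the two populations are converted into a free energy. The following minimal Python sketch illustrates this bookkeeping; the 0.35 nm donor-acceptor cutoff and the toy distance array are illustrative assumptions, not the exact protocol of this work:

```python
import numpy as np

KB = 0.0083144626  # Boltzmann constant in kJ/(mol K)
T = 298.0          # simulation temperature, K

def dg_intra(da_distances_nm, cutoff_nm=0.35):
    """Free energy of intramolecular H-bonding from an MD trajectory.

    da_distances_nm: donor-acceptor distance per frame (nm).
    A frame counts as 'folded' when the distance is below the cutoff.
    """
    n = len(da_distances_nm)
    folded = np.count_nonzero(da_distances_nm < cutoff_nm)
    p_folded = folded / n
    p_unfolded = (n - folded) / n
    # dG_intra = -kB * T * ln(P_folded / P_unfolded); negative favours the H-bond
    return -KB * T * np.log(p_folded / p_unfolded)

# Toy trajectory: a mostly-folded ensemble, as expected in chloroform
rng = np.random.default_rng(0)
dists = np.where(rng.random(10000) < 0.8, 0.30, 0.80)
print(f"dG_intra = {dg_intra(dists):.2f} kJ/mol")  # about -3.4 kJ/mol for 80:20
```

In practice the folded/unfolded assignment would come from the enhanced-sampling trajectories described in the Methods; the conversion step itself is the per-solvent quantity compared throughout the profiles above.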
However, the overall balance between enthalpy and entropy (i.e., ΔG_intra) controlling the formation of intramolecular cycles is, except for a rigid shifting factor, solvent independent. This is the case also when the ΔG_intra vs linker-length profile is collected in tetrahydrofuran (THF; Figure 3A, orange plot), suggesting that our conclusions can be extended to other pure solvents. We further note that such constant-offset behavior on moving to different solvents resembles the experimental data obtained when the competitive external binder was varied from the strong phosphine oxide acceptor to the weaker sulfinyl one. 26 Finally, as intramolecular H-bonding is of paramount importance in biology, we tested whether the above results and correlations hold also in highly concentrated, crowded conditions that can be reached, for instance, in the cellular environment (with up to 450 g/l of macromolecules). 29 [...] (Figure 4A). On the other hand, the folding process is also slowed down, as the metastable conformation observed at a distance of ~0.4 nm can be stabilized by a bridging water molecule (Figure 4A) that becomes less available in crowded conditions. 29 We argue that, beyond the unspecific role of viscosity in decreasing molecular diffusion, 29 the reduction of the number of water molecules able to diffuse into and away from the H-bond in crowded conditions provides a mechanistic interpretation for the observed behavior (Figure 4B). This model agrees with, and provides a "single-interaction perspective" to, previous studies showing that alteration of water dynamics in crowded conditions slows down protein kinetics. 47,48 The model also generalizes the behavior observed in ligand-receptor kinetics, where the water accessibility of hydrogen-bonding moieties can modulate ligand binding and unbinding rates. [49][50][51] In summary, we have shown that the characterization, by carefully set-up [...]

Methods

The 3-D structure of compounds 1-10 was built using Maestro from Schrödinger LLC. Compounds were modeled with the Generalized Amber Force Field (GAFF) 53 and partial atomic charges were assigned to the extended conformation using the Restrained Electrostatic Potential (RESP) fit 54 at the HF/6-31G* level of theory using the RED server. 55 See the SI for further discussion of the charge model and the transferability of the computational approach. Parameter and topology files were prepared with Acpype. 56 Compounds were solvated with a 0.8 nm-thick box of TIP3P 57 water molecules with periodic boundary conditions; for simulations in chloroform 58,59 and tetrahydrofuran 58,59 a 1.5 nm-thick solvent box was used. Crowded environments were created by adding to the system box multiple copies of polyethylene glycol (PEG, H−(O−CH2−CH2)n−OH, with n = 2); PEG was modeled with GAFF and RESP charges. Each system was equilibrated at constant pressure and temperature (1 atm, 298 K); production runs were evolved at 298 K in the NVT ensemble with the velocity-rescaling thermostat. 60 Hamiltonian Replica Exchange (H-REX) simulations 37 used 16 replicas with scaling of all the atoms of the solute 61 with values of λ ranging from 1 to 0.59. Exchanges were attempted every 500 steps. In infinitely diluted conditions (one solute molecule in the box), each replica ran for 50 ns, with a cumulative sampling time of 800 ns per compound per solvent. In crowded conditions (with PEG), each replica ran either for 100 ns (at 200 g/l) or for 200 ns (at 450 g/l).
The effective temperature of each replica relates to the scaling factor λ of the replica as Teff = 298 K/λ, and the values of ln K_intra were plotted against 1000/Teff to estimate variations of entropy and enthalpy. All the simulations were run using GROMACS 4.6.7 62 patched with PLUMED 2.1 63 and the H-REX implementation. 36 Structural figures were made with VMD 64 and chemical structures were drawn with Marvin 18.1 (ChemAxon).

Supporting Information

Full methodological details on system set-up, MD simulations and calculations.
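The entropy/enthalpy estimation described above is a van 't Hoff analysis: since ln K_intra = −ΔH/(R·Teff) + ΔS/R, a straight-line fit of ln K_intra against 1/Teff yields ΔH from the slope and ΔS from the intercept. A minimal sketch of that fit (the λ grid follows the stated 1 to 0.59 range; the K_intra values are synthetic, generated from an assumed ΔH and ΔS so that the fit can be checked):

```python
import numpy as np

R = 0.0083144626  # gas constant, kJ/(mol K)

lambdas = np.linspace(1.0, 0.59, 16)  # replica scaling factors, as in the Methods
t_eff = 298.0 / lambdas               # effective temperature per replica, K

# Hypothetical folded/unfolded equilibrium constants measured in each replica,
# built from an assumed "true" dH (kJ/mol) and dS (kJ/(mol K)) for the check.
dH, dS = -20.0, -0.055
k_intra = np.exp(-dH / (R * t_eff) + dS / R)

# Linear van 't Hoff fit: ln K = (-dH/R) * (1/T) + dS/R
slope, intercept = np.polyfit(1.0 / t_eff, np.log(k_intra), 1)
print(f"dH = {-slope * R:.1f} kJ/mol")               # recovers -20.0
print(f"dS = {intercept * R * 1000:.1f} J/(mol K)")  # recovers -55.0
```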
2019-01-26T14:02:52.778Z
2019-02-14T00:00:00.000
{ "year": 2019, "sha1": "7a9fe7086c45df5b432e33bf07339e358cff89c1", "oa_license": "CC0", "oa_url": "https://diposit.ub.edu/dspace/bitstream/2445/128320/1/manuscript_201810922R1.pdf", "oa_status": "GREEN", "pdf_src": "ScienceParsePlus", "pdf_hash": "538f115d987e264504f9aa6dfed1c13abc946da9", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Medicine", "Materials Science" ] }
266215678
pes2o/s2orc
v3-fos-license
Rainwater management in urban areas in Poland: literature review

The work analyses and evaluates the results of research work carried out so far in the field of rainwater management in urban areas in Poland. Using the "biblioshiny" tool, a bibliometric analysis was carried out based on queries to the Scopus and Web of Science databases. As a result, information was obtained on selected bibliometric statistics of scientific publications in which the topic of rainwater in Poland was taken up. The probable direction of further research development in the field of the analysed issues was also determined. In addition, after a detailed review of all the articles obtained at the earlier stage of the bibliometric analysis, the main research contexts were indicated and discussed. Areas and issues requiring further analysis and supplementation were indicated in the work.

Introduction

The intensive development of urban areas in the world, especially in the second half of the 19th century, had socio-economic and environmental consequences. The most frequent and visible changes in the development of cities are transformations in land cover, land use and topography. These factors also combine to represent the greatest determinant of another extremely important element that is essential for the functioning of man and the entire natural ecosystem. This element is of course water: the intensive development of urban areas causes locally significant changes in its circulation. For this reason, the concept of Urban Water Cycles is becoming increasingly important (Mitchell et al. 2001; Amore et al. 2013). The concept relates to the hydrological cycle in urban areas that is transformed by multiple anthropogenic factors. As noted by Marsalek et al. (2008), Urban Water Cycles provide a good conceptual basis for studying the water balance of urban areas. The concept is also helpful in understanding the importance of water balance for the integrated management of urban water resources.

Water is essential to the functioning of every city. However, it is often also the source of many problems in urban areas. This applies especially to floods, which in urban areas may take on various forms and have various sources. In recent years, flash floods have become particularly important. Their occurrence is associated with short-term, intense precipitation in urban areas of highly sealed surfaces. The increase in frequency of flash floods seen in urban areas indicates the growing prevalence of this phenomenon. The other, converse phenomenon is droughts and water shortages in urban areas. Water scarcity is a growing problem in urban areas in many parts of the world.

Poland is one of the countries where the issue of precipitation waters has for many years been marginalised by national and local authorities. Poland has some of the lowest water resources in Europe. They amount to 61.6 km3 (Gutry-Korycka et al.
2014). In addition, some forecasts indicate that, by 2030, the observed climate changes will have contributed to further unfavourable changes in water conditions in the country. This will happen despite almost no change in annual sums of precipitation. The reason will be the lengthening of drought periods, which are predicted to be followed by short-term heavy precipitation (Jarosińska 2016). This situation is particularly dangerous for urban areas, which are characterised by a large percentage of sealed surface area. In Poland, as in many other European Union (EU) countries, precipitation drains mainly through gravitational sewer systems (Dziopak 2018). This has the consequence of high peak flow rates in outflow channels and rapid rises in water levels in receiving bodies (Starzec et al. 2020). In many cases, after extremely intense short-term rainfall, the amounts of rainwater are too large for the drainage network to discharge. The result is local flooding and disturbances to transport systems.

Poland's approach to water resources management changed when the country began applying for EU membership. In order to join the EU, Poland had to meet a number of requirements and introduce many legislative changes. Of particular importance were issues related to implementing EU laws on environmental protection. One example is the Water Law Act adopted in 2001. This act treated rainwater and meltwater as sewage. It was only the adoption of the new Water Law Act in 2017 (Journal of Laws of 2017, item 1566), in which rainwater and meltwater ceased to be equated with sewage, that allowed for a change in the approach to this issue.

As a result of these legislative changes and global research trends, there has also been a significant increase in interest in research issues related to rainwater management in urban areas in Poland in recent years. So far, however, no synthetic consideration of the conducted research and results has been carried out. The present work tries to fill this gap. The aim of the work is to analyse and evaluate the results of research conducted to date in the field of rainwater management in urban areas in Poland. Addressing the following research questions was helpful in achieving this goal:
1. What are the bibliometric statistics of scientific publications on the topic of rainwater in Poland?
2. What are the predominant research contexts?
3. What is the likely direction for further research?

Materials and methods

The research questions were addressed and the objective achieved in several stages. In the first stage, the research tool was selected. The bibliometric method was chosen, which allowed for a quantitative and qualitative analysis.

The "topic" category in Web of Science includes searches for: title, abstract, author keywords, and Keywords Plus (Web of Science 2023).

The individual elements of the query to the databases included searching for all variants of the entries for the search items "rainwater management" or "stormwater management"; for the area of Poland: "polish" or "Poland"; and for entries related to urban areas, i.e. "urban", "city", "town", "resident/residential", "house/housing", "build/built/building" or "real estate".
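The paper does not reproduce its exact search syntax. Purely as an illustration, the term groups described above could be assembled into a Scopus-style advanced query as in the sketch below; the TITLE-ABS-KEY field code and the truncation wildcards are assumptions, not the authors' actual query:

```python
# Hypothetical reconstruction of the boolean query described in the Methods.
topic = ['"rainwater management"', '"stormwater management"']
country = ["poland", "polish"]
urban = ["urban", "city", "town", "resident*", "hous*", "build*", '"real estate"']

def any_of(terms):
    """Join a group of search terms into a parenthesised OR block."""
    return "( " + " OR ".join(terms) + " )"

query = f"TITLE-ABS-KEY ( {any_of(topic)} AND {any_of(country)} AND {any_of(urban)} )"
print(query)
```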
The inquiries were made in two stages. The first query was performed on February 12, 2023 and yielded 52 publications from the Scopus database and 42 from the Web of Science database. Then, the content of the found publications was reviewed in terms of compliance with the subject and objectives of the article. Ultimately, 47 publications from the Scopus database and 34 from the Web of Science database were accepted for further analysis. On February 20, 2023, the analysed databases were re-queried and no new publications were added. On February 20, 2023, files with the *.bib extension were downloaded with results for the selected 47 (Scopus) and 34 (Web of Science) publications.

The bibliometric analysis in the subject area covered basic statistics on: sources, authors, affiliations, citations of publications and authors' keywords. Authors' affiliations were analysed in the traditional way (by reviewing each article) due to the questionable methodology implemented in the aforementioned bibliometric software. The methodology used in the programs counts all affiliations, regardless of whether they are in the same or different articles. So, for example, an article written by two people with the same affiliation is counted twice. For the authors' keywords, an author keyword co-occurrence analysis (for number of nodes: 50) and a "thematic map" analysis were also performed. The "thematic map" analysis indicates "motor themes", "basic themes", "niche themes" and "emerging or declining themes". The choice of author keywords instead of Keywords Plus was determined by the preliminary results obtained from "biblioshiny": for 34% of publications (16 articles) from the Scopus database there was no information about Keywords Plus. The analysis was performed taking into account all types of publications made available in the analysed databases and with no time limit (all available years were included). The bibliometric analysis allowed the first and third research questions to be answered.

The second research question was problematic to answer based on bibliometric analysis alone. This required that the research be extended to a subsequent stage. It consisted in detailed readings and qualitative analysis of all articles identified in the earlier bibliometric analysis stage. This allowed the collected research papers to be grouped by research context.

Results

The issue of rainwater management was addressed almost exclusively by scientists of technical universities. By far the most common affiliation by number of articles was Rzeszow University of Technology (Table 1). The following universities also boasted significant numbers of publications: Wroclaw University of Science and Technology, Lodz University of Technology and Kielce University of Technology. Non-technical universities accounted for very few publications each.

(Figure note: the percentages cited refer to the shares of author keywords for the "tree maps" range shown in the figure, i.e., for the set of author keywords used more than once.)

In both databases, the most frequently cited article is "Modelling of green roofs' hydrologic performance using EPA's SWMM" by E. Burszta-Adamiak and M. Mrowiec (Table 2). The works of Stec and Słyś also have numerous citations. These scientists are the authors of the largest number of publications on the analysed research topics.
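The author keyword co-occurrence analysis described in the Methods, whose output appears as the network in Fig. 2 discussed below, reduces to counting how often two keywords occur in the same record. A standard-library-only sketch over the exported .bib files follows; "biblioshiny" performs the equivalent step internally, and the `keywords` field name and the semicolon separator are assumptions about the export format:

```python
import re
from collections import Counter
from itertools import combinations

def keyword_cooccurrence(bib_text):
    """Count pairwise co-occurrence of author keywords in a BibTeX export."""
    pairs = Counter()
    # Assumed field layout: keywords = {kw1; kw2; ...}
    for match in re.finditer(r"keywords\s*=\s*\{([^}]*)\}", bib_text, re.I):
        kws = sorted({k.strip().lower() for k in match.group(1).split(";") if k.strip()})
        pairs.update(combinations(kws, 2))  # one count per keyword pair per record
    return pairs

sample = """@article{a1, keywords = {rainwater harvesting; life cycle cost}}
@article{a2, keywords = {urban catchment; SWMM model}}
@article{a3, keywords = {rainwater harvesting; life cycle cost; urban catchment}}"""
for pair, n in keyword_cooccurrence(sample).most_common(3):
    print(pair, n)
```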
The analysis of basic statistics on author keywords showed that authors usually use quite general phrases: "stormwater management", "rainwater management", "rainwater harvesting". Next, attention should be paid to terms relating to the modelling of the stormwater runoff process ("SWMM", "SWMM model", "SBUH model", "modelling"), as well as those emphasising that the area of analysis is the city ("urban catchment", "urban drainage", "urban hydrology", "resilient cities"). The remaining author keywords are various kinds of variants related to, for example, solutions to stop or slow the runoff of rainwater ("green roof", "green roofs", "green infrastructure", "retention", "water saving"), financial issues, and the profitability of investments in rainwater collection ("payback period", "financial analysis", "life cycle cost").

Analysis of author keyword co-occurrence for the Web of Science database distinguished two clusters (Fig. 2). The author keyword "rainwater harvesting" constitutes the largest node and has a relationship with the author keyword "life cycle cost". The second cluster is the relationship between the author keywords "urban catchment" and "SWMM model". For the Scopus database, four clusters were distinguished: as in the case of the Web of Science database, a cluster was distinguished with a node representing the author keyword "urban catchment" that has a relationship not only with the author keyword "SWMM model" but also with "SBUH model". It should be noted that the node representing the author keyword "green infrastructure" is clustered with the "stormwater management" node.

In the "thematic map" analysis, nine clusters of topics were identified for the Scopus database and seven for the Web of Science database (Fig. 3 and Appendix 1). In Figure 3, each cluster is labelled with the selected keyword it represents. Appendix 1 presents the complete composition of the clusters. The differences in position between similar clusters (with not identical but similar compositions of author keywords) should be pointed out, in particular for the clusters "rainwater harvesting", "ecosystem services" and "stormwater management" (Fig. 3). The "ecosystem services" and "rainwater" clusters were recognised as "motor themes". Within the Web of Science database, no cluster of topics was identified as belonging to "motor themes"; only the "stormwater management" cluster lies on the border between "basic themes" and "motor themes" (Web of Science). Among the clusters identified in the Scopus database but not identified in the Web of Science database, the following should be indicated: "nature-based solutions", "modernization" and "resilient cities". One of the clusters not identified in the Scopus database but identified in the Web of Science database is the "financial analysis" cluster (Appendix 1). These clusters were classified neither as "motor themes" nor as "basic themes" (Fig. 3).

The "thematic map" analysis allowed the third research question to be answered. According to the methodology, topics exhibiting high progress and the highest importance in the research field lie in the quadrant defined as "motor themes" (Cobo et al.
2012). "Motor theme" topics may indicate likely directions of future development in research. As already mentioned, only for the Scopus database were clusters (two: "rainwater", "ecosystem services") assigned to "motor themes". Considering the author keywords within these two clusters, they can be interpreted as probable directions of future development in research. The "rainwater" cluster is interpreted as representing: modelling rainwater runoff from the urban catchment, with particular emphasis on the functioning of the combined sewerage system (i.e., stormwater overflow). The equivalent interpretation for the "ecosystem services" cluster is: analysis of the possibility of rainwater retention, with particular emphasis on green roofs. These two interpretations of the clusters determine in detail the probable directions of future research development within the research issues.

A close familiarisation with all the publications included in the bibliometric analysis allowed them to be grouped according to the research context addressed. Eleven research contexts were distinguished. The research context most frequently adopted was analysis relating to technical solutions for rainwater management (14 articles). This context was very often combined with another research context relating to the analysis of economic aspects of rainwater management (10 articles). Similarly, the research context involving the hydrodynamic modelling of rainwater runoff in urban catchments was also frequently discussed (9 articles). Other research contexts in the analysed group of articles were approached much less frequently (2 to 5 articles).

Discussion

The analysis indicated interesting regularities and areas for discussion. Rainwater management in urban areas in Poland has proven to be a very hot topic in recent times. The articles addressing this research topic were published in the last few years. Most articles were created in 2020 and 2022. The significant proliferation of articles in the last three years may indicate the growing importance of this research problem. The articles were published mainly in international journals with high bibliometric indices, which further evidences the international importance of the issue. It should be noted that, of the 52 articles included in the study, as many as 19 have already been cited at least 10 times. This citation tally is quite high and confirms the dynamic development of research in the field of rainwater management in urban areas.

The assumption was that author keyword analysis would identify dominant research contexts. Unfortunately, the terms that authors used were too general to make this possible. Better results would probably be obtained using Keywords Plus. However, as already mentioned, for the analysed group of articles this was not possible, because Keywords Plus information is lacking for 34% of the publications. Therefore, this work may be a good demonstration that bibliometric analysis based on publicly available databases is associated with certain difficulties. With a relatively small number of articles and incomplete bibliometric data, it seems that a traditional literature review and content analysis are needed. It should be emphasised that the quantitative (bibliometric analysis) and qualitative (content analysis) methods used complement each other. It is not always possible to use both methods fully. This possibility is mainly determined by the scope and size of the set of articles.
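In the Cobo et al. (2012) methodology invoked above, each keyword cluster is positioned by its centrality (importance to the field) and density (internal development), and the quadrant determines the theme type. A minimal sketch of that classification step follows, with invented scores chosen only to mirror the clusters named in this paper:

```python
# Quadrant logic of a thematic map: split the plane at the median of each axis.
clusters = {  # name: (centrality, density) -- illustrative values only
    "rainwater": (0.9, 0.8),
    "ecosystem services": (0.7, 0.9),
    "stormwater management": (0.8, 0.4),
    "financial analysis": (0.2, 0.3),
    "resilient cities": (0.3, 0.7),
}

def median(xs):
    s = sorted(xs)
    mid = len(s) // 2
    return s[mid] if len(s) % 2 else (s[mid - 1] + s[mid]) / 2

cx = median([c for c, _ in clusters.values()])
cy = median([d for _, d in clusters.values()])

for name, (c, d) in clusters.items():
    theme = {(True, True): "motor theme",        # high centrality, high density
             (True, False): "basic theme",       # high centrality, low density
             (False, True): "niche theme",       # low centrality, high density
             (False, False): "emerging or declining theme"}[(c >= cx, d >= cy)]
    print(f"{name}: {theme}")
```

With these toy scores, "rainwater" and "ecosystem services" land in the motor quadrant, matching the Scopus result reported above.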
Extremely interesting results were obtained from the analysis of the author affiliations in the analysed articles. Almost all articles were written by scientists from technical universities. This is interesting, as the research subject has significant social and environmental dimensions. It also directly accounts for the research contexts that are addressed in the articles. As already indicated, most of the works concerned the analysis of technical solutions related to rainwater management. However, over the years, there have been changes in the solutions analysed and proposed. Initially, the dominant approach involved the use of green infrastructure (mainly green roofs) (Burszta-Adamiak 2012; Burszta-Adamiak and Mrowiec 2013). In subsequent studies, more attention was paid to the ecohydrology and restoration of watercourses (Wagner and Breil 2013; Zawilski et al. 2014) and the impact of land use and management of rainwater runoff drainage systems (Olechnowicz and Weinerowska-Bords 2014). An interesting aspect indicated by Żarnowiec et al. (2017) related to the drainage of rainwater from industrial areas through its evaporation from roof surfaces. The rapidly increasing number of spatially expansive facilities such as logistics centres, industrial plants and shopping centres in Poland in recent years indicates that this issue is very topical. The results of the cited authors clearly confirm the effectiveness of this solution. Żarnowiec et al. (2017) indicated that, in the seven-month research period, the amount of evaporation exceeded 1000 mm (with average annual precipitation for Poland amounting to about 610 mm). In recent years, research has focused on identifying the best criteria for selecting rainwater management solutions (Kordana and Słyś 2020b), and on analysing and comparing (Boguniewicz-Zabłocka and Capodaglio 2020; Godyń et al. 2020; Wojnowska-Heciak et al. 2020; Sobieraj 2022) and evaluating them (Kordana-Obuch and Starzec 2020). Of particular interest to urban decision-makers are the results of research by Kordana-Obuch and Starzec (2020). These authors have unequivocally indicated that the best way to manage rainwater is for it to infiltrate into the ground through ditches or infiltration tanks. Other solutions should be considered only when this method cannot be used. A slightly different approach was presented by Kasprzyk et al. (2022), who focused on the issue of rain gardens and their impact on ecosystem functions. They noted the effect of this solution in mitigating the effects of the "urban heat island" phenomenon.

The second most frequently discussed research context in the studied group of articles concerned economic aspects of rainwater collection. This aspect also exhibited a certain evolution of the research, in that the scope of analyses had expanded. In the first publications within this research context, the authors focused on analysing the financial efficiency of solutions that collected rainwater mainly for the sanitary purpose of toilet flushing (Słyś et al. 2015; Stec and Słyś 2017; Sakson 2018; Starowicz and Bryszewska-Mazurek 2019; Stec and Zeleňáková 2019). These works differ mainly in the type of facility studied, whether single-family house (Sakson 2018; Słyś et al. 2015), multi-family building (Starowicz and Bryszewska-Mazurek 2019), student dormitory (Stec and Słyś 2017; Stec and Zeleňáková 2019) or housing estate (Godyń et al. 2020), in location (which is important due to spatial variability in sums of precipitation), and in the details of the methodology used to calculate financial efficiency. More recent publications have focused on comparing the financial (and hydraulic) efficiency of several solutions for collecting and using rainwater (Boguniewicz-Zabłocka and Capodaglio 2020; Musz-Pomorska et al. 2020; Słyś and Stec 2020). Particularly interesting results were obtained by Musz-Pomorska et al. (2020), after analysing 13 designs of rainwater harvesting (RWH) systems. They pointed to the limited profitability of the examined RWH designs, emphasising the insufficiency of governmental financial support. The authors stressed that this could significantly impact the social sustainability of local projects. The latest article dealing with the analysed economic context of rainwater collection is the work of Bus and Szelągowska (2021). This article analyses the economic efficiency of intensive and extensive varieties of green roofs. In the study, the authors included the 11 largest urban communes in Poland, with populations of over 250,000. The work succeeds in showing the spatial diversity of rainwater harvesting opportunities. Some of the adopted assumptions may raise doubts, such as the one on the area of green roofs, which in the paper equals 1% of the area of each commune.
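The profitability findings summarised above (RWH systems rarely pay off without a grant covering part of the capital cost) rest on discounted cash-flow calculations of the kind sketched below; all monetary values, the 5% discount rate and the 30-year horizon are invented for illustration and are not taken from the cited studies:

```python
def npv_of_rwh(capex, annual_saving, years=30, rate=0.05, subsidy=0.0):
    """Net present value of a rainwater-harvesting investment.

    capex: installation cost; annual_saving: avoided mains-water cost per year;
    subsidy: fraction of capex covered by a grant.
    """
    discounted = sum(annual_saving / (1 + rate) ** t for t in range(1, years + 1))
    return discounted - capex * (1 - subsidy)

capex, saving = 20000.0, 1000.0  # illustrative PLN values
for subsidy in (0.0, 0.25, 0.50):
    npv = npv_of_rwh(capex, saving, subsidy=subsidy)
    print(f"subsidy {subsidy:>4.0%}: NPV over 30 y = {npv:8.0f} PLN")
# In this toy setup the NPV turns positive only once the subsidy reaches ~25%,
# echoing the direction of the conclusions discussed above.
```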
Within the research context related to the economic aspects of rainwater collection, one subcategory was distinguished. This comprised works that, in addition to economic analysis, also conducted analyses of the life cycle costs (LCC) of various solutions for the collection and use of rainwater. The conducted LCC analyses focused mainly on rainwater collection for flushing toilets, washing and watering gardens (Słyś and Stec 2014; Stec and Słyś 2018; Stec and Mazur 2019). Attention should be paid to the conclusions of the analysis carried out by Słyś and Stec (2020) for a rainwater collection system (decentralised or central) in a single-family housing estate. Collecting rainwater (whether in a centralised or decentralised system) is not a financially viable solution for an estate. Financial efficiency is improved when the investment is 25% to 50% subsidised.

The third most frequently addressed research context was hydrodynamic modelling of rainwater runoff. Among the articles included, the hydrodynamic modelling program used most frequently was EPA's Storm Water Management Model. Individual authors used this software to simulate: the operation of the sewerage network, including the rainwater drainage system (Wałęga et al. 2016; Nowakowska et al. 2017; Nowakowska et al. 2019; Szeląg et al. 2022), outflow from a small urban catchment (Barszcz 2017; Barszcz 2018, 2022), and the retention properties of green roofs (Burszta-Adamiak and Mrowiec 2013). An interesting analysis was conducted by Olechnowicz and Weinerowska-Bords (2014) on the impact of various forms of urbanisation on water runoff from an urban catchment. The authors considered seven land development variants and simulated three precipitation scenarios using the Hydrologic Modeling System designed by the Hydrologic Engineering Center of the US Army Corps of Engineers (HEC-HMS). The modelling results indicated that urban development raised but temporally shortened peak flow, while also increasing runoff volume. At the same time, the possibility to significantly reduce runoff was confirmed using various engineering solutions in the field of alternative rainwater management.

Another of the distinguished research contexts relates to the analysis of the physical and chemical properties of rainwater. In the studies included, heavy metal concentrations were most often analysed (Zawilski et al. 2014; Sakson et al. 2018; Jakubowicz et al. 2022) [...] (2022). The former showed that the physicochemical quality of rainwater collected in underground reservoirs as part of the research met the Polish and EU requirements for drinking-water standards. At the same time, the poor microbiological quality of the water was emphasised, with the number of coliform bacteria reaching 19,300 CFU. The article by Jakubowicz et al. (2022) showed the possibility of significantly reducing some pollutants (heavy metals, microplastics, polycyclic hydrocarbons) using a pilot multi-stage wetland installation. Such installations may be important in reducing pollutants discharged through the rainwater drainage system to receivers (rivers, lakes).

Urban planning, including rainwater management, is another highlighted research context. The two oldest works assigned to this research context relate to the issue of sustainable development as part of urban planning (Ogielski et al.
2015; Surma 2015). Particularly comprehensive analyses related to stormwater management have been carried out by Surma (2015). The author showed current sustainable rainwater management scenarios for areas that differed in use, time of creation and socio-economic functions performed. A different approach to urban planning has been presented in two recent works within this research context. Fitobór et al. (2022) used a holistic and dynamic planning method called the "extreme weather layer". This method allows for determination of the impact of a single investment on the operation of a municipal sewage system and thus on the flood risk of an entire catchment. The many actions proposed by the authors to minimise flood risk in cities include the still-underestimated and underfunded Nature-Based Solutions (NBS). A very innovative idea has been presented by S. M. Rybicki et al. (2022). They present a model of an elementary autonomous housing complex called Bio-Morpheme. The essence of the proposed model is the idea of ensuring that urban developments of various types integrate the water circulation of the buildings into that of their immediate surroundings. A theoretical model was analysed to verify the possibility of comprehensive water management that included the collecting of precipitation from roofs, pavements, cycle paths and roads. Results obtained for the climatic and hydrological conditions of the city of Kraków (south Poland) indicated that 41% of the collected precipitation could be used for economic purposes. The rest of the water collected in a surface receiver would improve the microclimate and could be used over time for other purposes, such as irrigation.
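At its core, the Bio-Morpheme assessment is a water-balance estimate: harvested volume equals precipitation times catchment area times a runoff coefficient, summed over surface types. A toy version of that calculation follows; the areas, runoff coefficients and precipitation figure are illustrative assumptions, and only the 41% usable fraction echoes the cited result:

```python
# Annual harvestable rainwater for a hypothetical housing complex under
# Krakow-like conditions; runoff coefficients are typical textbook values.
precip_m = 0.670  # assumed annual precipitation, m

surfaces = {  # surface type: (area in m2, runoff coefficient)
    "roofs": (1200.0, 0.90),
    "pavements_cycle_paths": (800.0, 0.80),
    "roads": (600.0, 0.85),
}

harvest_m3 = sum(area * coeff * precip_m for area, coeff in surfaces.values())
usable_fraction = 0.41  # share usable for economic purposes, per the cited result
print(f"collected runoff: {harvest_m3:.0f} m3/year")
print(f"usable for economic purposes: {harvest_m3 * usable_fraction:.0f} m3/year")
```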
The research context related to social aspects of rainwater management was addressed only twice in the analysed group of articles. Interesting results were presented by Stec (2018), who conducted a study of households in Poland on the use of alternative water sources. The results clearly indicate a lack of interest in the possibility of replacing household mains water with grey water and rainwater. In the opinion of respondents, if they collected rainwater, it would be used mainly for watering the garden. The work of Mantey (2021) is of a similar nature. The research objective was, among the inhabitants of three types of suburban area, to identify attitudes towards small retention in the context of changes in the water law. The paper draws attention to the diverse ways in which residents living in areas of different degrees of urbanisation perceive the problem of rainwater. It indicated, among other things, that greater urbanisation is accompanied by a lower sense of responsibility for rational rainwater management. Residents are also not convinced about the effectiveness of investing in individual small retention equipment. The main factors that might encourage them to implement such investments are financial issues, such as reducing fees for discharging rainwater to the rainwater drainage system.

The last of the highlighted research contexts is related to quantitative analyses of the potential for collecting rainwater. In both works, the authors rely on 50-year data obtained for 19 synoptic meteorological stations throughout Poland. Canales et al. (2020) assessed long-term trends in 20-day cumulative precipitation periods throughout the year. They also indicated the impact of their results on issues relating to the design and operation of rainwater collection equipment. Gwoździej-Mazur et al. (2022) drew attention to the impact of long-term climate change on the possibility of using rainwater in households. The authors emphasised that the design of RWH systems should be based on archival data and take into account long-term changes in precipitation. The validity of this statement needs addressing. Climate change presents us with an intensification of extreme weather events that exceeds past levels. It is therefore reasonable to ask, "Shouldn't models extrapolating changes in climatic conditions be taken into account more fully when planning and designing RWH?"

Almost half of the works presented their research and analyses on the example of a city or part thereof. However, importantly, the research containing more detailed analyses related to just six Polish cities. Most works concerned Warsaw, Łódź and Wrocław. The other studies concerned Gdańsk, Kraków and Rzeszów. It should be noted that several articles included more cities. However, these articles were more general (e.g., Szpak et al. 2022) or focused only on comparing just one element related to rainwater (e.g., Canales et al. 2020; Gwoździej-Mazur et al. 2022).
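Trend analyses of the Canales et al. type discussed above can be sketched simply: build the 20-day cumulative precipitation series for each year, then fit a trend across years. The sketch below uses synthetic daily data and a plain least-squares slope; the cited study's statistical treatment is more elaborate:

```python
import numpy as np

rng = np.random.default_rng(1)
years = 50
# Synthetic daily precipitation (mm) for 50 years; gamma draws give a skewed,
# rain-like distribution. Real station records would replace this array.
daily = rng.gamma(shape=0.4, scale=4.0, size=(years, 365))

# 20-day cumulative precipitation via a moving window within each year
window = np.ones(20)
cum20 = np.array([np.convolve(y, window, mode="valid") for y in daily])

# Annual maximum of the 20-day totals, then a linear trend across the 50 years
annual_max = cum20.max(axis=1)
slope, intercept = np.polyfit(np.arange(years), annual_max, 1)
print(f"trend in annual max 20-day precipitation: {slope:+.2f} mm/year")
```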
One of the questions asked most often by scientists from various fields relates to the direction in which further research on their chosen research topic will go. For the analysed research issues, the answer was provided by the analysis of "thematic maps". Two main research areas have been identified as probably the most intensively researched in the coming years. The first combines two important problems. It concerns modelling rainwater runoff from urban catchments and the functioning of the sewerage system in the event of extreme events that activate stormwater overflows. Modelling the outflow of rainwater from the urban catchment is very difficult, as it must take into account the specifics of local conditions. The activation of stormwater overflows results in untreated sewage being discharged directly to the receiver (usually a river or lake). The growing number of extreme events related to rapid, intense precipitation will increase the number of such situations in coming years. The proper development and subsequent implementation of stormwater runoff models will significantly reduce the frequency of storm overflows. The second research area that will develop rapidly in coming years concerns the possibility of rainwater retention. This issue is extremely broad, as it includes a number of applicable methods and solutions. Therefore, in the coming years, further research should be expected to aim to find the most effective rainwater retention solutions. This research is likely to focus largely on the further development of a broad spectrum of green roof solutions.

This paper is limited to the analysis of articles indexed by the two most prestigious bibliometric databases. One should therefore be aware that not all scientific articles addressing the analysed research issues were included. Despite this, the conclusions that can be formulated on the basis of the conducted analysis seem to fully reflect the state of knowledge on the undertaken research issues.

Conclusions

Based on the analysis, the following conclusions can be drawn:
• The issue of rainwater in urban areas in Poland is a current and developing research area, as confirmed by the growing number of articles.
• The research issues analysed in the work are, in Poland, mainly undertaken by scientists from technical universities, which has a direct impact on the research contexts of the resulting scientific studies.
• Eleven research contexts have been distinguished within which the issue of rainwater in urban areas has been analysed to date. The research context most frequently adopted was analysis of technical solutions for rainwater management (14 articles). Another frequently discussed research context was that of hydrodynamic modelling of rainwater runoff in urban catchments (9 articles).
• Based on the "thematic map" analysis, two probable directions of further development of the problem of rainwater in urban areas in Poland were indicated: 1) modelling rainwater runoff from an urban catchment, with particular emphasis on the functioning of the combined sewerage system (i.e., including stormwater overflow); and 2) analysis of rainwater retention possibilities, with particular emphasis on green roofs.
• It has been shown that individual research contexts have increased the scope and methods of their analyses and the range of research tools used.
• Detailed analyses have been limited to selected larger cities in Poland and have omitted medium and small cities.
• There is a lack of broader integration of research results from different research contexts.

Furthermore, in terms of the bibliometric analysis, two conclusions can be drawn:
• The results of the bibliometric analysis may be influenced by the database on which the analysis is performed. This is particularly true when the researched group of publications is relatively small.
• The incompleteness of bibliometric information (which persists in the Scopus and Web of Science databases) may significantly limit the ability to draw conclusions based only on data obtained from bibliometric databases.

As has been shown, rainwater is a current research problem that is analysed in many contexts. Despite the large amount of research, there are still areas in which gaps exist. The studies to date have provided a lot of valuable and important data and information. At present, there is a lack of studies using these results to synthesise and comprehensively analyse the issue from a regional (voivodship) or national perspective. Social studies into rainwater management are especially lacking. Research integrating environmental, economic, social and legal issues within the analysed issues should be considered essential. Ultimately, detailed analyses of these factors should be conducted for every city in Poland. In addition, efforts should be made to involve scientists from non-technical universities in these issues. It seems that geographers can play a special role in this respect (without dividing them into those dealing with socio-economic and environmental issues). They have appropriate environmental and socio-economic knowledge and tools for spatial analysis (GIS software).

In light of the dynamic development of Polish cities, the accompanying spatial transformations and progressing climate change, addressing the comments and postulates indicated in this work appears to be increasingly urgent. The failure to immediately undertake the indicated research and then to implement the results may contribute to significant socio-economic losses in the near future.

Fig. 2. Author keyword co-occurrence network (for number of nodes: 50).
2015; Stec and Słyś 2017; Sakson 2018; Starowicz and Bryszewska-Mazurek 2019; Stec and Zeleňáková 2019). These works differ mainly in the type of facility studied, whether a single-family house (Sakson 2018; Słyś et al. 2015), a multi-family building (Starowicz and Bryszewska-Mazurek 2019), a student dormitory (Stec and Słyś 2017; Stec and Zeleňáková 2019) or a housing estate (Godyń et al. 2020); in location (which is important due to the spatial variability of precipitation totals); and in the details of the methodology used to calculate financial efficiency. More recent publications have focused on comparing the financial (and hydraulic) efficiency of several solutions for collecting and using rainwater (Boguniewicz-Zabłocka and Capodaglio 2020; Musz-Pomorska et al. 2020; Słyś and Stec 2020). Particularly interesting results were obtained by Musz-Pomorska et al. (
Factors Predictive of Mortality among Geriatric Patients Sustaining Low-Energy Blunt Trauma

Background: In geriatric trauma patients, a higher mortality rate is observed compared to younger patients. A significant portion of trauma sustained by this age group comes from low-energy mechanisms (fall from standing or sitting). We sought to investigate the outcome of these patients and identify factors associated with mortality. Methods: A retrospective review of 1285 geriatric trauma patients who came to our level 1 trauma center for trauma activation (hospital alert to mobilize surgical trauma service, emergency department trauma team, nursing, and ancillary staff for highest level of critical care) after sustaining low-energy blunt trauma over a 1-year period. IRB approval was obtained; data collected included demographics, vital signs, laboratory data, injuries sustained, length of stay and outcomes. Patients were divided into three age categories: 65-74, 75-84 and 85+. Comorbidities collected included a history of chronic renal failure, COPD, hypertension and myocardial infarction. Results: 1285 geriatric patients (age > 65 years) presented to our level 1 trauma center for trauma activation with a low-energy blunt trauma during the study period; 34.8% of the patients were men, 20.5% had at least one comorbidity, and 89.6% were white. Median LOS was 5 days; 37 (2.9%) patients died. Age of 85 and over (OR 3.44 with 95% CI 1.01-11.7 and 2.85 with 95% CI 1.0-6.76, when compared to 65-74 and 75-84, respectively), injury severity score (ISS) (OR 1.08, 95% CI 1.02 to 1.15) and the presence of more than one comorbidity (OR 2.68, 95% CI 1.26 to 5.68) were independently predictive of death on multivariable logistic regression analysis. Conclusion: Age of more than 85 years, a higher injury severity score and the presence of more than one comorbidity are independent predictors of mortality among geriatric patients presenting with low-energy blunt trauma.

Introduction

Traumatic injuries are the 5th leading cause of death in the older population, and the mortality risk increases steadily with increasing age despite a decrease in injury severity [1,2]. Factors which impact mortality in the older trauma population are not well understood, even as trauma in older adults continues to emerge as a global health concern. Geriatric trauma is also frequently associated with complications despite a generally lower-energy mechanism; these complications increase the odds of death [3]. Previous work showed that with every 1-year increase in age over 65, the odds of death after trauma increase by as much as 6.8%. Low-energy fall-related head injury can be associated with significant functional decline and increased resource utilization, with those older than 80 years having a 1.6 times greater chance of dying than patients aged 65-80 years. The Eastern Association for the Surgery of Trauma (EAST) practice management guidelines proposed a lower threshold for trauma activation (hospital alert to mobilize surgical trauma service, emergency department trauma team, nursing, and ancillary staff for highest level of critical care) for injured patients aged 65 years or older, aggressive triage, correction of coagulopathy, and limitation of care when clinical evidence predicts an overwhelming likelihood of poor long-term prognosis [4]. Moreover, outcomes from these injuries display stark differences once age is compared.
Besides head trauma, rib and pelvic fractures are the most common manifestations of low-energy blunt trauma (falls from standing or sitting), both significantly increasing the mortality of this population. One of the differences is seen in head trauma, as the geriatric population has twice the mortality of younger patients; in addition, the number of rib fractures is directly linked to the need for higher-level care units and to mortality, whereas hip fractures can often go unseen on plain films. Thus, elderly trauma care can be very challenging and should be approached uniquely [5,6]. Since low-energy trauma accounts for around 75% of all elderly trauma, and triage is often difficult because of a multitude of factors such as deafness, dementia, and the circumstances of the event (these falls are often unwitnessed), it is important that the factors predictive of worse outcomes are identified [5,6]. We sought to identify the factors predictive of death among geriatric patients sustaining low-energy blunt trauma in a large level 1 trauma center.

Patient Selection and Variable Definition

After obtaining Institutional Review Board (IRB) approval, a retrospective analysis of geriatric trauma patients (65 years and older) who arrived at our trauma center (Staten Island University Hospital Center in Staten Island, New York, NY, USA) for trauma activation (hospital alert to mobilize surgical trauma service, emergency department trauma team, nursing, and ancillary staff for highest level of critical care) between January 2019 and October 2019 was conducted using the trauma database. All patients received computed tomography (CT) scans of the head, cervical spine, chest, abdomen and pelvis.

Inclusion and Exclusion Criteria

All geriatric trauma patients who arrived for trauma activation (hospital alert to mobilize surgical trauma service, emergency department trauma team, nursing, and ancillary staff for highest level of critical care) over the study period were considered for inclusion. Data collected from the trauma database included demographics (age, gender, and race), mechanism of injury (blunt, low-energy), injuries sustained, injury severity (injury severity score, "ISS"), transfused blood products including 24 h packed cells, total admission packed red blood cells, fresh frozen plasma and platelets, and presenting vital signs. We also collected data on pre-existing comorbidities (chronic renal failure "CRF", diabetes mellitus "DM", hypertension "HTN" and chronic obstructive pulmonary disease "COPD"), hospital length of stay (LOS), disposition and survival. The patients were divided into three age groups: 65-74, 75-84 and 85+.

Statistical Analysis

This is a retrospective cohort study. Categorical data were summarized by the number and percentage of patients falling within each category. Continuous variables were summarized by descriptive statistics including mean and standard deviation or median and interquartile range. The primary comparison group is age group (65-74, 75-84, 85+). Age is an independent predictor of death in trauma patients; we wanted to assess the impact age has on the outcomes of geriatric (65 years of age and older) patients by categorizing them into three 10-year age groups (65-74, 75-84 and 85+), which enables ease of interpretation and management. The primary outcome variable is hospital mortality.
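To make the modelling step concrete, the sketch below shows how a multivariable logistic regression of hospital mortality on age group, ISS and comorbidity burden could be set up. It is an illustration only: the data are simulated with assumed effect sizes, the column names are hypothetical, and the actual analysis was performed in SAS as described in the next paragraph.

```python
# Hypothetical sketch of the multivariable logistic regression; the data
# are simulated (assumed effect sizes), not the study data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
age_group = rng.choice(["65-74", "75-84", "85+"], size=n)
iss = rng.integers(1, 30, size=n)              # injury severity score
multi_comorb = rng.integers(0, 2, size=n)      # >1 comorbidity (yes/no)

# Simulate in-hospital death from an assumed logistic model
lin = -5.0 + 1.0 * (age_group == "85+") + 0.08 * iss + 1.0 * multi_comorb
died = (rng.random(n) < 1.0 / (1.0 + np.exp(-lin))).astype(int)

df = pd.DataFrame({"died": died, "age_group": age_group,
                   "iss": iss, "multi_comorb": multi_comorb})

# Reference category 65-74, matching the comparisons reported in the study
fit = smf.logit("died ~ C(age_group, Treatment('65-74')) + iss + multi_comorb",
                data=df).fit(disp=False)
print(np.exp(fit.params))   # exponentiated coefficients = odds ratios
```

On the simulated data the recovered odds ratios approximate the assumed effects; with real data, the same call would yield adjusted odds ratios of the kind reported in the Results.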
Bivariate analyses were performed using the χ²-test, ANOVA and the Wilcoxon rank-sum test, as appropriate. The independent effects of age group, Injury Severity Score (ISS) and previous medical history (whether the patient had two or more comorbidities) were evaluated using a multivariable logistic regression analysis. All statistical tests were two-sided. p-values < 0.05 were considered statistically significant. All statistical analyses were performed using SAS software (Statistical Analysis Systems Inc., Cary, NC, USA). Confidence intervals (CIs) were two-sided, unless otherwise stated. This was done to evaluate whether age and the presence of more than one comorbidity were independently predictive of death (at hospital discharge) on multivariable logistic regression analysis.

Results

There were 1285 geriatric patients (age > 65 years) that presented to our level 1 trauma center for trauma activation (hospital alert to mobilize surgical trauma service, emergency department trauma team, nursing, and ancillary staff for highest level of critical care) with a low-energy blunt trauma (fall from sitting or standing) during the study period; 34.8% of the patients were men, 89.6% were white and 20.5% had at least one comorbidity. The median injury severity score (ISS) was 5, and the median hospital length of stay (LOS) was also 5 days. Injury and clinical characteristics are demonstrated in Table 1. Thirty-seven (2.9%) patients died, with mortality determined to be due to the trauma itself. When the patients were divided into three age groups, significant differences were noted between the groups in sex (p = 0.01), race (p < 0.01), injury to extremities (p < 0.01), smoking (p < 0.001) and disposition (p < 0.0001) (Table 2). While patients in the 65-75 age group had a higher rate of COPD (p = 0.01) compared to the two other age groups, there was no significant difference in the rates of CRF, HTN or MI. Patients who died had a significantly higher incidence of chronic renal failure (CRF) (p = 0.02) compared to those who did not; there was no significant difference in the rates of COPD, MI or HTN between the two groups. Age of 85 or more (OR 3.44 with 95% CI 1.01-11.7 when compared to the 65-74 age group, and OR 2.85 with 95% CI 1.0-6.76 when compared to the 75-84 age group), injury severity score (ISS) (OR 1.08, 95% CI 1.02 to 1.15) and the presence of more than one comorbidity (OR 2.68, 95% CI 1.26 to 5.68) were independently predictive of death (at hospital discharge) on multivariable logistic regression analysis (Table 3).

Discussion

We found that, among geriatric patients sustaining low-energy blunt trauma (falls from sitting or standing), age equal to or higher than 85, a higher injury severity score (ISS) and having more than one comorbidity were predictive of death. These findings will have a significant impact on the care of these fragile patients. The oldest group (85+ years), in addition to having the highest mortality, had a higher percentage of patients requiring the services of acute rehabilitation hospitals and skilled nursing facilities and the smallest percentage of patients being discharged directly home, which further emphasizes that the differences between this group and the two younger ones involve worse outcomes beyond mortality alone. Geriatric patients are at risk of sustaining serious injuries even when the mechanism is a low-energy one. Krappinger et al. showed that these patients are at risk for arterial hemorrhage from low-energy pelvic trauma, and Schrag et al.
showed that these patients need imaging of the cervical spine when sustaining a low-energy mechanism injury, as clinical indicators were inadequate to rule out such injuries, since these patients are prone to serious injuries despite the mechanism [7,8]. Our data suggest that worse outcomes are present among these patients, especially among the older geriatric group. The older age group in this study had a statistically significant difference in race composition, including a larger percentage of white patients and a smaller percentage of other races (black and "other" races) compared to the two other groups, which could have contributed to the difference in mortality in this paper, as multiple authors have demonstrated differences in trauma outcomes associated with race across all age groups [9][10][11]. Sammy et al., in a meta-analysis of the published literature, found that while multiple factors affect the mortality of older patients, demographics (age and gender), pre-existing comorbidities, and injury severity and mechanism were significantly predictive of death [12]. A second systematic review, conducted by Hashmi et al., concluded that overall mortality in geriatric trauma patients increases with age, with those older than 74 having twice the odds of mortality compared to those in the 65-74 age range; other studies had similar findings [13][14][15][16][17][18]. Anemia, too, has been implicated as yet another risk factor for mortality [4]. Although these studies included all mechanisms and levels of severity of injury in older patients, our results are in agreement with their analysis, as older age, higher ISS and having more than one comorbidity were associated with death in our study. We assert that such elderly, comorbid patients should be approached with care beyond early goals-of-care conversations and standard practices. Goals of care permitting, such patients should be managed aggressively with a multidisciplinary approach to address all injuries and comorbidities. Perhaps more importantly, fall risk assessments and preventative measures should be implemented on a community level. Numerous evidence-based assessment tools are available to identify both patient and environmental factors that can contribute to falls. Multifactorial approaches are likely required in the prevention of such traumas; such programs are themselves another area of active research. Our study has certain limitations. First, this is a single-center, retrospective study, decreasing the generalizability of our findings. Second, the study is limited to one year; multi-year data would enable us to examine a larger number of patients and potentially obtain a better evaluation of the factors impacting death in this patient population. Despite this, a sizable sample was still obtained. Third, there were multiple statistically significant differences between age groups in terms of injuries. While this may cause difficulties when comparing age groups, such differences are not unexpected. Pre-existing conditions in the elderly are both distributed heterogeneously and affect patient outcomes. Finally, our data set specifies low-energy blunt trauma. While this makes our results less generalizable to all geriatric trauma patients, it highlights an important subset of patients with an increasingly common mechanism of injury in this age group.

Conclusions

In conclusion, geriatric trauma patients sustaining low-energy blunt trauma, including falls from sitting or standing, are at risk of death.
Factors shown in this study to be associated with mortality on multivariable analysis included age of 85 or more, increasing injury severity score and having more than one comorbidity. Paying close attention to geriatric trauma patients who meet these criteria could result in improved outcomes. Further studies are needed to confirm these results. Author Contributions: N.P., interpretation of results, manuscript drafting, critical review; T.N.L., interpretation of results, manuscript drafting, critical review; S.D., statistical analysis; S.P., study design, interpretation of results; T.K., interpretation of results, critical review; M.C., interpretation of results, critical review; S.A., interpretation of results, critical review; A.R., interpretation of results, manuscript drafting; C.G., data collection, interpretation of results; F.D., interpretation of results, critical review; G.G., interpretation of results, critical review; K.A., interpretation of results, critical review; B.K., interpretation of results, critical review; A.S., interpretation of results, critical review; A.G., interpretation of results, critical review; D.Y., study design, interpretation of results, manuscript drafting. All authors have read and agreed to the published version of the manuscript.
Internet survey on the actual situation of constipation in the Japanese population under 70 years old: focus on functional constipation and constipation-predominant irritable bowel syndrome

Background: In Japan, the prevalence of constipation-predominant irritable bowel syndrome (IBS-C) and functional constipation (FC) diagnosed by the Rome III criteria is unclear, as are the demographic profile, quality of life (QOL), and habits of persons with IBS-C or FC. Methods: We performed an internet survey of constipation. After extracting 3000 persons fitting the composition of the general Japanese population, we investigated demographic factors, lifestyle, defecation, and laxatives. IBS-C and FC were diagnosed by the Rome III criteria. Respondents also completed the Japanese IBS severity index (IBS-SI-J), the Japanese IBS QOL scale (IBS-QOL-J), the SF-8, the Hospital Anxiety and Depression Scale (HADS), and the Japanese Health Practice Index (JHPI). Results: There were 262 respondents with FC (8.73%) [73 men and 189 women; mean age: 49.8 ± 13.1 years; mean body mass index (BMI): 21.0 ± 3.3 kg/m2] and 149 respondents with IBS-C (4.97%) (76 men and 73 women; mean age: 41.6 ± 13.7 years; mean BMI: 20.8 ± 3.0 kg/m2). The total IBS-QOL-J score was significantly lower in the IBS-C group than the FC group. With regard to the SF-8, the mental component summary (MCS) score was significantly lower in the IBS-C group. The total IBS-SI-J score and item scores, except for satisfactory defecation, were significantly higher in the IBS-C group than the FC group. HADS showed a significant increase of anxiety and depression in both groups, and the JHPI revealed insufficient sleep. Conclusions: In Japan, among the population under 70 years old, the prevalence of IBS-C and FC (Rome III criteria) was 4.97% and 8.73%, respectively. IBS-C caused more severe symptoms than FC, resulting in impairment of QOL. Electronic supplementary material: The online version of this article (10.1007/s00535-019-01611-8) contains supplementary material, which is available to authorized users.

Introduction

According to the 2016 Comprehensive Survey of Living Conditions conducted by the Japanese Ministry of Health, Labour and Welfare, the prevalence of constipation ranges from 2 to 5% in Japan, with a higher frequency in females than males (4.6% vs. 2.5%) [1]. However, the actual situation of constipation in the general Japanese population is unclear, including the methods used to control it, particularly if we consider not only patients receiving medical treatment, but also persons who control their symptoms with over-the-counter drugs and supplements. Since constipation-predominant irritable bowel syndrome (IBS-C) and functional constipation (FC) were defined by the Rome criteria [2], many epidemiological studies have been conducted in Europe and the USA [3][4][5][6], but there have been few investigations of the actual status and profile of Japanese persons with FC or IBS-C [7][8][9]. In addition, effective countermeasures for constipation, such as modification of the diet and lifestyle, have not been established, and the selection criteria for medications to treat this condition remain unclear. Moreover, there have not been any studies comparing FC to IBS-C with regard to these points.
Therefore, investigation of the use of laxatives and characteristic countermeasures for constipation, the influence of constipation on the quality of life (QOL), and the lifestyle and exercise habits of persons with constipation could provide useful data for the development and assessment of therapeutic strategies. Accordingly, we performed an internet questionnaire survey of constipation in Japan and extracted respondents who had FC or IBS-C according to the Rome III criteria [2,9,10], with the objective of determining the characteristics of these two groups with different types of constipation in the Japanese population.

Methods

From October 8 to 11 in 2016, a preliminary internet questionnaire about constipation was completed by 10,000 Japanese panelists aged 20-69 years. Hospital admission and use of medications were not considered as inclusion/exclusion criteria. Men or women who gave informed consent to participation were enrolled in the internet survey. The following persons were excluded from the survey:
1. Persons with a history of abdominal surgery, other than appendectomy.
2. Persons with small and large intestinal diseases, such as inflammatory bowel disease (ulcerative colitis or Crohn's disease).
3. Persons with bowel cancer or other cancers.
4. Women who were pregnant.
5. Persons with a history of gastric/intestinal disease, such as gastric or duodenal ulcer, hemorrhoids, diverticulitis, or diverticulum.
6. To exclude secondary constipation, persons with a history of cerebral infarction were ineligible, as were persons with neurological disease, chronic obstructive lung disease, hepatic disease, or renal disease.
7. To exclude drug-induced constipation, persons with diabetes using oral antidiabetic agents or insulin were ineligible, as were persons with hypertension using antihypertensive agents, and persons taking chalybeate (mineral spring water), hypnotics, sedatives, or antipsychotic agents.
8. Persons who were unable to correctly follow the instructions for completing the survey.

Panelists who refused to answer the questionnaire or failed to complete it were classified as dropouts. Questionnaires with sufficient data for analysis were obtained from 9523 persons. Among the 4909 persons who responded to the question 'Do you think you have constipation?' (on a 5-item Likert scale) by selecting 'I strongly think I have constipation' or 'I think I have constipation', 3000 persons were randomly extracted according to the population composition ratio of the Ministry of Internal Affairs and Communications Statistics Bureau for the metropolis and districts of Japan (Fig. 1). The internet survey and the statistical analysis of the data were both performed by Rakuten Insight Inc. (Osaka, Japan).

Diagnosis of IBS-C and functional constipation according to Rome III criteria

The questionnaire about IBS-C and FC was prepared on the basis of the Rome III criteria [10,11]. According to Rome III, FC is diagnosed if two or more of the following symptoms have been present for at least 3 months (with symptom onset at least 6 months prior to diagnosis):
1. Straining during at least 25% of defecations.
2. Lumpy or hard stools in at least 25% of defecations.
3. Sensation of incomplete evacuation for at least 25% of defecations.
4. Sensation of anorectal obstruction/blockage for at least 25% of defecations.
5. Manual maneuvers to facilitate at least 25% of defecations (e.g., digital evacuation, support of the pelvic floor).
6. Fewer than three defecations per week.

In addition, loose stools are rarely present without laxatives, and there are insufficient criteria to make a diagnosis of irritable bowel syndrome. IBS is diagnosed if a person has had recurrent abdominal pain or discomfort at least 3 days per month in the last 3 months associated with two or more of the following:
1. Improvement with defecation.
2. Onset associated with a change in frequency of stool.
3.
Onset associated with a change in form (appearance) of stool.

Fulfillment of the criteria for the last 3 months, with symptom onset at least 6 months prior to diagnosis, is also required. For a diagnosis of IBS-C, there must be constipation in addition to the above symptoms, with a variable number of the features of FC. The major difference between the two conditions is that abdominal pain occurs in IBS-C, whereas FC is painless.

Investigation of the demographic profile

The following demographic factors were investigated in all of the participants: age, gender, obstetric history (for women), height, body weight, BMI, annual income, educational history, occupation, stool frequency, and use of laxatives (including the place of purchase and the monthly cost).

Investigation of symptoms, QOL, and mental symptoms

The severity of constipation was determined using the Bristol Stool Form Scale [12,13], which classifies stools into 7 types based on appearance. Among these 7 morphologic types, Types 1 and 2 (Type 1: separate hard lumps, like nuts, hard to pass; Type 2: sausage-shaped, but lumpy) indicate constipation; Types 3 (like a sausage, but with cracks on the surface), 4 (like a sausage or snake, smooth and soft) and 5 (soft blobs with clear-cut edges, passed easily) are "ideal"; and Types 6 (fluffy pieces with ragged edges, mushy stool) and 7 (watery, no solid pieces) indicate diarrhea. The Japanese IBS severity index (IBS-SI-J) [14] was used for determination of the severity of IBS-C. In addition, the QOL of the participants was assessed by using the Japanese IBS QOL scale (IBS-QOL-J) [15], which has 38 items that are each answered on a 5-point scale (0: absent/no, 1: slightly, 2: moderately, 3: strongly, 4: very strongly). Health-related QOL, particularly physical and mental health, was investigated by using the SF-8, based on 8 subscales [physical functioning, role (physical), bodily pain, general health, vitality, social functioning, role (emotional), and mental health]. Mental symptoms were assessed by employing the Hospital Anxiety and Depression Scale (HADS) [16], which has 7 items related to depression and 7 items related to anxiety.

Investigation of lifestyle and diet

The influence of symptoms related to IBS-C and FC on the lifestyle of the participants was evaluated using the Japanese Health Practice Index (JHPI) [17]. Based on the Stanford University criteria [18], foods were classified into 17 food groups according to the content of fermentable oligosaccharides, disaccharides, monosaccharides and polyols (FODMAPs). The foods in the different food groups were then divided into high-FODMAP and low-FODMAP foods, according to a previous report (Table 3) [19].

Statistical analysis

The data are presented as mean ± standard deviation (SD) or median [interquartile range (IQR)]. The unpaired t-test or the 2-sample Wilcoxon rank-sum test (Mann-Whitney U test) was used for inter-group comparison of numerical or ordinal scale data, as appropriate. The Chi-square test or residual test was used for inter-group comparison of categorical data, as appropriate. In all analyses, the level of significance was set at 0.05 (two-sided). Holm's method was used to correct for the multiplicity of testing. The statistical analyses were performed using SPSS ver. 23.0 for Windows (IBM Japan, Ltd., Tokyo, Japan).

Ethical considerations

This study was approved by the institutional review board of Aichi Medical University (October 6, 2016; approval no. 2016-H025).
This study was carried out in conformity with the principles of the Declaration of Helsinki and the Ethical Guidelines for Medical and Health Research Involving Human Subjects enacted by the Japanese Ministry of Education, Culture, Sports, Science and Technology and the Ministry of Health, Labour and Welfare (December 22, 2014).

Results

Demographic profile of the FC and IBS-C groups

A total of 262 subjects (8.73% of the total survey population) were classified into the FC group, including 73 men (27.9%) and 189 women (72.1%). The FC group had a mean age of 49.8 ± 13.1 years, and the mean BMI was 21.0 ± 3.3 kg/m2. Another 149 subjects (4.97% of the total survey population) were classified into the IBS-C group, including 76 men (51.0%) and 73 women (49.0%). The IBS-C group had a mean age of 41.6 ± 13.7 years, and the mean BMI was 20.8 ± 3.0 kg/m2 (Table 1). While the FC group showed female predominance and was significantly older than the IBS-C group, there was no difference of BMI between the 2 groups. In addition, the IBS-C group included significantly more persons in their 20s compared with the FC group (28.2% vs. 9.5%, p < 0.001), as well as significantly more persons in their 30s (24.8% vs. 16.8%, p = 0.049). On the other hand, the FC group had significantly more persons in their 50s than the IBS-C group (18.7% vs. 10.1%, p = 0.020) and also had significantly more persons in their 60s (32.8% vs. 16.1%, p < 0.001). Overall, a significantly higher percentage of the IBS-C group was aged < 40 years compared with the FC group (53.0% vs. 26.3%, p < 0.001), while persons aged ≥ 40 years accounted for a larger percentage of the FC group (73.7% vs. 47.0%) and the majority of the participants in this group were elderly (Table 1). There were no significant differences in the places of residence between the two groups, although there was a tendency for persons from the IBS-C group to be more likely to live in urban areas such as the Kanto area (IBS-C group: 38.3% vs. FC group: 35.5%), the Kinki area (IBS-C group: 24.2% vs. FC group: 22.9%), and the Chubu area (IBS-C group: 21.5% vs. FC group: 13.4%), while persons from the FC group were more likely to live in rural areas such as Hokkaido (FC group: 4.6% vs. IBS-C group: 2.7%), Tohoku (FC group: 8.0% vs. IBS-C group: 2.7%), and Kyushu (FC group: 9.2% vs. IBS-C group: 3.4%). There were no significant differences in educational background between the two groups. However, the IBS-C group had a slightly higher level of academic qualifications than the FC group, with 34.7% of the FC group finishing their education at the high school level compared with 36.9% of the IBS-C group. There were also no significant differences in annual income, which was < 6 million yen and ≥ 6 million yen in similar proportions in both groups. Finally, there were no significant differences in occupation. In both groups, the most frequent occupation was office worker/public servant, followed by part-time worker in the IBS-C group and by home duties in the FC group (Table 1).

Defecation

The frequency of passing stools was less than 3 times per week in a significantly higher percentage of persons from the FC group than the IBS-C group (51.5% vs. 44.3%, p < 0.05) (see Supplementary Fig. 1). While a similar proportion of respondents in both groups passed stools less than once a week or twice a week, only 8.7% of the IBS-C group passed stools once a week versus 13.7% of the FC group. There were no significant differences in the Bristol scale between the two groups, with hard stools of Types 1-2 being frequent in both groups (about 40% of bowel motions) and normal to diarrheal stools accounting for 25-28% of bowel motions in both groups (see Supplementary Fig. 1).
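As a side note on the multiplicity correction mentioned under "Statistical analysis", the following minimal sketch shows how Holm's step-down adjustment works; the p-values are invented for illustration and are not taken from the survey.

```python
# Minimal sketch of the Holm step-down adjustment; the input p-values
# are illustrative placeholders, not results of this survey.
def holm_adjust(pvals):
    """Return Holm-adjusted p-values in the original order."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # indices by ascending p
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, idx in enumerate(order):
        adj = min(1.0, (m - rank) * pvals[idx])  # step-down scaling factor
        running_max = max(running_max, adj)      # enforce monotonicity
        adjusted[idx] = running_max
    return adjusted

raw = [0.004, 0.030, 0.047, 0.200]   # e.g. four subscale comparisons
print(holm_adjust(raw))              # [0.016, 0.09, 0.094, 0.2]
```

Note that a raw p-value of 0.047, nominally significant at the 0.05 level, is no longer significant after adjustment (0.094); this is the same kind of effect as seen for the PCS comparison reported below.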
Laxatives

When the use of laxatives was investigated, significantly fewer persons used laxatives in the IBS-C group compared with the FC group (40.9% vs. 77.1%, p < 0.05). In addition, significantly fewer persons used irritant laxatives in the IBS-C group (11.4% vs. 22.1%, p < 0.05), but there was no significant difference between the two groups with regard to the use of salt laxatives (Fig. 2). With respect to the source of laxatives, these were significantly more frequently purchased at a pharmacy by respondents from the FC group than by respondents from the IBS-C group (66.0% vs. 49.0%, p < 0.05). When the monthly cost of laxatives was investigated, a cost of less than 1000 yen was the most common answer in both groups. Persons who spent 5000 yen or more on laxatives were slightly more frequent in the IBS-C group, but the difference was not significant (Fig. 2).

Physical symptoms, QOL, and mental symptoms

When the IBS-SI-J was assessed, the total score was significantly higher in the IBS-C group than in the FC group (p < 0.001) (Fig. 3). In addition, the scores for symptoms having an influence on daily life were all significantly higher in the IBS-C group compared with the FC group, including the severity of abdominal pain (p < 0.001), the frequency of abdominal pain (p < 0.001), the severity of bloating (swollen or tight tummy) (p < 0.001), and the extent to which IBS affects or interferes with life in general (p < 0.05). With respect to the IBS-SI-J, both the total score and the frequency of moderate or severe symptoms were significantly higher in the IBS-C group than the FC group (202.4 ± 89.2 vs. 159.3 ± 80.0 and 56.7% vs. 44.6%, both p < 0.05). Concerning the severity of evacuation difficulties as specified by the Rome III criteria, the frequency of 3 items ("Straining during at least 25% of defecations", "Lumpy or hard stools in at least 25% of defecations", and "Sensation of anorectal obstruction/blockage for at least 25% of defecations") was significantly higher in the IBS-C group than the FC group (100% vs. 80.5%, 100% vs. 80.2% and 100% vs. 97.3%, respectively, all p < 0.001). On the other hand, the frequency of "Manual maneuvers to facilitate at least 25% of defecations (e.g., digital evacuation, support of the pelvic floor)" was significantly higher in the FC group than the IBS-C group (6.9% vs. 0.0%, p < 0.001). There was no significant difference in the frequency of a "Sensation of incomplete evacuation for at least 25% of defecations" between the groups, but it tended to be noted more often in the IBS-C group than the FC group (90.6% vs. 87.8%). When the SF-8 was investigated, it was found that the scores for the physical component summary (PCS) (p < 0.05) and the mental component summary (MCS) (p < 0.0001) were significantly lower in the IBS-C group than in the FC group. However, after adjustment by Holm's method, the PCS difference was not significant. In addition, except for physical functioning (PF) and role physical (RP), the scores for the other subscales were significantly lower in the IBS-C group than in the FC group (Fig. 4a). With respect to the IBS-QOL-J, scores for the following components were significantly higher in the FC group: dysphoria (p < 0.01), interference with activity (p < 0.05), health worry (p < 0.05), social reaction (p < 0.05), and relationships (p < 0.05). However, there were no significant differences between the two groups regarding the scores for body image, food avoidance, and sexual problems.
However, after adjustment by Holm's method, only the total score (p < 0.05) and dysphoria (p < 0.01) were significantly lower in the IBS-C group than in the FC group (Fig. 4b). On the other hand, the total HADS anxiety score and the total depression score were both significantly higher in the IBS-C group compared with the scores in the FC group (both p < 0.001). However, although the positive rate of anxiety was significantly higher in the IBS-C group, no significant difference was noted in depression between the groups (see Supplementary Figure 2 and Supplementary Table 1).

Lifestyle and FODMAP intake

The JHPI lifestyle survey [17] revealed that the frequency of getting 'enough sleep' was significantly lower in the IBS-C group than the FC group (p = 0.016) (Table 2). Although persons taking sedatives/hypnotics were excluded from this survey to avoid drug-induced constipation, it is interesting that fewer than half of the respondents in either group were able to obtain sufficient sleep. On the other hand, lifestyle factors such as smoking and drinking alcohol did not show a significant difference between the two groups, and neither did items related to exercise. There were also no significant differences in items related to weight gain or items related to eating habits between the IBS-C group and the FC group. Regarding diet, the frequency of eating certain high-FODMAP foods was significantly higher in the FC group than the IBS-C group, including bread (wheat or rye) and fruits (apples, pears, apricots, and watermelon) (Table 3). The frequency of eating certain low-FODMAP foods was also significantly higher in the FC group than the IBS-C group, including some fruits (mandarins, bananas, and strawberries), some vegetables (spinach, carrots and potatoes) and hard cheese. On the other hand, the intake of certain low-FODMAP grains was significantly higher in the IBS-C group than the FC group. The intake of isomerized sugar was also significantly higher in the IBS-C group compared with the FC group.

Discussion

In 2017, clinical practice guidelines for constipation were published in Japan [19], so it is hoped that this condition will attract increased recognition and that new evidence-based treatments will be developed. The present survey was performed to investigate the prevalence of IBS-C and FC as defined by the Rome III criteria [2] among persons with constipation who fitted the demographic profile of the general Japanese population under 70 years old. Heidelbaugh et al. previously reported that the prevalence of IBS-C and FC by the Rome III criteria [2] in the USA was 3.3% and 5.5%, respectively [5]. Generally, the prevalence of IBS-C has been reported to be approximately 12% in Europe and the USA [20], while the reported prevalence ranges from 7 to 17% in Asia [20]. According to Saito et al. [21], IBS has a prevalence of 14.2% in the general Japanese population, with a 1-year morbidity rate of 1-2%, while another study showed that its prevalence was as high as 31% among the outpatients of internal medicine departments [22]. Evaluation of FC by the Rome criteria (I, II, and III) has not been well documented, because this concept was not clear in Rome I. With respect to IBS-C, a meta-analysis showed that the prevalence of IBS according to the Rome I, Rome II, and Rome III criteria was 8.8%, 9.4%, and 12.2%, respectively [20]. Thus, sensitivity increased across the criteria, suggesting that caution should be exercised when evaluating the prevalence of IBS-C.
Between one-quarter and one-third of IBS patients are thought to have IBS-C, corresponding to around 4.7% of the general population, which seems to be similar to our result regarding the prevalence of IBS-C. In the present study, we excluded persons with secondary constipation or drug-induced constipation, which may explain the lower prevalence of constipation than in other reports. In general, it has been reported that the prevalence of constipation increases with aging [1,23], while the prevalence of IBS decreases [24]. In the present study, we found that a significantly higher percentage of the IBS-C group was aged < 40 years compared with the FC group, while a larger percentage of the FC group was aged ≥ 40 years [20]. Accordingly, it can be suggested that the pathogenesis of IBS may be age-related or strongly influenced by age, but further studies will be needed to elucidate this potential relationship. It has been reported that the prevalence of constipation is lower among persons with a higher socioeconomic status [3,21], while the opposite trend has been identified for IBS [25]. However, we could not find any significant differences in socioeconomic status between the IBS-C group and the FC group in the present study, possibly because there is less disparity of annual income and educational background in Japan than in Europe or the USA. There were also no significant differences in the place of residence between the two groups, although respondents from the IBS-C group were more likely to live in urban areas such as the Kanto, Chubu, and Kinki areas than respondents from the FC group, while those from the FC group were more likely to live in rural areas such as Hokkaido, Tohoku, and Kyushu. Because IBS is more frequent among people living in cities, there is a possibility that the prevalence of IBS-C would also be higher in larger cities, where life is more stressful. To confirm this, it would be necessary to investigate not only the geographical place of residence, but also the population of the cities or towns in which the respondents lived. With respect to items regarding lifestyle from the JHPI [17], the percentage of respondents who answered 'I eat faster than other people' was higher in the FC group, and the percentage who answered 'I tend to skip breakfast' was higher in the IBS-C group. According to a previous investigation of the characteristics associated with constipation, skipping breakfast is significantly more frequent among persons with constipation than among healthy persons [26]. In agreement with this report, we found that persons in the IBS-C group skipped breakfast more often than those in the FC group. Diarrhea associated with IBS is often more frequent in the morning than at night, which can affect commuting to work or going to school, and it has been reported that IBS patients tend to skip breakfast in order to avoid diarrhea [27]. Our study identified the same behavior in IBS-C patients, who show predominance of constipation over diarrhea, but it is possible that they skipped breakfast to avoid aggravation of abdominal discomfort. We also found that the use of laxatives was low in the IBS-C group. A stool frequency of < 3 times a week, which is important in the Rome III criteria, was significantly less common in the IBS-C group than in the FC group, so it is possible that the respondents with IBS-C may have considered laxatives unnecessary. Alternatively, they may have thought that use of laxatives could aggravate their symptoms.
It has been reported that the role of psychological factors in IBS increases along with the severity of this condition [1]. Typical psychological abnormalities associated with IBS are reported to be depression and anxiety, followed by somatization [28]. In addition, stress during early life was reported to be a risk factor for the development of IBS [29], and it has been found that patients with IBS display catastrophe-oriented thinking and show digestive tract-specific anxiety [29]. Many authors have described the existence of a relation between psychological abnormalities and constipation. IBS-C patients have significantly more upper abdominal symptoms than diarrhea-related symptoms, and several previous studies have demonstrated that QOL is significantly worse when IBS is accompanied by upper abdominal symptoms [24,30,31]. According to a study performed in the USA, both anxiety disorder (odds ratio: 3.02) and depression (odds ratio: 2.31) were significantly more frequent among IBS patients than among age- and gender-matched controls [32]. Likewise, a systematic review of 10 case-control studies performed in Europe comparing healthy subjects with IBS patients demonstrated a significantly higher prevalence of anxiety and depression among the IBS patients [25]. According to our findings in the present study, QOL was reduced by constipation, and the existence of constipation was closely related to stress. We also showed that some parameters of QOL were significantly worse in the IBS-C group compared with the FC group. Therefore, it is considered that careful evaluation of stress and maintaining good relations with patients should form the basis of medical care for persons who have constipation or IBS [33]. With respect to diet, interesting results were obtained through comparison of the intake of high- and low-FODMAP foods by the two groups [34]. In the IBS-C group, the intake of certain high-FODMAP foods, including fruits (such as apples, pears, apricots, and watermelon) and bread, was lower than in the FC group, as was the intake of certain low-FODMAP foods (hard cheese, mandarins, bananas, and some vegetables including spinach, carrots and potatoes). It has been reported that low-FODMAP foods are associated with less abdominal distention and are effective for diarrhea in persons with IBS, but the effect of such foods on constipation is unknown [24,[28][29][30]35]. The results obtained in the present study suggest that persons with IBS-C may empirically select foods that are less likely to cause abdominal symptoms such as bloating and abdominal pain. On the other hand, we found that persons with FC preferred high-FODMAP foods that could improve their bowel movements and were not so concerned about the potential risk of developing abdominal symptoms.

Limitations

When this survey was conducted in 2016, validation of the Japanese version of the Rome IV criteria had not been completed, so we could not use the Rome IV criteria; instead, we used the Japanese version of the Rome III criteria, which had been validated. If the Rome IV criteria could have been used, the results of our survey might have been somewhat different. Because this study was based on an internet survey, there is some risk that the data obtained are unreliable. However, we concluded that this survey was likely to be sufficiently reliable, because the respondents were from a registered panel and the identity of each participant was confirmed by a research company. This survey involved 3000 subjects who were randomly extracted to match the profile of the general Japanese population [1].
Because the age range of the panel was 20-69 years, different results may have been obtained if an older panel had been interviewed. However, the age range of the present panel was considered appropriate for research on functional gastrointestinal disease, especially IBS [31]. While internet surveys have certain limitations, they offer the advantage that data on many parameters can be obtained directly from the subjects themselves in a short time, without requiring any intervention by healthcare workers. Thus, internet surveys seem to be useful for performing cross-sectional studies from the perspective of obtaining patient-reported outcomes, without the risk of bias due to data collection by healthcare personnel.

Conclusions

Among Japanese persons with constipation aged under 70 years, an internet survey demonstrated that the prevalence of FC and IBS-C conforming to the Rome III criteria was 8.73% and 4.97%, respectively. Compared with the respondents who had FC, those with IBS-C were found to be younger and to have more severe symptoms, along with worse QOL and a higher prevalence of anxiety and depression. Respondents with FC or IBS-C also showed differences in relation to their lifestyle and diet, including the intake of high- and low-FODMAP foods, as well as differences regarding the management of constipation, with the use of laxatives being significantly more frequent in the FC group. It is hoped that these findings will prove useful for increasing our understanding of this common and troublesome condition, as well as providing some clues to improve its management.
Factors Determining Ignition and Efficient Combustion in Modern Engines Operating on Gaseous Fuels

Introduction

Recently, in the automotive industry, gaseous fuels, and particularly compressed natural gas (CNG), are being applied more and more often in both SI and CI engines. The application of CNG in spark-ignition internal combustion engines is now more realistic than ever before. Many designs of diesel engines fuelled by natural gas are known, in which the gas is injected into the inlet pipes. Because of the higher octane number of natural gas, the compression ratio of SI engines can be increased, which raises the total combustion efficiency. In diesel engines, the compression ratio has to be decreased as a result of the homogeneity of the mixture flowing into the cylinder. Such a mixture cannot self-ignite in traditional diesel engines because of the higher octane number of CNG. Direct injection of compressed natural gas also requires high energy from the ignition system. A natural tendency in the development of piston engines is to increase the air pressure in the inlet system by applying a high level of turbocharging or mechanical supercharging. A naturally aspirated SI engine fuelled by natural gas has a lower thermodynamic efficiency than a diesel engine. Experiments conducted on an SI engine fuelled by CNG with lean homogeneous mixtures show that a better solution is the concept of a stratified charge with CNG injection during the compression stroke. The information presented in this chapter is based on the authors' own research and scientific work, partly described in earlier scientific papers. A wider discussion is given of the main factors influencing the ignition of natural gas in combustion engines, because of its high ignition temperature, particularly at high pressure. The chapter presents both theoretical considerations of CNG ignition and experimental work carried out at different air-fuel ratios and initial pressures.

Gas engines play a more and more important role in the automotive sector. This is caused by the decreasing crude oil deposits and the ecological requirements set by international institutions concerning the reduction of toxic components in exhaust gases. Internal combustion engines should reach high power with low specific fuel consumption and very low exhaust emissions of chemical components such as hydrocarbons, nitrogen oxides, carbon monoxide and, particularly for diesel engines, soot and particulate matter. The chemical components formed during the combustion process depend on the chemical structure of the fuel used. Particularly for spark-ignition engines, a high octane number of the fuel is needed to allow a higher compression ratio, which increases the thermal efficiency of the engine and thus also its total efficiency.
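As a rough quantitative illustration of this efficiency argument, the ideal Otto-cycle relation η = 1 − r^(1−γ) (a textbook formula, not one derived in this chapter) shows how thermal efficiency grows with the compression ratio r that a high-octane fuel such as CNG permits; the value of γ below is an assumption.

```python
# Ideal Otto-cycle thermal efficiency, eta = 1 - r**(1 - gamma).
# Textbook relation used only to illustrate why a higher compression
# ratio r (allowed by high-octane CNG) raises thermal efficiency;
# gamma is an assumed effective ratio of specific heats.
GAMMA = 1.35

def otto_efficiency(r, gamma=GAMMA):
    return 1.0 - r ** (1.0 - gamma)

for r in (10, 12, 14):
    print(f"r = {r:2d}: eta = {otto_efficiency(r):.3f}")
```

Raising r from 10 to 14 improves the ideal efficiency from roughly 0.55 to 0.60, which is the incentive behind exploiting the high octane number of natural gas.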
Thermal and dynamic properties of gas fuels

A mixture of fuel and oxygen ignites only above a defined temperature. This temperature is called the ignition temperature (self-ignition point). It depends on many internal and external conditions and is therefore not a constant value. Besides that, for many gases and vapours two points are distinguished, the lower and the higher ignition point (detonation boundary); these two points determine the boundary values between which ignition of the mixture can occur. Table 1 presents the ignition temperatures of stoichiometric mixtures of different fuels with air.

Table 1. Fuel vs. ignition temperature [°C].

A combustible mixture containing fuel gas and air can ignite only within strictly defined limits of the fuel content in the air. Natural gas contains many hydrocarbons, but it mostly consists of above 75% methane. Two types of natural gas were used for the experimental tests:
1. the certified model gas G20, which contains 100% methane, compressed in bottles at a pressure of 200 bar, with a lower heating value of 47.2-49.2 MJ/m3;
2. the certified model gas G25, which contains 86% methane and 14% N2, with a lower heating value of 38.2-40.6 MJ/m3.

Because natural gas contains many hydrocarbons with varying concentrations of the individual species, the heating value of the fuel is not constant. This also influences the ignition process, which depends on the lower ignition temperature of the fuel and on the energy induced in the secondary circuit of the ignition coil. For comparison, Table 2 presents the ignition limits and temperatures for some technical gases and vapours in air at a pressure of 1.013 bar. The data show a much higher ignition temperature for natural gas (640-670 °C) than for gasoline vapours (220 °C). For this reason, the gasoline-air mixture requires much lower ignition energy than the CNG-air mixture. However, the higher pressure during the compression process in an engine with a higher compression ratio, as in a charged SI engine, also causes a higher temperature, which can assist ignition of the mixture when a high-energy ignition system is used. Because of the lower carbon content of the fuel, engines fuelled by natural gas emit, from an ecological point of view, a much lower amount of CO2 and thus reduce the greenhouse effect on our earth.

Until now, only some laboratory experiments have been conducted with high-energy ignition systems for spark-ignition engines with direct CNG injection. Ignition systems are known for low-compression diesel engines fuelled by CNG injected into the inlet pipes.
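The statement about lower CO2 emission can be checked with a back-of-the-envelope stoichiometric estimate; the carbon mass fractions and lower heating values below are typical literature figures assumed only for this illustration.

```python
# Rough CO2-per-energy comparison from carbon mass fraction and lower
# heating value (LHV); the property values are assumed typical figures.
FUELS = {
    #            carbon mass fraction, LHV [MJ/kg]
    "methane":  (0.75, 50.0),   # CH4: 12/16 carbon by mass
    "gasoline": (0.86, 43.5),
    "diesel":   (0.86, 42.8),
}

M_CO2_PER_C = 44.0 / 12.0   # kg of CO2 formed per kg of carbon burned

for name, (c_frac, lhv) in FUELS.items():
    g_per_mj = c_frac * M_CO2_PER_C * 1000.0 / lhv
    print(f"{name:8s}: {g_per_mj:5.1f} g CO2 per MJ of fuel energy")
```

With these assumed properties, methane releases roughly 55 g CO2 per MJ versus roughly 73 g/MJ for the liquid fuels, i.e. about 25% less carbon dioxide for the same heat input.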
Fuelling methods and ignition in gas diesel engines

Several fuelling methods for natural gas are applied in modern compression-ignition engines, the most popular being the following:
• delivering the gas fuel into the inlet pipes by mixing fuel and air in a special mixer;
• low-pressure injection of gaseous fuel into the inlet pipe and ignition of the mixture in the cylinder by an electric spark;
• high-pressure direct injection of gaseous fuel, particularly in highly loaded engines.

The reasons for decreasing the compression ratio in the first two methods, and the aims of applying gaseous fuels in CI engines (lowering of CO2, elimination of soot and better formation of the fuel mixture), are discussed here. Applying either of the first two methods decreases the total engine efficiency in comparison to a standard diesel engine, as a result of the lowered compression ratio, and requires an additional high-energy ignition system to spark the mixture; these are the main disadvantages of applying gaseous fuel in CI engines. Figure 1 presents an example of the heat release of the dual-fuel, naturally aspirated, 1-cylinder compression-ignition engine Andoria 1HC102, fuelled by CNG with a small amount of diesel oil as the ignition dose. This type of engine is very promising, because the compression ratio is kept the same and a higher total efficiency is obtained. NG in gaseous form is pressured into the inlet pipe and then flows through the inlet valve into the cylinder. During the compression stroke, a small dose of diesel oil is delivered by the injector into the combustion chamber as an ignition dose.

Because the ignition temperature of diesel oil is lower than that of natural gas, ignition begins from the outer sides of the diesel oil streams. As a result of the high ignition temperature of natural gas, the combustion of the natural gas begins some degrees of CA later. The cylinder contains an almost homogeneous mixture before the combustion process, and for this reason the burning of the natural gas mixture proceeds longer than that of the diesel oil.

Figure 1 presents simulation results carried out for this engine in the KIVA3V program. At higher loads of the dual-fuel diesel engine, a higher mass of natural gas is delivered into the cylinder with the same mass of the ignition diesel oil. In order to obtain the same air excess coefficient λ as in the standard diesel engine, the following formula was used:

λ = m_air / [m_do · (A/F)_do + m_CNG · (A/F)_CNG]  (1)

where: m_air is the mass of air in the cylinder, m_do the mass of the diesel oil dose, m_CNG the mass of CNG in the cylinder, and A/F the stoichiometric air-fuel ratio of the given fuel.

Assuming a filling coefficient of 0.98, a charging pressure at the moment of closing of the inlet valve p0 = 0.1 MPa and a charge temperature T0 = 350 K, the air mass delivered to the cylinder with piston displacement Vs amounts to:

m_air = 0.98 · p0 · Vs / (R · T0)  (2)

For the considered dual fuelling, the equivalent air excess coefficients calculated by inserting into eq. (2) and then into eq. (1) amounted, respectively: 1) at n = 1200 rpm …

Variation of the mass of natural gas in the dual-fuel Andoria 1HC102 diesel engine at a rotational speed of 2200 rpm is shown in Figure 2.
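A minimal numerical sketch of the air-excess calculation of eqs. (1) and (2) is given below; the displacement, pilot dose, CNG mass and stoichiometric air-fuel ratios are assumed plausible values, not the exact data of the Andoria 1HC102 tests.

```python
# Sketch of the dual-fuel air-excess calculation of eqs. (1)-(2);
# all numerical inputs are assumed plausible values.
R_AIR = 287.0   # J/(kg K), specific gas constant of air

def air_mass(v_s, p0=1.0e5, t0=350.0, eta_v=0.98):
    """Eq. (2): trapped air mass from displacement and charge state."""
    return eta_v * p0 * v_s / (R_AIR * t0)

def lambda_dual(m_air, m_do, m_cng, afr_do=14.5, afr_cng=17.2):
    """Eq. (1): equivalent air excess coefficient for pilot diesel + CNG."""
    return m_air / (m_do * afr_do + m_cng * afr_cng)

m_air = air_mass(v_s=0.98e-3)   # ~1 dm3 single-cylinder displacement (assumed)
m_do = 6e-6                      # 6 mg pilot diesel dose (assumed)
m_cng = 40e-6                    # 40 mg CNG per cycle (assumed)
print(f"m_air  = {m_air * 1e3:.2f} g")
print(f"lambda = {lambda_dual(m_air, m_do, m_cng):.2f}")
```

With these inputs, the equivalent air excess coefficient comes out at about 1.2, i.e. a lean overall mixture, as is typical for dual-fuel operation.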
The principal period of the combustion process of the natural gas lasted about 80 deg CA, and its ignition began at TDC. In the real engine, the diesel oil injection started at 38 deg CA BTDC. The heat release from both fuels (CNG and diesel oil) is shown in Figure 3 for the same engine at a rotational speed of 2200 rpm. The total heat released during the combustion process results mainly from the higher burning mass of the natural gas. The ignition process in gas diesel engines with an ignition dose of diesel oil differs from the other systems applied in modified engines fuelled by natural gas delivered into the inlet pipe and ignited by a spark plug. The initiation of the combustion process in CNG diesel engines with spark ignition is almost the same as in spark-ignition engines.

Ignition conditions of natural gas mixtures

The flammability of natural gas is much lower than that of gasoline vapours or diesel oil at the same temperature. At higher pressure, the spark-over is more difficult than at lower pressure. During the compression stroke, the charge near the spark plug can be characterized by a certain internal energy and turbulence energy. The additional energy given by the spark plug over a short time of about 2 ms increases the total energy of the mixture near the spark plug.

The flammability of the mixture depends on the concentration of the gaseous fuel and on the turbulence of the charge near the spark plug. The maximum pressure and the velocity of the combustion process in the cylinder for a given rotational speed depend on the ignition advance angle before TDC (Figure 4). The beginning of mixture combustion follows after several degrees of crank angle rotation. During this period, certain chemical reactions take place in the mixture to form the radicals that can initiate the combustion process. The energy in the spark provides a local rise in temperature of several thousand degrees Kelvin, which causes any fuel vapour present to be raised above its auto-ignition temperature. The auto-ignition temperature determines the possibility of breaking the hydrocarbon chains, so that the charge has sufficient internal energy to oxidize the carbon into CO2 and water in the vapour state. Immediately after the beginning of combustion (ignition point), the initial flame front close to the spark plug moves in a radial direction into the space of the combustion chamber and heats the unburned layers of the air-fuel mixture surrounding it.
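To relate the compression conditions to the self-ignition temperatures quoted in Table 2, a simple polytropic estimate of the end-of-compression charge temperature can be made; the polytropic exponent below is an assumed effective value that lumps heat losses.

```python
# Polytropic estimate of end-of-compression temperature, T = T0 * r**(n-1);
# n is an assumed effective exponent, not a measured quantity.
def end_of_compression_temp(t0, r, n=1.32):
    return t0 * r ** (n - 1.0)

T0 = 350.0   # K, assumed charge temperature at inlet-valve closure
for r in (12, 16, 20):
    t = end_of_compression_temp(T0, r)
    print(f"r = {r:2d}: T = {t:5.0f} K ({t - 273.15:4.0f} C)")
```

Even at a compression ratio of 20, the estimated charge temperature (about 640 °C) only just reaches the 640-670 °C self-ignition range of natural gas, which is why a homogeneous CNG charge still needs spark energy to ignite.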
For direct injection of CNG at small engine loads in stratified charge mode, the burning of the mixture depends on the pressure at the end of the compression stroke and on the relative air-fuel ratio. These dependencies of CNG burning for different mixture compositions and compression ratios are presented in Figure 5 [15]. The CNG mixture can burn only in a very narrow range of compression pressure and lean mixture composition, and the maximum combustion pressure reaches nearly 200 bar. For very lean mixtures and higher compression ratios misfire occurs; on the other hand, for rich mixtures and high compression ratios detonation is observed. During cold start-up the ignition of the CNG mixture is much easier than with a gasoline mixture, because the whole fuel is in the gaseous state. Today, in new ignition systems with electronic or capacitor discharge, the secondary voltage can reach a value of 40 kV within a few microseconds. The higher voltage in the secondary circuit of the transformer and the faster spark rise ensure that sparking occurs even when the spark plug is covered by liquid gasoline. With fuelling of the engine by CNG, the sparking process should occur under every condition of engine load and speed. However, at higher compression ratio and higher engine charging, the final charge pressure increases dramatically at the moment of ignition, and this phenomenon influences the sparking process.

Electric and thermal parameters of ignition

Observations and tests done previously on conventional ignition systems show that a higher pressure of the charge in the cylinder requires a higher sparking energy or a smaller gap between the electrodes of the spark plug. The chemical delay of mixture burning is a function of the pressure, temperature and properties of the mixture and was given in correlation form by Spadaccini [12]. The simplest definition of this delay was given by Arrhenius on the basis of a semi-empirical dependence of the form

τ = C p^(-n) exp[E/(R T)]

where p is the charge pressure at the end of the compression process [daN/cm2], T is the charge temperature, and C, n and E are empirical constants of the mixture.
Experimental and theoretical studies divide the spark ignition into three phases: breakdown, arc and glow discharge. Each has particular electrical properties. The plasma, with a temperature above 6000 K and a diameter equal to that of the electrodes, causes a shock pressure wave lasting several microseconds. At an early stage a cylindrical ionization channel about 40 µm in diameter develops, together with a pressure jump and a rapid temperature rise. Maly and Vogel [10] showed that an increase in breakdown energy does not manifest itself in higher kernel temperatures; instead it enlarges the channel diameter, producing a larger activated-gas volume. Since the ratio between the initial temperature of the mixture and the temperature of the spark channel is much smaller than unity, the diameter d of the cylindrical channel is given approximately by the following expression:

d ≈ sqrt[4 (κ - 1) Ebd / (π h p)]

where κ is the ratio of the specific heats, h is the spark plug gap and p the pressure. Ebd represents the breakdown energy needed to produce the plasma kernel. Ballal and Lefebvre [6] gave corresponding expressions for the breakdown voltage Ubd and the total spark energy Et. It is assumed that the charge is an isotropic conductor and that the field attains a quasi-steady state (no time influence). Knowing the potential of the electromagnetic field φ and the electrical conductivity σ, the following equation can be used [12]:

∇ · (σ ∇φ) = 0

After the forming of the plasma between the electrodes, the heat source qe in the mixture can be calculated directly from the electrical current I in the secondary coil circuit, which changes with time; here r and z are the coordinates of the ionization volume. For a leaner homogeneous mixture, the discharge of energy by the spark plug sometimes leads to misfire and an increase of hydrocarbon emissions. For a stratified charge with the same total air-fuel ratio, the sparking of the mixture can be improved by directing the injected fuel close to the spark plug at a strictly defined crank angle depending on the engine speed. The energy from the spark plug is delivered to a small volume near the spark plug. The total energy induced by the spark plug is a function of the voltage and current in the secondary circuit of the ignition coil and of the discharge time. Since the voltage U and the current I change during the discharge, the total energy induced by the coil can be expressed as an integral of voltage U, current I and time t:

Et = ∫0..τ U(t) I(t) dt

where τ is the duration of the current discharge in the secondary circuit of the ignition coil. Integration of the measured values of voltage and current in the secondary circuit of the coil gives the total electric energy delivered to the mixture charge near the spark plug. The total internal energy of the mixture near the spark plug increases in the period t = 0..τ and, according to the energy balance in the small volume, the temperature of the charge in this region continuously increases. A modern conventional ignition system can deliver a burning energy eburn = 60 mJ at a secondary voltage of 30 kV and a burning current iburn = 70 mA during 1.8 ms. In practice the required value of the secondary voltage of the ignition system is calculated from a formula involving the secondary voltage U2 [V], the gap a between the electrodes of the spark plug and the compression ratio ε. For smaller gaps and compression ratios the secondary voltage can be decreased. The required secondary voltage as a function of compression pressure is presented in Figure 6 for different spark plug electrode gaps from 0.3 to 0.9 mm.
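The integral Et = ∫ U I dt is easy to evaluate from sampled oscilloscope traces. The sketch below uses synthetic waveforms: the 70 mA initial current and the 1.8 ms duration follow the text, while the assumed constant burning voltage is chosen so that the total lands near the quoted 60 mJ.

```python
# Total spark energy Et = integral of U(t) * I(t) dt over the discharge,
# evaluated numerically. The waveforms are synthetic placeholders for
# measured traces: current decaying linearly from 70 mA over 1.8 ms and
# an assumed constant burning voltage of ~950 V.
import numpy as np

tau = 1.8e-3                                # s, discharge duration
t = np.linspace(0.0, tau, 1000)
i = 0.070 * (1.0 - t / tau)                 # A
u = np.full_like(t, 952.0)                  # V (assumption)

energy = np.trapz(u * i, t)                 # J
print(f"spark energy: {energy * 1e3:.1f} mJ")   # ~60 mJ
```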
If one assumes that the electrical energy E is delivered during the period τ to a certain small volume V near the spark plug, with charge temperature T1, pressure p1 and a CNG concentration corresponding to the air excess coefficient λ, it is possible to calculate the change of the charge temperature in this space. On the basis of the gas state law and the balance of energy, the specific internal energy u of the charge in the next calculation step is defined as

u(i+1) = u(i) + dE/m

where i is the step of the calculation and dE is the energy delivered from the spark plug in the time step dτ. The internal energy is a function of the charge mass m and temperature T, where the mass m in volume V is calculated from the dependency

m = p V / (R T)

and the gas constant R is calculated from the mass concentrations g of the n species in the mixture. The mass of the charge consists of the fuel mass mf and the air mass ma, which means:

m = mf + ma

For a mixture that contains only air and fuel (in our case CNG), the equivalent gas constant is calculated as follows:

R = (mf Rf + ma Ra) / (mf + ma)

In simple calculations the local relative air-fuel ratio λ is obtained from the local concentrations of air and fuel:

λ = ma / (K mf)

where K is the stoichiometric coefficient for a given fuel. For the CNG applied during the experiments K = 16.04 [kg air/kg CNG]. With an assumed relative air-fuel ratio λ, the masses of fuel mf and air ma can be obtained from the following formulas:

mf = m / (1 + λ K),  ma = m λ K / (1 + λ K)

After substitution of the fuel and air masses into the equation for the equivalent gas constant above, R is defined as soon as λ is known. For the whole volume V the internal energy at the beginning of ignition is defined as:

U1 = m cv T1

The charge pressure during the compression process increases as a function of crank angle rotation from p1 to p. Knowing the engine's stroke S, the cylinder diameter D and the compression ratio ε, it is possible to determine the change of pressure from the starting point to any other point. If heat transfer is neglected, the pressure change in the cylinder can be obtained, as a function of time t and engine speed n (rev/min), from the simple formula

p(α) = p1 [V1 / Vc(α)]^k

where Vc is the volume of the cylinder at crank angle α and k is the specific heat ratio (cp/cv).
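As a small worked example of the energy balance above, the following sketch integrates the delivered power for the rectangular and triangular shapes discussed next. R, m and T1 are the values quoted later in the text; cv = R/(k - 1) with k = 1.36 is an assumption of this sketch.

```python
# Charge temperature near the spark plug from dU = dE at constant volume,
# for rectangular and triangular sparking power. R, m and T1 follow the
# values quoted in the text; cv = R / (k - 1) with k = 1.36 is assumed.
R, K = 296.9, 1.36
CV = R / (K - 1.0)                 # J/(kg K)
M, T1 = 0.465e-8, 726.0            # kg, K
E, TAU = 0.060, 2.0e-3             # J, s

def temperature(t, shape="rect", t_peak=0.5):
    """Temperature at time t for a given shape of the sparking power."""
    x = t / TAU
    if shape == "rect":                          # constant power E/TAU
        frac = x
    elif x <= t_peak:                            # triangular, rising flank
        frac = x * x / t_peak
    else:                                        # triangular, falling flank
        frac = 1.0 - (1.0 - x) ** 2 / (1.0 - t_peak)
    return T1 + frac * E / (M * CV)

# Both shapes reach the same end temperature (~16400 K for 60 mJ):
print(temperature(TAU), temperature(TAU, shape="tri"))
```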
For simplicity it was assumed that during the compression stroke the specific heat ratio is constant over a small period (k ≈ 1.36) and that the cylinder volume changes with the kinematics of the crank mechanism. Delivery of electrical energy to the local volume increases the local internal energy and changes the temperature T, which can be determined from the following energy equation:

m cv dT = dE

The electrical energy can be delivered in different ways: with a constant value during the time τ (rectangular form), or, closer to reality, in a triangular form, as shown in Figure 7. If the total electrical energy amounts to E and the sparking lasts τ (1.8 ms), then in the first case the local power is E/τ for the whole sparking period τ. In the second case the electrical power from the spark plug changes; for the first period it can be expressed as

P(t) = Pmax t / tmax,  0 ≤ t ≤ tmax

and for the second period as

P(t) = Pmax (τ - t) / (τ - tmax),  tmax ≤ t ≤ τ

The temperature of the charge near the spark plug during the period τ is computed as

T(t) = T1 + (1 / (m cv)) ∫0..t P(t') dt'

For the first case (rectangular form of the electrical power) the change of the charge temperature follows from

T(t) = T1 + E t / (τ m cv)

For the second case (triangular form of the power) the temperature of the local charge is calculated analogously by integrating the triangular power. Assuming the specific volumetric heat cv to be constant over the small period τ, the temperature of the local charge is simply obtained by integration of the above equations as a function of time t (t = 0..τ). The constant of integration C is calculated from the initial conditions at t/τ = tmax/τ, with the end temperature of the first period taken as the initial temperature of the second period. The three cases are presented in the nondimensional time t/τ. Because the compression stroke in a 4-stroke engine usually begins at α = 45° CA ABDC, the cylinder volume [3] can be calculated at crank angle αi as follows:

V(α) = Vc + (Vs/2) [1 - cos α + (1/λc)(1 - sqrt(1 - λc² sin²α))]

where λc is the crank constant. The simple calculations of the increment of the local temperature in the region of the spark plug were done with the following assumptions: swept volume of the cylinder - 450 cm3, compression ratio - 12, crank constant λc - 0.25, diameter of sparking region - 1 mm, height of sparking region - 1 mm, closing of the inlet valve - 45° CA ABDC, start angle of ignition - 20° CA BTDC. For the calculation the air-gas mixture was treated as an ideal gas (methane CH4 and air at λ = 1.4). Two ignition systems were considered, with ignition energies of 40 and 60 mJ, under the assumption of: 1. constant sparking power (rectangular form) in the period τ = 2 ms; 2. variable sparking power (triangular form) in the period τ = 2 ms. The results of the calculations are presented in Figure 8 for these two ignition systems, respectively. It was assumed that the compression process begins after the closing of the inlet valve with a constant polytropic exponent k = 1.36. At the moment of the sparking start the pressure in the cylinder amounts to 1.577 MPa at a temperature of 726 K.
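The compression state at the ignition point can be reproduced from the assumptions listed above. A minimal sketch (slider-crank kinematics plus polytropic compression) recovers the quoted 1.577 MPa and 726 K:

```python
# Cylinder volume from slider-crank kinematics and polytropic compression
# (k = 1.36), using the data listed above: Vs = 450 cm3, eps = 12,
# crank constant 0.25, IVC at 45 deg CA ABDC, ignition at 20 deg CA BTDC.
import math

VS, EPS, LAM, KP = 450.0, 12.0, 0.25, 1.36
VC = VS / (EPS - 1.0)                        # clearance volume, cm3

def volume(theta_deg):
    """Cylinder volume (cm3) at crank angle theta measured from TDC."""
    th = math.radians(theta_deg)
    s = 1.0 - math.cos(th) + (1.0 - math.sqrt(1.0 - (LAM * math.sin(th)) ** 2)) / LAM
    return VC + 0.5 * VS * s

v_ivc = volume(180.0 - 45.0)                 # inlet valve closes 45 deg ABDC
v_ign = volume(20.0)                         # ignition 20 deg BTDC
p_ign = 0.1 * (v_ivc / v_ign) ** KP          # MPa, from po = 0.1 MPa
t_ign = 350.0 * (v_ivc / v_ign) ** (KP - 1)  # K, from To = 350 K
print(f"p = {p_ign:.3f} MPa, T = {t_ign:.0f} K")  # ~1.577 MPa, ~726 K
```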
Theoretical consumption of air for the combustion of 1 Nm3 of natural gas amounts to 9.401 Nm3. For the given concentrations of air and fuel (CNG) in the mixture, the gas constant is R = 296.9 J/(kg K) and the calculated mass of the charge in the sparking region amounts to 0.465e-8 kg. As shown in both figures, the final temperature in the region is the same for the two considered variations of power. If the volume of the sparking region decreases, the local temperature will increase; however, ignition of the mixture depends on the concentration of the fuel in the air. The final temperature does not depend on the shape of the ignition power during sparking but only on the total energy released during the sparking. In the gap of the electrodes, at an ignition energy of 60 mJ, the mean temperature amounts to almost 17000 K after 2 ms, and at 40 mJ to 12000 K. This is enough to ignite the mixture.

Determination of thermal efficiency

Only a small part of the energy delivered from the secondary circuit is consumed by the gaseous medium, which is observed as an increase of the temperature T and thus also of the internal energy. The thermal efficiency of the ignition system is defined as the ratio of the increase of internal energy to the energy in the secondary circuit of the ignition coil:

ηth = ΔU / E2,  η0 = ηe ηth = ΔU / E1

where E1 is the energy in the primary circuit, η0 is the total efficiency and ηe is the electric efficiency of the ignition system. The increase of the internal energy in the volume V with initial pressure p1 can be determined as follows. Assuming a constant mass and individual gas constant R, the temperature after ignition can be defined from the gas state equation. For a small change of the gas temperature from T1 to T2, the volumetric specific heat cv has the same value. In this way it is possible to determine the increase of the internal energy:

ΔU = m cv (T2 - T1) = (cv / R) V (p2 - p1)

After simplification this equation takes the form:

ΔU = V Δp / (k - 1)

The increase of the internal energy depends on the sparking volume, the gas properties and the pressure increment in this volume. Because the volume is constant and R and cv are known, the only unknown value is the increment of the pressure Δp. The direct method of measurement uses a piezoelectric pressure transducer with high sensitivity and a high limit of static pressure. In this case we used the sensor PCB Piezotronics 106B51 (USA), together with the Energocontrol VibAmp PA-3000 amplifier. The arrangement of the chamber, with the mounting of the spark plug and the transducer, is presented in Figure 9. An additional (medium) chamber with a capacity of 200 cm3 is filled under a given pressure (shown on the manometer) from the pressure bottle. The caloric chamber is filled from this medium chamber through special needle valves. After sparking, the chamber was emptied by opening another needle valve. The needle valves were used in order to decrease the dead volume in the pipes connecting the chambers. The total volume was measured by filling the chamber with water and amounts to 4.1 cm3. The aim of the tests was to determine the amount of thermal energy delivered to the charge in the chamber after the sparking, i.e. to measure the pressure increment as a function of the initial pressure. For each point of a characteristic we carried out 10 measurements. Two types of electrodes were used for the tests: normal electrodes of 2.8 mm width and "thin" electrodes with 25% of the cross-section of the first type. The measurements were carried out in nitrogen and air at an initial pressure in the chamber corresponding to ambient conditions (overpressure 0 bar) and at 25 bar.
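The thermal efficiency then follows directly from the measured pressure increment. A minimal sketch, assuming a secondary-circuit energy near 69 mJ and an illustrative pressure rise so that the 1 bar case lands near the efficiency reported below:

```python
# Thermal efficiency of the ignition system from the measured pressure
# increment at constant volume: dU = V * dp / (k - 1), eta_th = dU / E2.
# The chamber volume is the 4.1 cm3 quoted above; E2 and the pressure
# rise are illustrative assumptions.

V_CHAMBER = 4.1e-6      # m3
K_N2 = 1.4              # specific heat ratio of nitrogen
E2 = 69.0e-3            # J, assumed energy in the secondary circuit

def thermal_efficiency(dp):
    """Fraction of E2 stored as internal energy of the chamber gas."""
    du = V_CHAMBER * dp / (K_N2 - 1.0)
    return du / E2

print(f"{thermal_efficiency(dp=87.0) * 100:.2f} %")   # ~1.29 %
```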
"thin" electrodes there is observed a bigger increment of the pressure than while using the spark plug with normal electrodes both at low as at high initial pressure, despite the delivered energy from the secondary circuit of the coil is almost the same.Increment of pressure inside the chamber caused by energy delivered from spark plug is shown in Figure 10 for initial pressure 1 bar and 25 bars and by application of the spark plug with "thin" and "thick" electrodes.The duration of the sparking lasted about 4 ms and after this time the decrement of the pressure is observed which is caused by heat exchange with walls of the caloric chamber.In every case at the end of ignition process the sudden increase of secondary voltage takes place.The current in the secondary circuit of the ignition coil increases rapidly to about 80 mA after signal of the ignition and then decreases slowly during 4 ms to zero as one shows in Figure 11 for all considered cases.Variation of voltage in the secondary circuit is shown in Figure 12.For the considered ignition coil one reaches maximum voltage 3000 V in the case of higher initial pressure 30 bar.In every case at the end of ignition process the sudden increase of secondary voltage takes place.Thermal energy delivered to the spark plug (in the secondary circuit) was determined by integration of instant electric power (multiplication of current and voltage) with small time step.For the case with "thin" electrodes and at 1 bar the thermal energy amounts only 0,89 mJ and thus the thermal efficiency is about th = 1,29% (Figure 13).For normal electrodes at the same pressure the thermal energy is very lower 0,36 mJ which causes a small thermal efficiency th = 0,51%.The thermal energy and thermal efficiency increases with the increase of the initial pressure. For the case with "thin" electrodes of the spark plug the thermal efficiency amounts 13.49%, on the other hand for normal electrodes only 6.93%.The tests were done for five ignition systems from BERU at different initial pressure (0 -25 bars) and linear approximation variations of the thermal efficiencies are shown in Figure 14.With increasing of the pressure in the caloric chamber much more energy is delivered from the electric arc to the gas.The measurements of the pressure increase during spark ignition were carried out also for the air and the same pressures.Figure 16 presents the increase of secondary voltage in the ignition coil with increasing of initial pressure in the caloric chamber.For nitrogen and leaner mixtures a higher secondary voltage in the coil was measured. Determination of energy losses during ignition The model of ignition process takes into account only a small part of the spark plug and is shown in Figure 16.During the sparking the plasma is formed between two electrodes and it is assumed to be smaller than the thickness of these electrodes.After short time a pressure shock takes place and the charge is moving on outer side with high velocity [1] [13].The energy delivered directly to the charge is very low and therefore the energy losses should be assessed.As the experimental test showed, only a small part of delivered energy is consumed to increase the internal energy of the charge (maximum 10%).The energy losses during the ignition process can be divided into several kinds: radiation, breakdown, heat exchange with electrodes, kinetic energy which causes the turbulence, electromagnetic waves, flash and others. 
Radiation energy of ignition

Part of the spark energy is consumed by radiation of the plasma kernel. The temperature T of the plasma between the two electrodes is above 6000 K. Assuming the Stefan-Boltzmann constant σ = 5.67e-8 W/(m2 K4) and a coefficient of emissivity ε of a grey substance [9] for the ignition arc, the specific heat radiation e can be obtained from the formula:

e = ε σ T^4

The emissivity of a light grey substance was given by Ramos and Flyn [4] in the range 0.2-0.4. In this case it was assumed that ε = 0.3. The total radiation energy is a function of the ignition core surface Ai and the sparking time ti:

Erad = ∫ ε σ Ai T^4 dt

Assuming that the temperature T of the arc varies in time between 6000 K and 300 K, the total radiation energy can be calculated accordingly. Assuming a cylindrical core with radius equal to the radius d/2 of the electrodes and height h equal to the gap of the electrodes, and also that the maximum temperature of the arc amounts to 6000 K after t1 = 20 µs and then decreases to 800 K after t2 = 2 ms, we can calculate the part of the coil energy lost as radiation. Because 20 µs is small compared with 2 ms, the radiation integral simplifies, with the surface of the plasma core amounting to Ai = π d h.

Ionization energy

Our experiment was carried out in nitrogen, and on the basis of literature data there are three ionization energies [7]: ei1 = 1402.3 kJ/mol, ei2 = 2856.0 kJ/mol, ei3 = 4578.0 kJ/mol. The energy required for the breakdown of the spark is the ionization energy that can later form the arc. The total ionization energy can be calculated for n moles of the gas (nitrogen) in the core of the plasma. The initial temperature T amounts to 300 K and the universal gas constant is (MR) = 8314 J/(kmol K). For higher pressure a proportionally higher ionization energy is required, and the same holds for lower temperature. Although the plasma is formed with a smaller radius, the ionization takes place in a larger volume with a radius about two times bigger.

Heat transfer to electrodes

A certain part of the energy delivered by the secondary circuit is consumed in heating the electrodes. In the short time of the sparking, the heat transfer takes place on a small area approximately equal to the cross-section of the electrodes with diameter d. The main task is to determine the heat transfer coefficient α between the gas and the metal. This value can be obtained from the Nusselt number Nu [2], the gas conductivity λp and a characteristic flow dimension, in this case the diameter of the electrode:

α = Nu λp / d

where Nu is obtained from the Reynolds number Re and the Prandtl number Pr. Ballal and Lefebvre [6], however, used the following expression for the Nusselt number in heat transfer calculations:

Nu = 0.61 Re^0.46

where the Reynolds number Re = u d / ν is built on the gas velocity u along the wall and the kinematic viscosity ν of the gas. The kinematic viscosity of the gas depends on the temperature T and the density ρ according to the relation ν = µ(T)/ρ. The conductivity of the gas is calculated on the basis of the Woschni [3] formula. Finally, the cooling energy is calculated from the equation:

Ecool = α A (T - Tel) t
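An order-of-magnitude sketch of two of these loss terms, using the grey-body radiation of the arc core and the Ballal-Lefebvre Nusselt correlation; the geometry, gas data and temperature differences below are illustrative assumptions, not measured values:

```python
# Order-of-magnitude estimate of two spark energy losses: grey-body
# radiation of the arc core (eps = 0.3) and heat transfer to one
# electrode face via Nu = 0.61 * Re**0.46. All inputs are illustrative.
import math

SIGMA = 5.67e-8          # W/(m2 K4), Stefan-Boltzmann constant
EPS_GREY = 0.3           # emissivity assumed in the text

def radiation_energy(d, h, t_arc, dt):
    """Radiated energy of a cylindrical arc core of diameter d, height h."""
    return EPS_GREY * SIGMA * math.pi * d * h * t_arc ** 4 * dt

def electrode_heat(d, u, nu_gas, lam_gas, dT, dt):
    """Heat lost to one electrode face, with alpha = Nu * lam / d."""
    re = u * d / nu_gas
    alpha = 0.61 * re ** 0.46 * lam_gas / d
    return alpha * (math.pi * d * d / 4.0) * dT * dt

e_rad = radiation_energy(d=1e-3, h=1e-3, t_arc=6000.0, dt=20e-6)
e_el = electrode_heat(d=2.8e-3, u=10.0, nu_gas=1.5e-5,
                      lam_gas=0.026, dT=5700.0, dt=2e-3)
print(f"E_rad ~ {e_rad * 1e3:.1f} mJ, E_electrode ~ {e_el * 1e3:.0f} mJ")
```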
Kinetic energy

Liu et al. [9] assumed that some fraction of the input energy is converted into kinetic energy of the turbulence, of the form

Ekin = (1/2) ρu u² (π d³ / 6)

where ρu is the density of the unburned gas, u is the entrainment velocity and d is the kernel diameter. Using this equation the kinetic energy can be calculated for the given value ρu = 1.403 kg/m3 and a pressure wave moving with mean velocity u [m/s]. The total kinetic energy during the ignition time tl (less than 2 ms) can be evaluated accordingly.

Ignition efficiency

The electric efficiency of the ignition system also determines its thermal resistance, because a lower efficiency means stronger heating of the coil body, which affects its durability. On the basis of the conducted tests, by measurement of the primary (state 1) and secondary (state 2) currents and voltages, it is possible to calculate the total electric efficiency of the ignition system. The total electric efficiency can be defined as follows:

ηe = ∫ U2 I2 dt / ∫ U1 I1 dt

The electric efficiency for the ignition system with the transistor ignition coil Beru No 0040102002 is shown in Figure 17. The test of energy efficiency was done with 6 probes for every measurement point. The electric efficiency is very small and at the assumed initial pressures does not exceed 30%. The rest of the energy goes into the surroundings in the form of heat. A lower efficiency is observed for nitrogen as the neutral gas. The input energy for all considered cases amounted to 210.74 mJ. The energy balance shows that the heat transfer to the electrodes consumes half of the delivered energy during the sparking process. Decreasing the cross-section of the electrodes to 25% of their initial value almost doubles the thermal efficiency, with a decrease of the heat transfer to the electrodes. The work done by Liu et al. [5] shows the discharge efficiency of different ignition systems; for a conventional spark ignition system this efficiency is below 0.1 (10%) despite the larger coil energy (above 100 mJ).
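The electric efficiency defined above can be checked with synthetic primary and secondary traces; the linear ramps below are placeholders for measured data, scaled so that the input energy matches the 210.74 mJ quoted in the text and the result stays below 30%:

```python
# Total electric efficiency eta_e = E2 / E1 with E = integral of U * I dt
# in each circuit. The ramps are synthetic stand-ins for measured traces.
import numpy as np

def circuit_energy(t, u, i):
    """E = integral of u(t) * i(t) dt for one circuit."""
    return np.trapz(u * i, t)

t1 = np.linspace(0.0, 4.4e-3, 500)           # dwell: primary charging
e1 = circuit_energy(t1, 12.0 * np.ones_like(t1), 8.0 * t1 / 4.4e-3)

t2 = np.linspace(0.0, 1.8e-3, 500)           # discharge: secondary circuit
e2 = circuit_energy(t2, 950.0 * np.ones_like(t2), 0.070 * (1 - t2 / 1.8e-3))

print(f"E1 = {e1*1e3:.0f} mJ, E2 = {e2*1e3:.0f} mJ, eta_e = {e2/e1*100:.0f} %")
```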
CFD simulation of ignition and combustion process of CNG mixtures

The propagation of the flame (temperature and gas velocity) depends on the instantaneous gas motion near the spark plug. The ignition process in SI gaseous engines was simulated in CFD programs (KIVA and Phoenics). The orientation of the electrodes with respect to the gas motion influences the spreading of the flame in the combustion chamber.

Propagation of ignition kernel

The propagation of the temperature during the ignition process depends on the gas velocity between the spark electrodes. The experimental tests show an absence of the combustion process in the engine without gas motion. The combustion process can be prolonged, with a large amount of hydrocarbons in the exhaust gases. The propagation of the temperature near the spark electrodes was simulated with the Phoenics code for a horizontal gas velocity of 10 m/s, taking into account heat exchange, radiation, ionization and the increase of the internal energy. The model of the spark ignition contained 40x40x1 cells, with two solid blocks as the electrodes and one block as the plasma kernel. The electrodes were heated during 1 ms with an energy of 8 mJ, as determined in the experimental tests. The propagation of the temperature near the spark electrodes is shown in Figure 19 for the two times 0.4 and 0.8 ms, respectively. The temperature inside the plasma grows as a function of the power in the secondary circuit of the coil, and the velocity of the charge propagates the temperature from the sparking arc outwards from the plasma. The temperature inside the plasma kernel reaches about 13000 K.

CNG ignition process in caloric chamber

The first step of the experimental tests was the observation of the ignition of a mixture of CNG and air in the caloric chamber; the second step was the simulation. The cylinder model has diameter D = 34 mm and height B = 22 mm. The volume of the chamber corresponds to the minimal volume of the combustion chamber of an engine with displacement 260 cm3 and compression ratio 14. A very high level of final pressure (about 180 bar) is obtained after burning of the whole dose (Figure 20). The rate of increase of the mean charge temperature inside the caloric chamber depends on the value of the fuel dose (Figure 21); for a bigger dose a quicker increase of the temperature is observed. The dose of fuel influences the variation of all thermodynamic parameters. The initiation of the combustion process lasted about 0.5 ms for all doses of the fuel. The complete combustion of all doses of the fuel, without swirl and tumble, follows after 4 ms under the assumption of heat transfer to the walls. The four frames in Figure 22 show the almost spherical spreading of the flame in the caloric chamber from the spark plug to the walls. The maximum temperature near the spark plug amounts to almost 3600 K and after the combustion process decreases to 2700 K.

Verification of ignition modelling

The initial simulations of the CNG combustion were carried out on a model of the chamber used for the experimental tests on the Schlieren stand, with steady-state initial conditions. The chamber had a volume of 100 cm3 with diameter D = 80 mm and width B = 20 mm. The ignition was initiated in the centre of the chamber by two thin electrodes.
The chamber was filled with natural gas at 5 bar and λ = 1.4. The initial temperature of the charge amounted to 300 K, so the ignition required much more electrical energy than in a firing engine. The combustion process involves a change of the thermodynamic parameters of the gas, which can be observed through the motion of the flame, with different temperature, pressure and density in the burned and unburned regions. Full combustion of the methane-air mixture lasts longer than in the real engine combustion chamber with the same geometry of the combustion chamber. The propagation of the chemical reactions is radial, and a thick combustion boundary (about 8 mm) is observed because of the lean mixture. The propagation of the flame causes radial compression of the gas between the unburned and burned regions, and a thin layer of twice higher density is formed. Figure 24 shows the distribution of the gas density in the chamber 18 ms after the start of ignition. The red colour indicates a density level of 0.0118 g/cm3 and the blue colour only 0.005 g/cm3.

Figure 24. Gas density and absolute gas velocity after 18 ms from the beginning of ignition

The combustion process in the narrow area takes place with turbulent velocity. Turbulence causes penetration of the flame into the unburned mixture with a velocity higher than the laminar combustion speed. For the stoichiometric methane-air mixture the laminar combustion speed amounts to only 40 cm/s. For the considered case the absolute gas velocity in the flame region amounts to about 80 m/s, as shown in Figure 24. However, the total combustion speed is very low and is close to the laminar speed of the methane-air mixture, 0.4 m/s. Experimental tests on the Schlieren stand done by Sendyka and Noga [11] also showed radial propagation of the flame, identified by the change of the charge density. Figure 25 shows images of the flame propagation in the chamber at 3, 7, 40 and 54 ms after the start of ignition, respectively. The ignition of the CNG and air mixture with initial pressure 5 bar and initial temperature 300 K was initiated by two thin electrodes in the centre of the combustion chamber. The charge was fully premixed with air excess ratio λ = 1.

Mixture motion and ignition

The most important factor influencing the ignition is the charge motion through the spark plug gap. Two kinds of motion were considered: swirl and tumble, caused by the valve and inlet profile, the combustion chamber and squish. The combustion process is strongly connected with the turbulence of the charge, and the laminar speed is only a small part of the total combustion velocity. The simulation was carried out in a rectangular space with a central location of the spark plug. The mesh of the combustion chamber model, with length and width 5 cm and height 3 cm, was divided into 288000 rectangular prism cells (NX=80, NY=80 and NZ=45). The calculations were carried out in transient conditions (initial time step 1e-6 s, total time t = 5 ms). The spark plug was located in the centre of the calculation space and the geometry of the electrodes was created in a CAD system. The mesh in the region of the spark plug electrodes contains fine grid cells with length 0.3 mm in the x and y axes.
First, the ignition of CNG was simulated with an "initial tumble" ωy = 250 rad/s and p = 20 bar. The charge flowed with a velocity of about 15 m/s through the gap of the spark plug, causing the propagation of the flame inside the chamber. The simulation of combustion and gas movement was also carried out with Phoenics, which takes into account a turbulence model and simple combustion of a compressible fluid. The charge motion is connected with high turbulence, and this also causes a higher combustion rate. The distribution of the combustion products in the modelled space is shown in Figure 26 at 0.5 ms and 1.2 ms after the start of ignition, respectively. After a short time (about 1 ms) the whole charge in the calculation space is burned. The highest flow velocity is between the electrodes of the spark plug. The other simulation was carried out for a central swirl around the spark plug with a swirl velocity of 15 m/s at the mean radius of 1.5 cm. In this case the interaction of the electrode shape is seen: the propagation of the flame is faster on the open side of the electrodes. Figure 27 presents the development of the combustion process 1 and 4 ms after the beginning of ignition. The swirl in the chamber leads to irregular propagation of the flame and extends the combustion process. Even after 4 ms the combustion of the methane is not complete. The velocity of the gas flow in the spark plug gap is smaller than in the "tumble" case. For this reason the propagation of the combustion products and of the flame is not uniform.

Conclusions

The chapter contains results of theoretical, modelling and experimental work on the factors which have a very big impact on the ignition of gaseous fuels in combustion engines. Given the more and more important role of gaseous engines, particularly those fuelled by natural gas, the definition of good conditions for the ignition of gaseous fuels is one of the tasks in the development of modern spark ignition gaseous engines, particularly those with a high charging ratio. Experimental work on CNG ignition was done in the caloric chamber, in conditions close to the real conditions of engine work. On the basis of the presented considerations one can draw the following conclusions and remarks:
1. Gaseous fuels such as CNG require a higher electric energy delivered by the ignition system. A higher pressure in the combustion chamber increases the internal energy near the spark plug and also requires a higher secondary voltage of the ignition coil. For leaner gaseous mixtures an ignition system with higher energy is needed (above 60 mJ).
2. A higher initial pressure increases the thermal efficiency of the ignition system.
3. For conventional ignition systems, even with a high secondary energy above 60 mJ, only a small part of it, at most about 15%, is consumed by the charge.
4. The maximum thermal efficiency, 13.5%, was obtained at an initial pressure of 25 bar for the spark plug with thin electrodes, against only about 1% at ambient pressure and temperature.
5. The spark plug with thin electrodes shows a higher thermal efficiency than the spark plug with normal electrodes. This is caused by the smaller heat exchange with the electrode walls.
6. The energy losses consist of heat exchange, ionization energy (breakdown), radiation and others. The biggest of them are the heat transfer to the spark electrodes and radiation.
7. On the basis of CFD simulation it was shown that the nature of the mixture motion (tumble or swirl) in the combustion chamber influences the propagation velocity of the ignition kernel and the combustion process.
8.
Ignition in a CNG diesel engine can be caused by the injection of a small ignition dose of diesel oil.

Figure 1. Heat release rate in the dual fuel Andoria 1HC102 diesel engine fuelled by CNG and an ignition dose of diesel oil (index ON - diesel oil, CNG - natural gas)
Figure 2. Mass variation of natural gas in the Andoria 1HC102 diesel engine fuelled by CNG and an ignition dose of diesel oil (index do - diesel oil, CNG - natural gas)
Figure 4. Influence of ignition angle advance on the engine torque
Figure 6. The secondary voltage as a function of compression pressure and electrode gap
Figure 7. Variation of electrical power from the spark plug
Figure 8. Increment of the local temperature in the region of the spark plug for two ignition systems: a) with constant sparking power, b) with variable sparking power (triangular form)
Figure 9. Scheme of the direct pressure measurement in the caloric chamber
Figure 10. Pressure increment in the caloric chamber filled with nitrogen at initial pressures of 1 and 25 bar, for spark plugs with "thin" and "thick" electrodes
Figure 11. Secondary current in the coil during ignition in the caloric chamber filled with nitrogen at initial pressures of 1 and 25 bar, for spark plugs with "thin" and "thick" electrodes
Figure 12. Secondary voltage in the coil during ignition in the caloric chamber filled with nitrogen at initial pressures of 1 and 25 bar, for spark plugs with "thin" and "thick" electrodes
Figure 14. Thermal efficiency of five tested ignition systems
Figure 16. Model of spark ignition
Figure 17. Electric efficiency of the ignition system for two mixtures and nitrogen
Figure 18. Balance of energy in the conventional ignition system for 2 types of electrodes
Figure 19. Temperature in the charge during ignition after 0.4 and 0.8 ms
Figure 20. Increment of the pressure during combustion in the caloric chamber

Prediction of the mixture parameters in the chamber during the combustion process was carried out using the open-source KIVA3V code [4]. The complex test was conducted for 3 doses of CNG: 0.035, 0.04 and 0.045 g, which correspond to air excess coefficients λ of 1.58, 1.38 and 1.23, respectively, at an initial pressure of 40 bar and a temperature of 600 K. Under the assumed high compression pressure in the caloric chamber, a very high level of final pressure was obtained.

Figure 21. Variation of the temperature in the caloric chamber for different doses of CNG

The ignition energy was simulated as additional internal energy in the centre of the combustion chamber. The LES model for a fully premixed charge was used in the open-source CFD program OpenFOAM. The classical idea is to use a filter which allows for the separation of large and small length scales in the flow field. Applying the filtering operator to the Navier-Stokes equations provides a new equation governing the large scales, except for one term involving the small velocity scales. The model of the combustion chamber was created from hexahedron cells and contained 68x68x32 cells. Calculations of the combustion process were carried out on a 64-bit Linux system with visualisation of the results in the Paraview software. The combustion process in the chamber lasted a long time (above 50 ms) because of the absence of gas motion. The oxidation of methane was simulated by the OpenFOAM combustion procedure in the Xoodles module. Thermodynamic properties of the charge were calculated using the JANAF tables. The increase of pressure in the flat combustion chamber without initial swirl or "tumble" of the charge is shown in Figure 23.

Figure 23. Increase of pressure in the chamber after ignition

The flame is distorted where it touches the quartz glass of the chamber, which is seen as a bright circle inside the black circle. The change of gas density distorts the laser beam, and the photos show the development of the flame during the combustion process. The experimental test confirms the result obtained from the simulation with the LES combustion model in the OpenFOAM program.

Figure 26. Combustion products with initial "tumble" charge motion after 0.5 ms and after 1.2 ms

Table 1. Ignition temperatures of the fuels in the air (mean values)
Table 2. Ignition limits and ignition temperatures of the most important technical gases and vapours in the air at a pressure of 1.013 bar

The composition and properties of the natural gas used in the experimental tests are presented in Table 3.

Table 3. Properties of the natural gas used in the experimental research
2017-09-17T18:45:51.676Z
2012-11-14T00:00:00.000
{ "year": 2012, "sha1": "85af0b925829f649027ccf1d208d2e31b61d5513", "oa_license": "CCBY", "oa_url": "https://doi.org/10.5772/48306", "oa_status": "HYBRID", "pdf_src": "ScienceParseMerged", "pdf_hash": "7ff294ce96370744fde2d04193bc2b1f9494bd53", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [ "Computer Science" ] }
118496975
pes2o/s2orc
v3-fos-license
Observation of a New Magnetic Response in 3-Dimensional Split Ring Resonators under Normal Incidence

So far, research in the field of metamaterials has been carried out largely with arrays of flat, 2-dimensional structures. Here, we report a newly identified magnetic resonance in Split Cylinder Resonators (SCRs), a 3-dimensional version of the Split Ring Resonator (SRR), which were fabricated with the Proton Beam Writing technique. Experimental and numerical results indicate a hitherto unobserved 3-dimensional resonance mode under normal incidence at about 26 THz, when the SCR depth is approximately half the free space wavelength. This mode is characterized by strong currents along the cylinder axis which are concentrated at the cylinder gaps. Due to their orientation, these axial currents give rise to a magnetic response under normal incidence, which is not possible in shallow SRRs. Our results reveal new behavior in the SRR structure which arises from a change in its aspect ratio. Such new resonances can have a significant influence on the quest for practical, 3-dimensional metamaterials.

Metamaterials are artificial composites which enable magnetism to be achieved over a range of frequencies [1,2,3]. There has been a great deal of research on metamaterials in recent years. The split ring resonator (SRR) first proposed by Pendry et al [1] has played a pioneering role in this research. SRRs consist of two concentric metallic rings with gaps situated oppositely. This design allows resonances where inductive currents circulate along the rings in conjunction with capacitive charge accumulation at the gaps. These circular currents, when excited by an external oscillating magnetic field, result in a magnetic response and thus considerably influence the effective permeability (µ) of the material. This can result in a negative effective µ over a frequency range close to the resonance [1,4]. The circular currents can also be excited by an oscillating electric field parallel to the gap sides of the SRR [5,6]. We shall refer to resonances with circular currents as LC resonances. SRRs are also shown to have an electrical response similar to that of cut wires [7]. These electrical resonances are due to antenna-like couplings between the SRRs and the incident electric field and can result in a region of negative effective electric permittivity (ε). To date, SRRs have been experimentally studied over a wide frequency range, from the low GHz [4,8], to the THz regime [2,9,10], and finally to the near infrared [3,11]. Obtaining a magnetic response from SRRs requires the presence of a significant magnetic field normal to the SRR plane. Thus the incident radiation must propagate along (or obliquely to) the SRR plane. The lack of a magnetic response from SRRs under normal incidence has been a stumbling block to their use in the construction of practical materials. For practical applications, planar arrays of SRRs need to be stacked to give greater width perpendicular to the propagation direction. For example, Shelby et al stacked printed circuit boards to fabricate a prism used to demonstrate negative refraction in the Gigahertz range [8]. At higher frequencies, where nanofabrication techniques are needed, stacking becomes challenging. However, some effort has been made in this direction. Katsarakis et al fabricated a metamaterial of 5 layers of single split rings (SSRs) resonating in the far infrared regime (∼6 THz) [12].

* Electronic address: phycsy@nus.edu.sg
Liu et al reported stacking 4 or more layers of sub-micron SSRs operating at about 100 THz [13]. The processes used in these works resulted in split ring structures separated by dielectric spacers. Some authors have applied 3-dimensional lithography techniques to fabricate nanostructures capable of coupling with the external magnetic field under normal incidence. For example, Zhang et al have used a process based on interference lithography which resulted in Au "staples" deposited on a pitch grating [14]. Removal of the pitch grating resulted in Au staples standing upright on the substrate. These structures exhibited a magnetic response under normal incidence when the incoming magnetic field is perpendicular to the plane of the staples. Very recently, Rill et al fabricated a planar structure consisting of connected, elongated SRRs as a starting point for a stacked, "woodpile" structure of elongated SRRs [15]. Another possible alternative approach is to fabricate, in a single lithography step, high aspect ratio structures with great depth perpendicular to the lithography plane. Recently, Casse et al have used deep X-ray lithography for this purpose, demonstrating its application for resist 200 µm thick [16]. Such an approach would avoid some of the difficulties, such as alignment issues, arising from layering techniques. Furthermore, deep structures allow currents to flow vertically (perpendicular to the lithography plane). This can result in distinct resonant modes unavailable in stacked, 2-dimensional structures separated by spacers. Here, we report on the fabrication of single layers of very deep Split Ring Resonators (SRRs) using a focused, sub-micron MeV proton beam. These Split Cylinder Resonators (SCRs) have excellent sidewall quality and high aspect ratio. We also present results from spectral measurements made using Fourier Transform Infra-Red (FTIR) Spectroscopy as well as simulated results obtained using the commercially available Microwave Studio TM software. Our experimental and numerical results give evidence of a hitherto unobserved magnetic resonance (∼26 THz) under normal incidence, which is not possible with conventional SRRs. A number of previous works have studied the effect of SRR depth on their LC resonance [17,18,19], although not at the high aspect ratios of this current work. Here, we fabricated and characterized 2 SCR samples with depths in excess of their ring diameters. For comparison, we also fabricated a regular SRR sample of lower depth, as well as closed rings and closed cylinders (where the gaps in the SRRs are eliminated). Fabrication work was carried out at the Center for Ion Beam Applications at the National University of Singapore using a fabrication process based on the direct write Proton Beam Writing (PBW) technique [20,21]. Si wafers were first sputtered with thin (∼20 nm) layers of chrome (Cr) and gold (Au), which served respectively as adhesion and electroplating seed layers. Proton Beam Writing was then used to write a latent image in polymethylmethacrylate (PMMA) resist spin coated onto the Si wafers. PBW utilizes a highly focused Megaelectronvolt proton beam to write latent images in resist. As protons maintain relatively straight tracks through tens of microns of resist, the technique allows fabrication of deep structures with vertical sidewalls.
After writing an image of sufficient depth in the PMMA resist, we used an electroplating step to define the SRR structures in gold, using the resist as an electroplating mould. This allows higher aspect ratios than evaporation or sputtering techniques. The depth of the SRRs in this case is determined by the plating current and time. Care was taken to avoid overplating. The resist was then chemically stripped after serving its function as a plating mould. To prevent shorting of the SRRs, the sputtered Au and Cr layers were chemically etched off. Au etching times were carefully controlled to prevent damage to the Au SRRs, and a highly specific chemical etch was used for the Cr layer. Individual SRR structures have critical dimensions of around 350 nm and depths up to 5 µm (Figure 1). The unit cell of each SRR is 3.2 µm, with each array measuring 200 µm by 200 µm. Each array thus contains over 4000 individual SRRs. The samples were characterized using a Bruker Hyperion 2000 IR microscope coupled to a Bruker IFS 66v/S Fourier Transform Infrared spectrometer (FTIR). Spectra at normal incidence were collected in reflection as well as transmission mode under different polarizations, using the bare silicon substrate as reference. The beam spot covered almost the entire array of 4000 structures. A KBr beamsplitter and a mid-band MCT infrared detector cooled to 77 K were used. The numerical aperture of the Schwarzschild infrared objective used was 0.4, corresponding to a maximum conical incidence angle of 23°. Simulations were carried out using the commercial Microwave Studio TM software package. Due to the very tight packing of our arrays, which must lead to significant coupling between individual structures, we found that simulating a single unit cell resulted in spectra that were slightly red shifted relative to experimental results. Simulations were thus carried out for a four by four array of SRRs, where the top row of SRRs (i.e. those without gap-side neighbors) had their gaps closed. Closing the gaps of the top row destroyed their resonance and suppressed their red-shifted spectral contributions. The simulation domain had perfect electric and perfect magnetic conductor boundary conditions for the sides and open boundary conditions for the ends. We modeled gold as a lossy metal with conductivity σ = 4.09 × 10^7 S/m. The electrical permittivity of the silicon substrate is taken to be 11.6 [22] with loss tangent δ = 4 × 10^-3. The measured spectral response (with the sample plane normal to the beam axis) of two SRR samples and their closed ring versions are shown in Figure 2. A prominent reflection dip is seen at about 26 THz for the 5.6 µm deep SRR sample. This dip is strongest under parallel polarization, when the electric field is parallel to the gap sides. The origin of the reflection dip in the deep SRRs is the main focus of this letter. Being present only in SRRs under parallel polarization and absent in closed rings, it appears to be associated with an LC resonance. However, LC resonances in SRRs under normal incidence typically result in transmission dips, instead of reflection dips. Normal incidence results in there being no magnetic field normal to the rings. In shallow SRRs, the LC resonance can only couple to the external electric field, influencing solely the behavior of ε [6]. A region of negative ε without negative µ leads to a stop band. To further investigate the nature of the reflection dip, we measured both the reflection and transmission (Figure 3) for an additional SRR sample with a depth of 4.8 µm.
This revealed that the reflection dip is accompanied by a corresponding transmission peak, clearly indicating the presence of a passband. We also observe that the resonant frequency shifts downwards with increasing SRR depth. This trend, which was captured in our simulations, is also unexpected. Previous experimental studies have shown that the frequency of LC resonances shifts upward with depth [18,19]. These observations indicate that the reflection dips are due to a resonance other than the regular LC resonance. Figure 4 shows simulated current and field snapshots for SRRs of depths 1.0 µm and 4.8 µm under parallel polarization. The frequency for both snapshots is 27.3 THz, where the deeper SRR shows a reflection dip. For the 1.0 µm SRR, the current does not show the characteristic circular pattern of LC resonances and is driven mainly by the electric field, with no coupling to the external magnetic field. In the 4.8 µm deep SRR, we observe circular currents along the cylinder circumference, as well as strong currents flowing along the cylinder axis. The axial currents, which are concentrated at the gaps, flow up on one side and down the other. This gives rise to enhanced gap fields and results in a magnetic response in deep SRRs. This feature is absent in the shallow SRRs. We explain the axial currents as being induced by the external magnetic field passing through the gap. Its time derivative creates an electromotive force in a 3-dimensional loop formed by the gap edges and the cylinder circumference at both ends. In the simulations of Figure 4, the SRRs are attached to a silicon substrate. This duplicates the experimental results well but gives rise to a current asymmetric about the mid-depth plane. To see the undistorted resonant current, Figure 5 shows simulated currents (and a schematic diagram) for free-space SRRs at the corresponding resonance at 37 THz. We observe opposing circular currents at either end of the split cylinder, with axial gap currents. The combined currents lead to charge accumulation at the gap corners, leading to electric field enhancement across the gaps. The result is thus a simultaneous concentration of electric and magnetic fields in the same region of space between the gaps. This contrasts with the case of regular LC resonances in SRRs, where the electric field is enhanced across the gaps and the induced magnetic field is normal to the plane of the SRRs. In conclusion, our high aspect ratio PBW technique has allowed the fabrication of deep SRRs. These structures have sub-micron minimum feature size and depths of several µm. We observed in these Split Cylinder Resonators (SCRs) a magnetic resonance at 26 THz with distinct 3-dimensional currents. This resonance is characterized by strong axial currents and occurs when the SRR depth is approximately half the wavelength of the incident radiation. Here, we have demonstrated that stretching the aspect ratio of the well-studied SRR structure can lead to the appearance of an entirely new resonant mode. Such resonances can also be expected in other metamaterial structures and will have an impact on attempts to design 3-dimensional metamaterials for practical use. The possibility of obtaining a magnetic response from SCRs under normal incidence will create new flexibility and possibilities in the design of metamaterials.
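The half-wavelength condition stated above admits a quick numerical check: a cavity-like axial mode spanning a depth d would be expected near f = c/(2d). This is only a rough free-space estimate; the silicon substrate and the detailed gap geometry shift the measured values.

```python
# Rough check of the half-wavelength condition: f = c / (2 d) for an
# axial mode spanning the cylinder depth d (free-space estimate only;
# the substrate red-shifts the measured resonances).
C = 299_792_458.0                      # m/s, speed of light

def resonance_freq_thz(depth_m):
    return C / (2.0 * depth_m) / 1e12

print(resonance_freq_thz(5.6e-6))      # ~26.8 THz for the 5.6 um sample
print(resonance_freq_thz(4.8e-6))      # ~31.2 THz for the 4.8 um sample
```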
2008-08-12T06:38:59.000Z
2008-07-29T00:00:00.000
{ "year": 2008, "sha1": "09862ae29fd0987967af6ca93615b4d1416464ae", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "09862ae29fd0987967af6ca93615b4d1416464ae", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Materials Science", "Physics" ] }
1362221
pes2o/s2orc
v3-fos-license
On Shapley Ratings in Brain Networks

We consider the problem of computing the influence of a neuronal structure in a brain network. Abraham et al. (2006) computed this influence by using the Shapley value of a coalitional game corresponding to a directed network as a rating. Kötter et al. (2007) applied this rating to large-scale brain networks, in particular to the macaque visual cortex and the macaque prefrontal cortex. Our aim is to improve upon the above technique by measuring the importance of subgroups of neuronal structures in a different way. This new modeling technique not only leads to a more intuitive coalitional game, but also allows for specifying the relative influence of neuronal structures and a direct extension to a setting with missing information on the existence of certain connections.

INTRODUCTION

In this paper we consider the problem of computing the influence of a neuronal structure in a brain network. The aim of this paper is to improve upon the techniques underlying the methodology proposed by Abraham et al. (2006). Abraham et al. (2006) considered a coalitional game in which the worth of a coalition of vertices, the neuronal structures, is defined as the number of strongly connected components in its induced subnetwork within the whole brain network. Subsequently, Abraham et al. (2006) computed the influence of a neuronal structure in a brain network by using the Shapley value of this coalitional game as a rating. Kötter et al. (2007) applied this rating to large-scale brain networks, in particular to the macaque visual cortex and the macaque prefrontal cortex, based on real-life data of Young (1992) and Walker (1940). In this paper we introduce an alternative coalitional game which in our opinion has several advantages. First of all, by satisfying superadditivity the game is more intuitive from a game theoretical point of view. Secondly, using the Shapley value of this game as an alternative rating allows directly specifying the relative influence of neuronal structures. We apply our alternative rating model to the brain networks considered by Kötter et al. (2007) and, generally speaking, our results corroborate the findings of Kötter et al. (2007). Finally, a third advantage of the alternative approach is related to missing information on possible connections in a brain network. As this is a common problem, as argued by Kötter and Stephan (2003), we illustrate how our alternative approach allows for a direct incorporation of probabilistic considerations regarding missing information on the existence of certain connections.

SHAPLEY RATINGS IN BRAIN NETWORKS

A brain network is a directed graph (N, A) where N is a set of vertices, representing a set of neuronal structures, and A is a set of arcs, representing the connections between the neuronal structures. Let Ā denote the set of all ordered pairs (i, j) of vertices in N for which there exists a directed path from i to j in (N, A). A graph (N, A) is called strongly connected if for every two vertices i and j in N there is a directed path from i to j and from j to i in (N, A), i.e., if Ā contains all ordered pairs in N. The induced subgraph (S, A[S]) is a graph where a subset S ⊆ N is the set of vertices and A[S] is the set of arcs consisting of any arc in A whose starting and end points are both in S. A strongly connected component is a maximal induced subgraph which is strongly connected, i.e., there is no other strongly connected subgraph containing this strongly connected component.
Let SCC(N, A) denote the number of strongly connected components in the graph (N, A). A coalitional game is a pair (N, v) where N denotes a nonempty, finite set of players and v is a function which assigns a number to each subset S ⊆ N (also called a coalition). By convention, v(∅) = 0. Abraham et al. (2006) introduced a coalitional game (N, w_A) corresponding to a brain network (N, A) defined by

w_A(S) = SCC(S, A[S])

for all S ⊆ N. Hence, the worth of a coalition in w_A is defined by the number of strongly connected components in its induced subgraph. Alternatively, we define the brain network game (N, v_A) corresponding to (N, A) by

v_A(S) = |Ā[S]|

for all S ⊆ N, where Ā[S] consists of the ordered pairs (i, j) of vertices in S for which there exists a directed path from i to j in (S, A[S]). (This instance of a brain network is also used in Example 1 in Section 3.1 of Moretti (2013).) Hence, the worth of a coalition S in v_A is defined by the number of such connected ordered pairs. A basic property for coalitional games is superadditivity. A coalitional game is called superadditive if breaking up a coalition into parts does not pay, i.e., if

v(S ∪ T) ≥ v(S) + v(T)

for all S, T ⊆ N with S ∩ T = ∅. From a game theoretical perspective it is desirable that coalitional games satisfy this basic property, since it provides a clear incentive for cooperation in the grand coalition and thus provides a motivation to focus on fairly allocating the worth of the grand coalition. Unfortunately, this property is not satisfied by the coalitional game (N, w_A). This is illustrated in the following example.

Example 2.2. Reconsider the brain network (N, A) presented in Example 2.1. The worth of every coalition in the games (N, w_A) and (N, v_A) can be computed from the induced subgraphs. Note that (N, w_A) is not superadditive, while it is readily checked that (N, v_A) is superadditive. △

In contrast to the coalitional game (N, w_A), we show in the following proposition that the brain network game (N, v_A) does satisfy superadditivity.

Proposition 2.1. Let (N, A) be a brain network. Then, the brain network game (N, v_A) is superadditive.

The Shapley value φ(v) of a coalitional game (N, v) assigns to each player i the payoff

φ_i(v) = Σ_{S ⊆ N\{i}} p_S [v(S ∪ {i}) − v(S)],  with p_S = |S|! (|N| − |S| − 1)! / |N|!

Hence, the Shapley value looks at the marginal contributions of a player to all possible coalitions. The weight p_S is such that all marginal contributions are weighted adequately to obtain an efficient allocation of the worth of the grand coalition. In the context of coalitional games corresponding to brain networks, the Shapley value can be interpreted as a measure for the influence of a neuronal structure. Abraham et al. (2006) considered the Shapley value φ(w_A) as a rating for the neuronal structures in a brain network. Similarly, we consider the Shapley value φ(v_A) as a rating. The Shapley ratings φ(w_A) and φ(v_A) = (2 1/6, 4 1/6, 2 5/6, 2 5/6) both determine a ranking (2, 3, 4, 1) or (2, 4, 3, 1) (there is a tie for the second highest ranking). We note that a lower Shapley rating in w_A indicates a higher influence in a brain network. On the contrary, a higher Shapley rating in v_A indicates a higher influence. Since a Shapley rating in w_A can be negative, as is the case in this example, it is not possible to determine the relative influence of two vertices on the basis of φ(w_A). On the other hand, a Shapley rating in v_A cannot be negative by definition, because of superadditivity. Therefore, using φ(v_A), we can say that the influence of vertex 2 in the brain network (N, A) is almost twice as large as the influence of vertex 1. △
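For small networks the game v_A and its Shapley rating can be computed by exact enumeration. A minimal sketch in Python; the 4-vertex arc set at the bottom is hypothetical, since the arcs of Example 2.1 are not reproduced here.

```python
# A sketch of the brain network game v_A and its Shapley rating.
# v_A(S) counts ordered pairs (i, j) in S joined by a directed path in
# the induced subgraph (S, A[S]); the Shapley value is computed by
# exact enumeration over all coalitions.
from itertools import combinations
from math import factorial

def v_A(vertices, arcs):
    """Number of ordered pairs of `vertices` linked by a directed path."""
    reach = {v: {j for (i, j) in arcs if i == v and j in vertices}
             for v in vertices}
    changed = True
    while changed:                      # transitive closure by iteration
        changed = False
        for v in vertices:
            new = set(reach[v])
            for w in reach[v]:
                new |= reach[w]
            if new != reach[v]:
                reach[v], changed = new, True
    return sum(len(reach[v] - {v}) for v in vertices)

def shapley(players, value):
    """Exact Shapley value of the coalitional game `value` on `players`."""
    n = len(players)
    phi = dict.fromkeys(players, 0.0)
    for i in players:
        others = [p for p in players if p != i]
        for k in range(n):
            for s in combinations(others, k):
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += w * (value(set(s) | {i}) - value(set(s)))
    return phi

N = {1, 2, 3, 4}
A = {(1, 2), (2, 3), (3, 1), (1, 4)}    # hypothetical brain network
print(shapley(N, lambda s: v_A(s, A)))
```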
Using certain probabilistic knowledge about these unknown connections, this lack of information can readily be incorporated in the brain network game. We assume that each possible arc (i, j) is present with probability p_ij ∈ [0, 1]. Clearly, for each arc that is known to be present we set p_ij = 1 and for each arc that is known to be absent we set p_ij = 0. All probabilities are summarized in a vector p. Given such a vector p, we define the stochastic brain network game (N, v_p) in which the worth of a coalition equals the expected (in the probabilistic sense) number of ordered pairs for which there exists a directed path in its induced subgraph. Without providing the exact mathematical formulations, the following example illustrates how to explicitly determine the coalitional values in a stochastic brain network game. Example 2.4. Reconsider the brain network presented in Example 2.1, only now suppose that the arcs (1, 4) and (3, 1) are present with probability p_14 and p_31, respectively. The complete corresponding vector p can be found below. The entire Shapley rating Φ(v_A) of the macaque visual cortex can be found in Figure A1 in the appendix. Correspondingly, we can roughly divide the brain regions into five classes based on the relative difference with the brain region with the highest Shapley rating. We consider the following five classes based on the differences in terms of percentage: 0-5%, 5-10%, 10-15%, 15-20%, and 20% and higher. The first class consists of the single brain region V4, which has the highest Shapley rating. The second class consists of the brain regions FEF to TF, as ordered in Figure A1, that differ 5-10% from V4. The brain regions in the third class are MSTd to V3, in the fourth class we have MSTI to PITd, and in the fifth class we have the single brain region VOT, with a relative influence which is 23% lower than that of V4. The second large-scale brain network is the macaque prefrontal cortex with twelve neuronal structures, as illustrated in Figure 3A of Kötter et al. (2007) [cf. Walker (1940)]. In this case there is a lack of information about the presence or absence of nine connections. To get some insight, Kötter et al. (2007) considered two extreme cases. First, they assume that connections with unknown presence are absent. Second, they assume that those connections are present. For both extreme cases the Shapley ratings are calculated separately. Our stochastic brain network game provides a way to incorporate the lack of information into one Shapley rating on the basis of probabilistic information. For simplicity, we assume that each connection with unknown presence is absent with probability 1/2. Note that, in case more information becomes available, more adequate probabilities can readily be inserted. Having the complete vector p of arc probabilities, one readily computes the corresponding stochastic brain network game (N, v_p) and the corresponding Shapley rating Φ(v_p). The ranking based on the Shapley rating Φ(v_p) can be found below. AUTHOR CONTRIBUTIONS MM is the first author and the corresponding author. MM was involved in all stages: the early research process, the programming process and the writing process. BD and PB contributed to the early research process and later to the process of commenting on the work written by MM.
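The expected worths of the stochastic brain network game can be obtained by enumerating all realizations of the uncertain arcs, as in the following sketch. The network, the choice of uncertain arcs, and the probability 1/2 are illustrative assumptions only; for m uncertain arcs the enumeration visits 2^m realizations, so for larger networks a Monte Carlo approximation would be the natural substitute.

```python
# Illustrative sketch: expected worth of a coalition in the stochastic game v_p.
from itertools import product
import networkx as nx

def reachable_pairs(nodes, arcs) -> int:
    H = nx.DiGraph()
    H.add_nodes_from(nodes)
    H.add_edges_from(arcs)
    return sum(len(nx.descendants(H, i)) for i in nodes)

def expected_worth(S, certain_arcs, uncertain_arcs) -> float:
    """E[|A-bar[S]|] when each arc in `uncertain_arcs` is present with its probability."""
    S = set(S)
    base = [a for a in certain_arcs if a[0] in S and a[1] in S]
    unknown = [(a, p) for a, p in uncertain_arcs.items() if a[0] in S and a[1] in S]
    total = 0.0
    for bits in product([0, 1], repeat=len(unknown)):     # all arc realizations
        prob, arcs = 1.0, list(base)
        for present, (a, p) in zip(bits, unknown):
            prob *= p if present else (1 - p)
            if present:
                arcs.append(a)
        total += prob * reachable_pairs(S, arcs)
    return total

certain = [(1, 2), (2, 3), (3, 2), (3, 4)]        # hypothetical known arcs
uncertain = {(1, 4): 0.5, (3, 1): 0.5}            # arcs with unknown presence
print(expected_worth({1, 2, 3, 4}, certain, uncertain))   # worth of the grand coalition
```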
2017-05-04T16:18:55.104Z
2016-11-29T00:00:00.000
{ "year": 2016, "sha1": "5086356bd5de019f189dd3ab3c455b4bbb1438d2", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fninf.2016.00051/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5086356bd5de019f189dd3ab3c455b4bbb1438d2", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Psychology", "Medicine", "Computer Science" ] }
12169194
pes2o/s2orc
v3-fos-license
Designing Badges for a Civic Media Platform: Reputation and Named Levels Badges are gaining momentum on the social web, but known HCI design experiences are very few. In this paper we present a design experience related to badges and named levels for representing reputation within the context of a Civic Media Platform called timu. We describe our design methodology and the exercise we have devised for designing named levels together with users. We then describe our design vision, which unites contextual details about timu with the rhetoric of progression provided by the language proficiency system. In the conclusion we generalise our design results. INTRODUCTION The advent of the networked digital society has brought an increased level of citizen engagement in civic life (Gillmor, 2004) through the means of Civic Media (web) Platforms (CMPs). Civic Media can be defined as any "medium" that fosters civic engagement and direct participation of people in civic life (Jenkins, 2007). The internet and social media allow a strong civic engagement of citizens, who can take advantage of many-to-many communication. For CMP participants, being able to recognise, formalise and display skills, achievements or reputation acquired with their participation is of crucial importance. This in turn could help participants to unlock further civic opportunities (including work opportunities), formalising what they have informally learned and facilitating interactions among themselves. Badges, we believe, can greatly help participants in formalising, assessing and displaying what has been achieved in a CMP. A badge can be defined as a visual representation of an accomplishment, skill or reputation gained in the context of a specific community or institutional setting. A legacy of badges can be found in the military, where they represent rank and authority, or in the scout movement, where they represent pupils' achievements (Halavais, 2011). Although badges are in widespread use in web platforms, relatively little research exists providing lively and detailed badge design experiences. In this paper we describe our experience in the design of a badge system for a CMP called timu (http://www.timu.it/), developed by the <ahref Foundation (http://www.ahref.eu/en) in Italy, whose goal is to enable citizens to collectively contribute to a bottom-up information ecosystem (De Biase, 2011). We propose the design of a badge system whose goal is to be a visual and efficient representation of the reputation of participants in timu. Specifically, the timu reputation can primarily be seen as a measure of how a user has contributed to the goals of the platform and to the development of the community. The badges will therefore be awarded by the timu platform after tracking user participation and contribution to the platform goals. In the remainder of the paper we will: analyse the concept of badges in the Human Computer Interaction (HCI) literature; describe the reputation system of timu; introduce the requirements for the timu badge system; describe our design methodology; and present the result of our design and the first sketch of the badge system for timu.
BADGES ON THE WEB AND IN HCI Designing effective and usable badge systems for the web is an important task for HCI research.Badges are increasingly becoming an important component of social media user interfaces, whose goal is to facilitate interactions among users and make the quality of the user profile explicit.An increasing number of mainstream social networks (e.g.Foursquare), online multiplayer games/platforms (e.g.World of Warcraft, Xbox Badges are often devised in order to make the participation in social media more engaging and motivating (Antin and Churchill, 2011).Obtaining a badge is something that should motivate users and it is also a mark of achievements within a community.In social media, badges can, indeed, serve as a synthetic representation of various aspects (although a badge does not need to represent them all) including community membership, authority, competence, experience, identity and reputation (Halavais, 2011).Badges may also support the transferability of skills, reputation and/or achievements to other platforms. Badges are often conceptualised as a "game mechanic" and in literature they are often treated as an instance of gamification1 (Zichermann and Cunningham, 2011;Schell, 2010).Gamification is the use of game mechanics and other game elements in non game situations.Game mechanics are rules that shape the game experience.Gamification is a new concept, which is gaining momentum in User Experience research.The concept is also entering HCI conferences (Deterding et al., 2011) and there are also services such as Badgeville (http://www.badgeville.com)that offer gamification services with, as the name suggest, a strong focus on badges. The concept of gamification originates in the areas of marketing and (positive) psychology (McGonigal, 2011) with a focus on making customers more loyal, and influencing their behavior toward desired results (such as buying a product) by the means of positive reinforcement (feedbacks and triggers).Under this marketing oriented umbrella a badge can therefore be seen as feedback meant to foster addictive behavior such as loyalty programs, something that suffers from some ethical concerns (Man, 2011).This, mostly correct, criticism of gamification as a way to create marketing feedback and triggers does not entirely undermine the role that badges could have in CMPs and social media more generally.If current existing contributions to badge design come only from gamification, this is perhaps because of a lack of HCI literature in general and not of gamification in itself. Badges, for example, can help in fostering an Online Connected Learning (OCL) ecosystem. Where OCL is the type of learning that happens in online spaces, and which, is informal, open-ended and motivating for people.Badges can help formalise and recognise OCL achievements and skills enabling users to transfer these to other contexts. 
A recent Digital Media and Learning Competition (http://dmlcompetition.net/Competition/4/about.php) supported by the MacArthur Foundation has been entirely devoted to the role of badges for fostering OCL.The competition has had tracks related to pure research on badges as well as on the design of badge systems.The design competition asked for the production of badges that could augment the infrastructure of the Mozilla Open Badge project (http://openbadges.org/).The Mozilla Open Badge is meant to foster the formalization of skills and achievements by the means of badges: making it easy for anyone to issue, earn and display badges across the web --through a shared infrastructure that's free and open to all. Despite the clear focus of the competition on designing badge systems, contributors are proposing already-designed solutions and do not provide detailed description of the design process behind a badge system design. TIMU AND ITS REPUTATION SYSTEM In this paper we describe the process that led to the initial design of a badge system for the CMP timu.More specifically we account for the process that led to the description of the labels (see section 3.1) of a badge system meant to be a visual and efficient summary of the reputation of participants within the context of this platform.Reputation is a key concept of contemporary web platforms and can be defined as (Dellarocas, 2011, p. 4): a summary of one's relevant past actions within the context of a specific community, presented in a manner that can help other community members make decisions with respect to whether and how to relate to that individual (and/or to the individual's works). Reputation is also a form of trust: an attitude which allows for risk-taking decisions.In the digital environment people often interact with other unknown users and interaction is often risky.Reputation can support the creation of social order in digitally mediated interactions (Taddeo, 2010): it is the kind of trust that one develops in an unknown agent by considering only the recommendations about that agent provided by other agents or by other information sources, such as newspapers or televisions.Referential trust is one of the main kinds of trust developed in digital environments in which communication processes are easily performed. In web platforms a detailed and well-designed reputation system facilitates therefore trustworthy relationships among unknown participants.Reputation systems allow for better decisionmaking for users and bring structure to web communities (Farmer and Glass, 2010).With a reputation system in place the user is not "unknown" in the eyes of other users. timu is a CMP whose goal is to stimulate the creation of a bottom-up and participative information ecosystem.Each timu user has a personal timu profile (Figure 1), which is composed of a personal picture, the name (e.g.Cinzia Massa) and username (e.g.cinzia), the number of inquiries (e.g. 2) to which she is contributing as well as the personal contributions (red icons) the user has made to the platform inquiries. 
Figure 1: example of a timu user profile timu has a reputation system (currently being developed) whose goals are to encourage individual participation and to stimulate the creation of a trusted network of participants.Reputation can be acquired with individual contributions or, in other words, with the upload of individual works in timu such as videos (first red icon of the profile), audio files (second icon), photographs (third icon), and written documents (fourth icon).For example, a person familiar with photography can enjoy a certain reputation that comes from the uploads she has made in timu and from the quality these uploads bring to timu inquiries.timu, indeed, hopes that citizens will be able to participate with their capabilities and abilities to create information that originates from civic life. Besides tracking the individual's contributions /uploads, the reputation system's goal is to allow the identification of those users to whom more trust could be delegated.More trust means that the users will be able to increasingly undertake certain administrative tasks such as flagging inappropriate contents or leading new inquiries.The timu reputation system is organised around 13 numerical levels2 (from the entry level 0 to the final level 12) and at each level the user acquires more administrative rights and/or unlocks new parts of the platform (see Table 1). Named Levels rather than Numbered Levels We have identified a few key shortcomings with numbered levels3 (from 0 to 12) as a representation of reputation levels: they are too "context dependent" and they do not provide good information about the user reputation.Being too context dependent means that numbers do not provide enough information to assess the user reputation outside timu.For instance, saying that a user has reputation 3 is unclear outside timu.It could be 3 out of 5 levels (quite good reputation) or 3 out of 100 levels (very low reputation).Secondly, numbers do not give enough details about the participant's contribution to timu (is a person with level 3 a good contributor or not?).These shortcomings of the numbers as a representation of reputation led the design to team to recognise the need for a different representation.Badges were identified as a possible solution to both the contextual and the informational problem of current/numerical reputation representation. In the Yahoo!pattern library there are two different patterns related to badges: non sequential and sequential.This second pattern, called "named levels" was considered an appropriate solution for our problem: Define a family of reputation levels on a progressive continuum.Each level is higher than the one before it. Unique names give the levels a fun and approachable quality.Quick comparisons between levels, however, become slightly more difficult. The Yahoo! pattern did provide a clue for the solution but not the solution in itself.The problem was therefore to identify a "family" of reputation names or labels able to represent numbered levels (from 0 to 12) of the timu reputation system.This was the central focus of the design experience. Designing Badges for a Civic Media Platform: Reputation and Named Levels De Paoli, De Uffici, D'Andrea More specifically, the idea was not to have a label for each number but to have a label grouping three numbered levels at once (e.g.badge 1: levels 1-3; badge 2: levels 4-6 and so forth), so as to have a compact badge set of four badges covering the whole 12 timu reputation levels. 
DESIGN METHODOLOGY As design methodology for the identification of labels we have adopted a light version of the interaction design process described in Cooper et al (2007, p. 24).Because we were not designing a full web based product (e.g. an entire platform) or complex wireframes, but just the badge system, we decided for a reduced version of the process with emphasis only on selected aspects that are described in the next paragraphs. User Research In order to better understand the usage of badges in web communities we conducted a qualitative indepth case study.Our goal was to obtain a better idea of the role of badges in social media with particular attention to users' desiderata and also to identify key aspects of badge systems that we could use on our own design.In particular, we conducted a virtual ethnography (Hine, 2000) of the Mozilla Open Badge project.Ethnography is a qualitative method whose goal is to obtain greater awareness of the point of view of social actors. Virtual ethnography is instead the ethnography of online communities.The Open Badge project was selected after a preliminary identification of a number of suitable cases (including for instance foursquare and World of Warcraft), because of the Digital Media Competition and the debates about badges fostered by such an initiative. The ethnography was conducted as part of the MA dissertation of one of the authors of this essay during the period from July 2011 to September 2011.Data was gathered from blogs of Mozilla Open Badge Developers, mailing lists, online articles and forum discussions.Data gathered was analyzed using a grounded theory approach (Charmaz, 2006) These results helped us develop the requirements for the timu badge system as well as modeling a number of scenarios (Carroll, 1995) that we used during design activities. Personas & Scenarios With the results from the ethnography at hand we developed a number of personas and context scenarios (how the badge would fit into the participants' lives).The design team already had personas that were prepared for the design of the timu user profile.These were just adapted to suit the specific design task of badges.For the design of the named levels (i.e.labels), the team decided to use 4 personas each corresponding roughly to a set of numbered levels (level 0-3: person Gianluca; 4-6: person Giovanna; 7-9: person Olga; 10-12: person Francesca).This is one of our personas Gianluca is 28 years old5 .He recently graduated in philosophy at the University of Parma and is now working as a clerk at a post office in a town in the Emilia Romagna Region.Recently Gianluca moved into a small rented apartment and, during leisure time, uses his computer to keep in touch with friends through Facebook and Twitter.He also loves playing online games.Using social networks, he got back in touch with old friends that he had when, as a child, he lived in another Italian city.Gianluca wouldn't mind knowing nice people on Facebook living nearby that he could meet offline during his free time. 
Context scenarios were developed using the results of the ethnography as well as using knowledge of the current timu user base.Four scenarios were developed taking in account the set of numbered levels (e.g.Scenario 0-3, Scenario 4-Designing Badges for a Civic Media Platform: Reputation and Named Levels De Paoli, De Uffici, D'Andrea 6 and so forth).We propose here an example where we also emphasize in bold some of the aspects that came from the ethnography results: Scenario for the levels 0-3, Person Gianluca Gianluca spends a lot of time on his PC when he is not at work.On Facebook he has seen that a friend has shared some new content via a platform called timu.Gianluca got interested and wanted to know more about this "timu" and typed the url www.timu.it on his browser. Everything on timu looked very interesting.He decided to join timu to participate in an inquiry on "The twenty years of the web".As we will better see, personas and scenarios have been used during the design activities. Requirements Definition With the ethnography results as well as with the preliminary identification of the general objectives of a badge system a number of requirements were identified by the team.The requirements for the timu badge system can be summarised as follows: (i) Badges should take the form of the pattern Named Levels: badges should be labels with a progressive increase, so as to differently represent numbered levels.This will allow the timu reputation to be better represented to users.(ii) The labels must also be easily understandable outside timu.This will facilitate portability of reputation, for example on the participant's personal blog.(iii) The labels of badges need to provide participants' specific motivations both in how they contribute to timu as well as for the obtaining of a badge.(iv) The named levels should, where possible, create a sense of participation in the community of timu both inside and outside the platform. DESIGN FRAMEWORK In this section we describe the design process that led to our named levels.Our work could be divided in three steps: preparation, design and post design. During the final step we created our badge's design vision for the timu platform. Pre-Session preparation A pre-design session was planned for the design team with a focus on making a detailed schedule for the design sessions.During the preparatory stage it was decided to have two different design sessions with users and to conduct the same experiment/exercise with each group of users.The advantage of doing two sessions is to create different views within the two different groups as different users can emphasize different aspects.In some cases, as has happened in this experience of design, people treat similar elements in different moments and with different terms, also providing different visions of possible solutions to a problem. During the preparatory stage the following schedule for the design sessions was decided: to prepare an introductory presentation towards the goal of the session and towards the goals of the timu reputation system; to present users with the personas and scenarios; to present users with the design exercise (that was prepared by the teamsee next section); to give the participants the following materials: the list of requirements, a copy of each persona with the corresponding scenario and the exercise templates; to conduct the exercise with them; and finally to foster a final discussion. 
Design Exercise For the design of labels we have conceived an exercise which is an adaptation of a popular User Experience exercise (from Adaptive Path) called "six-to-one" (for a description see Bowles and Box, 2011).In the original exercise, participants receive a template with six basic grids and are asked to produce six interface sketches within a specified time frame 6 .After this, participants are asked to select their best idea out of the six and to report it on a single (one) grid template.The timu design team considered that this exercise could be used for the design of badge levels and devised an adaptation called "three-to-one 7 " with also a specific template for named levels (Figure 3). A template with three simply named level grids (each with 4 green boxes) was prepared (Figure 3).Each grid represented roughly the Yahoo!named Designing Badges for a Civic Media Platform: Reputation and Named Levels De Paoli, De Uffici, D'Andrea levels pattern which we thought we would introduce during the design session.The same pattern image was reported on the right upper side of the template in order to facilitate its use.Further components of the template are: a recall to numbered levels of the reputation system corresponding to each of the named levels (e.g.0-3 under the first box; 4-6 under the second box and so forth).Also corresponding personas/scenarios were placed below each box.We thought that these two additions would help participants in developing their ideas. Figure 3: "three-to-one" template for named levels A second template with just one single named level grid was then prepared.As in the "six-to-one" exercise, the idea was to ask participants to select their best solution.The best solution would then be discussed by participants and the design team in order to better understand the logic of each proposed idea and to gather further material for the design of reputation labels. Design Session The purpose of the design sessions was to have potential users and actual users of the timu platform identify and propose ideas for the named levels of the badge system.Both sessions took place on the same day, 21 December 2011, at the meeting room of the <ahref Foundation and lasted for about 3 hours.The first session took place in the morning, starting at 9:30.The second session took place in the afternoon starting at 14:30.The two sessions were attended by ten people.These people/users were selected and contacted based on previous knowledge and experiences with the platform.We had planned to have 5 people for each session.However, the morning session was attended by three people only.The missing people from the morning session participated in the afternoon session, because they had transportation problems and this prevented them from taking part in the morning session.The afternoon session therefore had seven participants.This issue did not negatively affect the sessions. 
Design sessions were conducted by one of the authors and were organised according to what was planned in the preparation phase of the design.Firstly the author described to participants the goal of the design session, namely the need for designing something related to the reputation system of timu.Then, the mechanics of the timu reputation system were introduced as well as the need to represent numbered levels (i.e.reputation levels from 0 -12) in different ways.Reasons for this need (i.e.lack of contextual information provided by numbers) were also described.Badges were then introduced as a concept both in general terms and specific terms (i.e.badges in web based online environments).2) was presented and was related to the numbered levels of the timu reputation (figure 4). Also the requirements of the badge system were presented.Participants were told that the purpose of the session was to identify and propose names/labels for the timu badge system that would be able to replace the numbers and to express the idea of progress with effective names.During the presentation the personas and scenarios prepared for the design session were also presented.It was explained to participants that these would help them in proposing the labels. Figure 4: named levels and timu numbered levels The "three-to-one" exercise was then introduced, explaining to users that they were required to propose 3 different label solutions.The following material was then given to participants: a list of the requirements; a copy of each persona and scenario; and a template with the "three-to-one" exercise.Approximately 20 minutes were given for completing the task. After completing the task, users were provided with another template with the space for proposing just one solution.They were asked to choose the best of their three options.The final templates were then collected and discussed by participants using a sketch board. The discussion and requirements After each exercise the design team fostered a discussion among participants.The focus of discussions was to consider how much each of the proposed best solutions would meet the requirements.Each participant was asked to explain to others his/her proposed solution.Some important results were achieved during the discussions.The reflections we provide here come from our analysis of the exercise materials, including analysis of the design session audio file transcripts.We also provide a table (below) showing all the proposed best solutions. Progressive names Users of the morning session highlighted that a label can be easily expressed by a noun such as "participant" or "reporter".But to define reputation levels in a progressive way this is clearly not enough.Participants of both sessions said that a further name or adjective would be crucial (although not necessarily) for creating a sense of progression.An example of progressive names is the case 6, with the noun "citizen journalist" accompanied by progressive names similar to those of Olympic medals (i.e.bronze, silver and so on), which also give a sense of upward progression of named levels. Figure 5: moment of the discussion An adjective instead is a word that determines the quality of nouns or their situation.Participants therefore underlined that for meeting the progression of named levels a progressive adjective would better accompany the noun.As pointed out by a participant in the morning session: the adjective describes an argument, a growth.The noun must instead contextualise and tell what you're getting8 . 
User number 3's proposal is an example of this with a noun "member" accompanied by a series of upward adjectives (Junior, Fellow, Senior).Another participant pointed out that the noun (e.g.member) gives information about the context of the platform while adjectives instead relate to the user's increases of reputational levels.Number 1 is an example of this with the nouns "civic reporter" accompanied by progressive adjectives (Junior, master and so on). The proper selection of progressive adjectives and/or names allows us to meet the first requirement: creating progressive labels.During the discussion it emerged that some choices were definitely better in characterizing progression and in particular: i. those similar to the language proficiency system (beginner, intermediate and so forth) (cases 2 and 4); ii. those recalling seniority (cases 1 and 3); Other proposed solutions did not provide a clear progression.For instance in the solutions 5 or 8 the progressive logic is not clear.While discussing solution 5, a participant noticed that: You can see the progression if you look at all four together.The central pair [participative, constant] progression however is not immediately apparent. Designing Badges for a Civic Media Platform: Reputation and Named Levels De Paoli, De Uffici, D'Andrea Those choices that did not provide a clear sense of progression were therefore identified by users themselves. Portability Portability is the second key requirement of the timu named levels.Most participants in their best models emphasized the importance of universally understandable labels.Portability means that a label should provide immediate feedback about the reputation level, when a badge is seen outside the context of timu.For example some participants emphasized the importance of universallyunderstandable nouns related with the timu platform, in this way linking the context (timu) with the outside world.Therefore nouns such as "reporter or "citizen journalist" (cases 1 and 6) could be understandable outside timu.All these nouns are also strongly linked to the platform goals.However, for understanding the reputation levels, contextual nouns are not enough.The noun "reporter" as such does not emphasize progression or reputational levels.Further adjectives and names characterizing progression are therefore important. During the discussion, four out of ten people said that the adjectives related to the language proficiency system are quite universally understandable: beginner, intermediate, advanced.One of the people which proposed these adjectives emphasized their portability: This system has maximum portability because it is universally used not only at platform level but at any language proficiency rating level. These four adjectives that qualify the person's activities, effectively express the idea of progression.Seeing the label "beginner reporters" for instance provides immediate information about the reputation level (i.e. a user with a low level, just at the beginning of her experience).The label "expert reporter" also provides immediate feedback about the reputation (i.e. the user is quite expert). A second case was identified as fully portable: those with the names related to Olympic medals.Also this model is universally understandable because of the wide diffusion of this system to characterise winners in sporting competitions. 
Motivation Another key requirement is the motivation that badges should trigger.This requirement emerged from our user research on the case study of Mozilla Open Badge.All participants said that they tried to express labels able to motivate people's participation in timu.User number 7's example has a clear focus on user motivation and comes from the personal experience of a user with video game platforms which often provide badges with funny and entertaining labels. Another solution, however, emerged as clearly motivational and able to better represent numerical levels (in this way augmenting also the progressive names).This is the case for number 6.The person proposing this system said this came from his experience as an alpine skiing instructor at a professional level: For four years I was an alpine ski instructor and we use stars: 1, 2, 3 bronze-stars, 1, 2, 3 silverstars and 1, 2, 3 gold-stars which are an evaluation of how good the person is at skiing.The discussion about this solution lead to a realization that each of the four badges could be further augmented by stars or points.Therefore the "badge1" would also have 1, 2, 3 stars corresponding to level 1, level 2 and level 3.The "badge 2" would also have 1, 2, 3 stars corresponding to levels 4, 5 and 6 and so forth.This idea would allow a better granularity of reputation representation and provide more motivation because a user would unlock not only badges at each of the 3 levels but also unlock a "star" at each level.A participant noticed that this system: is very motivating if at the beginning everything is turned off.I really like the idea of putting the stars in the badge.If you are between 0 and 1 you will get a star. We considered this suggestion an important contribution in meeting the motivational requirement of our badge system. Participation Another requirement for badges is their role in triggering participation with the community.Some of the proposed solutions (5 and 7) have an emphasis on participation.The person proposing solution 5 also emphasized how personas and scenarios helped this definition: I saw Gianluca as someone who contributes occasionally. […] Giovanna instead is "participative" in the sense that she is already present on the platform, she has already contributed. It should be noted however that not only words directly related to "participation" (e.g.participative) was understood as meeting the requirements.In fact, another user pointed out that his choice of other words was made with participation in mind: "member", because we were thinking about participation. The noun "member" therefore was meant to give a sense of membership."Member" however does not give a sense of activity and contribution to the creation of information but it only gives a sense of participation.The design team realised that this requirement was poorly met by the user proposals. Post-Design: design vision On January 10 th 2012, approximately 3 weeks after the exercise, the design team met in order to discuss the results.The time between the sessions and the final meeting was used to transcribe and analyze audio files.The goal of the meeting was to formalise a design vision for named levels for the timu reputation.The vision can be described using the original requirements: (i) Progressive Named Levels: the Olympic medals rhetoric (bronze / silver / gold) and the language proficiency rhetoric (beginner / intermediate / advanced) were both considered the best solutions.Some changes were, however, introduced as in tables 3 and 4. 
For instance the Olympic medal model has only 3 steps so it was decided to add platinum after gold.In the case of the language proficiency we decided to use "expert" instead of "advanced" and to add "master" as the fourth level.This change was considered to provide a better sense of progression over 4 levels. (ii) Portability: Nouns should reflect the user's ability to contribute to timu as well as the contextual-universal relation between timu and the remainder of the information ecosystem: the noun "reporter" was considered the best option among those selected by the users.The design team also decided to resort to a noun often used by users in their "three-to-.one"models but which was not picked up in their "best models": the word "contributor".Contributor gives a sense of active participation. In relation to progressive named levels, the chosen systems -language proficiency and Olympic medals alike -were both considered easily portable and understandable also outside timu.This also emerged from the user discussions.(iii) Motivation: the idea of having stars for each numbered level was considered an important motivational aspect.The design team decided to adopt this view.(iv) Participation: the design team decided that the name of the platform would always accompany the named levels to give a sense of participation (e.g."timu" expert reporter; "timu" silver contributor).This was decided because user proposed models did not sufficiently meet the original requirement. Based on these considerations 4 different systems of named levels were created.Sketches representing each of these models were also prepared.These sketches would provide a general graphical idea of the concepts developed during the design.Figure 6 provides an example also showing an emphasis on how the design choices meet the original requirements. VALIDATION On January 20 th 2012 badges were validated during a session in which one of the designers presented the result of this research to a number of people involved in the creation of the timu platform: the president and the director of the ahref Foundation, the team manager and another researcher.The whole design process was reconstructed for them including the identified requirements and how the four proposed solutions would meet them. The solutions with the Olympic medal rhetoric were discarded during the validation as they imply competition.timu is not a competitive community and therefore the medal progression would not adequately represent the spirit of the community.The solutions related to language proficiency were instead considered suitable for the platform because they imply a learning activity and personal improvement, something directly related to the goals of timu.Also the noun "reporter" was considered better than "contributor" because it is closer to the idea of producing a bottom-up civic information ecosystem.Solution number 3 was validated for further graphical development. 
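To show how the validated design maps the thirteen numbered reputation levels onto four named badges with stars, here is a minimal Python sketch. It is our own reading of the design vision, not part of the validated specification: the exact label wording follows Table 4 and requirement (iv) ("timu" + language-proficiency adjective + "reporter"), while the treatment of the entry level 0 and the restart of the star count inside each badge are assumptions.

```python
# Minimal sketch of the named-level mapping implied by the design vision.
# Assumptions: level 0 carries no badge yet, and stars restart at 1 for each
# group of three numbered levels within a badge.
ADJECTIVES = ["beginner", "intermediate", "expert", "master"]

def badge_for_level(level: int):
    if not 0 <= level <= 12:
        raise ValueError("timu reputation levels run from 0 to 12")
    if level == 0:
        return None                      # entry level: no badge yet (assumption)
    group = (level - 1) // 3             # 0..3 -> which of the four badges
    stars = (level - 1) % 3 + 1          # 1..3 stars inside the badge
    return f"timu {ADJECTIVES[group]} reporter", stars

for lvl in (0, 1, 5, 9, 12):
    print(lvl, badge_for_level(lvl))
# e.g. 5 -> ('timu intermediate reporter', 2), 12 -> ('timu master reporter', 3)
```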
CONCLUSION: LESSONS LEARNED Badges are gaining momentum on the social web but known design experiences are very few.In this paper we presented a design experience related to badges and named levels for Civic Media.In particular we showed how we solved the problem of representing numerical levels of reputation for the users using named levels.Our design experience provided interesting reflections about the use of nouns and adjectives for building the badge's named levels.A further contribution of our design experience is the "three-to-one" exercise.Although in itself the exercise could be still improved, it provides an interesting and reusable solution for the design of a badge's named levels. Finally, it emerged during design sessions, that users brought their past experience (for example as online gamers) into their formulation of named levels and in discussions.The case of the alpine skiing instructor is revealing in this regard.His idea of stars was incorporated into the badge's design vision and offered an interesting solution for representing the granularity of reputation levels. Figure 2 : Figure 2: named levels from Yahoo! pattern library 4 in order to identify recurring themes.The user research lead to the following results, which partly confirms some of the few existing analyses of badge systems (Antinn and Churchill, 2010; Halavais, 2011):  Badges have a key role in triggering motivation for user participation in web platforms;  Badges can support the transferability of users' reputation, skills and achievements across contexts and different platforms;  Badges can play a crucial community building role strengthening the ties among participants;  Badges can play a crucial role in representing users' reputation, skills and achievements in online platforms. Table 1 : Excerpt of the timu reputation table with achievements for the first four levels He knows badges, which represent achievements and skills [reputation], from his personal experiences in playing World of Warcraft.He also thinks it would be quite engaging to gain further badges [motivation] in the future. To better explain what badges are, named level examples were proposed related to military hierarchy and to web platforms with examples from World of Warcraft and Yahoo! sport.Another topic of the presentation was to provide information about the named level pattern and its possible relation to the timu reputation.The named levels picture (figure Table 2 : Best named levels solutions proposed by users Table 3 : Contributor Named Levels Table 4 : Reporter Named Levels
2015-07-06T21:03:06.000Z
2012-09-10T00:00:00.000
{ "year": 2012, "sha1": "12bdfee3e0b5b2c601d2c2d323a0c1fd432bcd23", "oa_license": "CCBY", "oa_url": "https://www.scienceopen.com/document_file/f2b65fef-b436-4069-acf2-a7343c26c9fd/ScienceOpen/059_Paoli.pdf", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "12bdfee3e0b5b2c601d2c2d323a0c1fd432bcd23", "s2fieldsofstudy": [ "Computer Science", "Political Science" ], "extfieldsofstudy": [ "Engineering", "Computer Science" ] }
237548059
pes2o/s2orc
v3-fos-license
Voice Controlled Fire Fighting Robot : Even though there are a lot of advancements in technology, there have been an increased number of devastating losses in the field of fire-fighting. Fire accidents that occur in industries like atomic power plants, petroleum refineries, chemical factories and other large-scale fire industries end in quite serious consequences which can cause injuries or even death of individuals. Therefore, this paper is enhanced to develop an automated fire extinguishing robotic vehicle that saves the lives of firefighters and other persons in those areas. The proposed robotic vehicle is controlled using specified speech commands. The language input is more familiar which makes interaction with the robotic vehicle much easier. The advantages of voice-controlled robots are hands-free and rapid data input operations. The speech recognition process is done in such a way that it recognizes specified commands from the user and the designed robot navigates based on the instructions via the speech commands. The fire can be extinguished using a water tank that is fitted along with the robotic vehicle. Consequently, the site of fire is live monitored using ESP 32 and the status of the fire zone is updated to the user through message. I. INTRODUCTION Lately, extinguishing fires is a dangerous issue. Various analysts are chipping away at different techniques for fire stifling. Creator Ratnesh Malik et al. has developed a methodology towards a sort of putting out fires. The robot is arranged and created so that it can douse the fire. The robot is totally self-initiated. It completes thoughts like environmental distinguishing and care, relative motor control. The robot gets information from its sensors and interfaced segments. Photosensitive sensors are utilized to recognize fire dependent on their sources. At the point when the fire is perceived, the robot alerts the environmental factors. Around then, it starts water to be sprinkled on the fire. The use of sensors and microcontrollers permits it to perceive fire normally at the very least delay. This robot is used at a piece of zones that are in high danger. [1] Swati Deshmukh et al have built up a remote firefighting robot. It includes a framework that can perceive fire and pass it over. It can explore in a forward and reverse way and turn left or right. Thusly, a fireman can work with it over significant distances. These resistors are very delicate and are prepared for perceiving a little measure of fire. It is a brilliant multi-sensor based security framework. [2] A mobile controlled robot with fire detecting sensors was built by Lakshay Arora which includes a cell phone that controls a robot by making a call to the mobile phone which is added to the robot. Other than the call activation period, if any key is pushed on the phone, the tone contrasted with the key pushed is heard at the furthest edge of the call that is determined to the robot. The robot faculties Dual-Tone Multiple-Frequency (DTMF) tone with the help of a phone mounted on the robot. The got code is set up by the microcontroller and from that point forward, the robot performs according to the requisites. In the proposed structure, DTMF development is used to situate the position of the motor at a necessary point with different sensors, each playing out its operations. The paper analyzes the development to find movements using an android propelled mobile which has an inbuilt Bluetooth module and an accelerometer to control the vitality of the robot. 
The Microcontroller controls the different indications of the Bluetooth module. Favorable circumstances like easy interfacing, minimization of space occupied and weight-less can make it a better alternative when contrasted with the other models. [4] Saravanan P has developed an Integrated Semi-Autonomous Fire Fighting Mobile robot. The System takes control of four D.C. motors that are energized by Atmega2560 and constrained by course structure. The course structure comprises of fused ultrasonic sensors and infrared sensors. The robot is fitted with a distinct camera that records video and communicates it. The fire area comprises of LDR and temperature sensor. In the occasion when there is a fire, the sensor recognizes it and the robot will arrive at the wellspring of root of the fire and douses it. The smothering system comprises of a BLDC motor with a water holder. The SABOT is utilized for uncommon conditions and it contains a GUI support through which robots can be provided orders. [5] All the above papers have their issues, this paper is enhanced to overcome those issues and to propose a fire fighting robot that is operated using speech commands which is entirely an easy interfaceable and quick access process. is used to pass the information gained from various sensors to the firefighter. According to the message received, the firefighter passes the speech commands through the Bluetooth module connected wirelessly using a mobile phone. The second section consists of ESP32 which has both WiFi accessibility and camera access. This is used for live monitoring the fire site which can be viewed by the firefighter away from the fire accident zone. Whenever the fire is detected through the flame sensor, the water tank attached to the robotic vehicle sprinkles water all around the site. The above-described sections are assembled over a robotic vehicle chassis. The robotic vehicle consists of four wheels that are operated using the motor drivers. Thus, the entire robotic vehicle is navigated using voice commands and the fire is extinguished using water tank. A. Temperature Sensor The thermistor is used as a temperature sensor. Usually, the temperature increases when the voltage signal produced by the temperature sensor increases. The temperature sensor is used to sense the smoke and fog in the environmental surroundings. If the sensed value is 1, it indicates that there are no fire accidents. If any fire accidents occurred, it shows the value as 0. Then the sensor sends SMS to the concerned person about the status of the fire-prone area. B. Ultrasonic Sensor The obstacle sensor used here is ultrasonic sensor. The distance between the obstacle and the sensor is calculated by the ultrasonic sensor. It performs this function using ultrasonic waves. The ultrasonic waves are those whose frequencies are about 20000 hertz. It is used to detect the obstacles in the path of the robotic vehicle. If it detects the obstacles in its path, it automatically stops the robot's movement. C. Gas Density Sensor The gas density sensor used here is smoke detecting sensor. It senses smoke and can be used a fire indicator. It can be used for high pressure and high-temperature applications. A gas density sensor is used to detect the gases in the atmosphere. They can be used for both indoor and outdoor environments. Very fine particles like cigarette smoke can be effectively detected using a gas density sensor and it is generally used in the air purifier system. D. 
ESP32 Module ESP32 can be operated at a temperature ranging from -400C to +1250C and it is capable of operating reliably in an industrial environment. ESP32 is developed for mobile phones, wearable electronics and IoT applications. It is used to achieve ultralow power consumption and it has a high level of integration with inbuilt antenna switches, RF balun, low noise receive amplifier, filters, and power management modules. It is a wifi module that is used to give the live stream of that particular area. It has access to hybrid wifi and Bluetooth chip. E. Arduino Arduino Uno is an 8-bit ATmega328P microcontroller. Besides ATmega328P, it has other components such as a crystal oscillator, voltage regulator, serial communication, etc., to support the microcontroller. It can be used to communicate with a PC, another Arduino or other microcontrollers. Arduino Uno can be programmed using Arduino IDE. It is used in the prototyping of electronic products and systems. Here, Arduino Uno is used to controlling the various sensors such as ultrasonic sensor, gas density sensor and temperature sensor. It provides the status of the fire zone to the firefighter as a message. It also controls the HC-05 Bluetooth module that is connected to the firefighter mobile phone through an application that uses Bluetooth. This application is used to provide commands to the robotic vehicle. III. RESULT AND DISCUSSION The results are accomplished as per the proposed model. The status of the fire zone is sensed by the signals obtained from various sensors connected to the Arduino controller and updated to the user through GSM connected. Fire can be extinguished by sprinkling water whenever the fire gets sensed. The overall status of the fire site is live monitored using ESP32. Robotic vehicle motion is entirely controlled using voice commands like "forward", "reverse", "left" and "right" which is given wirelessly to the robot using an inbuilt mobile app. Fig. 7. Home screen of Arduino Voice Control App The above fig. 7 shows the home screen of the Arduino voice control app that helps in processing the given speech commands to the Arduino using the HC-05 Bluetooth module. Fig. 8. Hardware Model The above fig. 8 shows the prototype model of a voice controlled firefighting robot. IV. CONCLUSION This paper presents our proposal on the concept of developing a voice-controlled fire-fighting robot. The advantages of our proposed paper include an easy user interface, quick access and rapid-fire extinguishing process. It helps in minimizing the work of firefighters and saving their lives. The proposed model can be further enhanced by replacing the material used for the robot with materials suitable for fire-resistant, maximum strength and fatigue resistant. The ESP32 can be replaced by a 360 o camera to view the entire site at a time. The water can be replaced with some chemical agents that can easily extinguish the fire occurred or with foam that is filled in a tank as a fire extinguishing agent.
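As a rough illustration of the control flow described above, where recognized speech commands arrive over the HC-05 link and are mapped to motor actions while the ultrasonic obstacle check overrides movement, the following sketch mimics the dispatch logic. The actual system runs as Arduino firmware, so this is only a behavioural sketch in Python; the motor-state encoding, the 20 cm obstacle threshold, and any command spelling beyond "forward", "reverse", "left" and "right" are assumptions.

```python
# Behavioural sketch only (the real controller is Arduino firmware, not Python).
# Assumed: commands arrive as strings from the voice-control app, and an obstacle
# closer than 20 cm (ultrasonic reading) forces a stop regardless of the command.
MOTOR_STATES = {
    "forward": (1, 1),    # (left motor, right motor): both forward
    "reverse": (-1, -1),
    "left":    (-1, 1),   # spin left: left motor back, right motor forward
    "right":   (1, -1),
    "stop":    (0, 0),
}

def dispatch(command: str, obstacle_distance_cm: float):
    """Map a recognized voice command to motor states, honouring the obstacle check."""
    if obstacle_distance_cm < 20.0:                      # ultrasonic override
        return MOTOR_STATES["stop"]
    return MOTOR_STATES.get(command.strip().lower(), MOTOR_STATES["stop"])

print(dispatch("Forward", 120.0))   # (1, 1)  -> drive forward
print(dispatch("left", 15.0))       # (0, 0)  -> obstacle too close, halt
```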
2020-10-28T18:50:21.581Z
2020-09-30T00:00:00.000
{ "year": 2020, "sha1": "ecd0ca2986e1336308e436b28c9d60bcf38c2ad2", "oa_license": null, "oa_url": "https://doi.org/10.35940/ijrte.c4407.099320", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "67ab233a5ee62c6f5aafa2a7e94bc83f1e425fbe", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [] }
31987289
pes2o/s2orc
v3-fos-license
Development of a rapid resolution liquid chromatography-diode array detector method for the determination of three compounds in Ziziphora clinopodioides Lam from different origins of Xinjiang Context: As a traditional Uygur medicinal plant, Z. clinopodioides Lam has various uses in Xinjiang. Aims: A reversed-phase rapid resolution liquid chromatography (RP-RRLC) method with diode array detector (DAD) was developed for the simultaneous determination of diosmin, linarin, and pulegone from Ziziphora clinopodioides Lam, a plant widely used in traditional Uygur medicine for treating heart disease, high blood pressure, and other diseases. Settings and Design: Compounds were separated on an XDB-C18 reversed-phase analytical column (50 mm × 4.6 mm, 1.8 μm) with gradient elution using methanol and 1% aqueous acetic acid (v/v) at 0.9 mL/min. The detection wavelength was set at 270 nm. Materials and Methods: Samples of Ziziphora clinopodioides Lam. were collected from ten different origins in Xinjiang, including the Ban fang ditch, Tuoli, the Altay mountains, Terks, Xiata Road, Zhaosu Highway, Guozigou, Fukang, Jimsar, and Wulabo. Statistical Analysis Used: The intra-day and inter-day precisions of all three compounds were less than 0.89% and the average recoveries ranged from 97.4 to 104.1%. There were highly significant linear correlations between component concentrations and the corresponding chromatographic peak areas (R2 > 0.999). Results: The proposed method was successfully applied to determine the levels of the three active components in Z. clinopodioides Lam. samples from different locations in Xinjiang. Conclusions: The proposed method is simple, consistent, and accurate, and could be utilized as a quality control method for Z. clinopodioides Lam. INTRODUCTION Herbal medicines have been used over many centuries in Asia and have become more popular worldwide in recent decades. Medicinal herbs may contain hundreds of complex active components, and it is often impractical to identify all these substances by quantitative analysis [1]. Therefore, for quality control purposes we can only determine those components that are present at relatively high levels and are bioactive. We decided to develop a Rapid Resolution LC method suitable for the determination of different compounds in crude extracts of selected medicinal herbs, on a short C18 analytical column packed with 1.8 μm silica-based particles, using methanol as the organic solvent in a binary mobile phase system. Traditional Uygur medicines are natural therapeutic agents used in accordance with the guiding theory of traditional Uygur medical science. They have been widely used in China since antiquity for the prevention and treatment of diseases. Ziziphora clinopodioides Lam of the family Lamiaceae is indigenous to China, Mongolia, Turkey, Kazakhstan, and Kyrgyzstan. It is a semi-perennial shrub-like plant that grows on low hills, grasslands, and arid slopes. [2] As a traditional Uygur medicinal plant, Z. clinopodioides Lam has various uses in Xinjiang, including the treatment of heart disease, high blood pressure, asthma, hyperhidrosis, palpitation, insomnia, edema, cough, bronchitis, and lung abscess. [3] Several studies have revealed that it has a wide range of antimicrobial [4,5] and antioxidant [6] effects. Senejoux and others also explored the vasodilating effects of Z. clinopodioides Lam and the underlying mechanisms. [7] To date, research on Z.
clinopodioides Lam has focused mainly on the chemical constituents of its essential oils and their bioactive constituents, of which pulegone is considered the main ingredient. [8][9][10] In preliminary studies, our research group studied the stability of the plant's essential oils, [11] inhibit bacterial activity screening, and volatile oil chemical composition analysis of Z. clinopodioides Lam, [12] determined its oleanolic acid and ursolic acid content by HPLC [9] and simultaneous determination of caffeic acid and rosmarinic acid in Z. clinopodioides Lam from different sources in Xinjiang. [13] Moreover, we investigated the total polyphenolic and flavonoid content as well as the antioxidant activity of Z. clinopodioides Lam extracts of different polarity [6] and determination of ten metal elements in Z. clinopodioides Lam by microwave digestion-FAAS. [14] Other components in Z. clinopodioides Lam are caffeic acid, rosmarinic acid and oleanolic acid, ursolic acid, salylic acid, and flavonoids. [15] Flavonoids are known to possess potent anti-inflammatory activity in both humans and animals, and recently their topical application has met with considerable interest. [16,17] Diosmin and linarin are common naturally occurring flavonoids with a number of interesting biological activities. As a flavonoid, diosmin [ Figure 1a] also exhibits anti-inflammatory, free-radical scavenging, and antimutagenic properties, [18] linarin [ Figure 1b] is believed to possess parasiticide, anti-microbial, analgesic, anti-viral, anti-proliferative, anti-hypertensive, anti-oxidant, and anti-inflammatory properties. [19] These pharmacological effects are consistent with the herbs of Z. clinopodioides Lam. Pulegone [ Figure 1c], a monoterpene hydrocarbon reported to be one of the major active components of Z. clinopodioides Lam, can act against Gram-positive and Gram-negative bacteria. It also has a wide range of antimicrobial and antioxidant effects. [20] We develop an RRLC method that allows for the simultaneous determination of at least three of these putative bioactive ingredients, diosmin, linarin, and pulegone. This method may form the basis for a more efficient analytic procedure to assess the medicinal quality of Z. clinopodioides Lam samples and as a preparative aid for future studies on therapeutic mechanisms. Materials and reagents Whole plant samples were collected from ten different origins in Xinjiang. Samples from the Ban fang ditch, Tuoli, the Altay mountains, Terks, Xiata Road, Zhaosu Highway, and Guozigou were obtained in July, 2010 (NO:ZYMZY20100701-07). Samples from Fukang and Jimsar were collected in August, 2009 (NO:ZYMZY20090801) and the samples from Wulabo were collected in July, 2008 (NO:ZYMZY20080701-02). All specimens were stored in the Traditional Chinese Medicine Ethnical Herbs Specimen Museum of Xinjiang Medical University. The plant materials were identified by Yonghe Li, a chief apothecary of the Chinese Medicine Hospital of Xinjiang. The standards of pulegone were purchased from the National Institute for the Control of Pharmaceutical and Biological Products (Beijing, China). The standards of diosmin and linarin ware purchased from Sigma (USA). Acetic acid of analytical grade was obtained from Tianjin, Fuyu Chemical Reagents Company. Reverse phase RRLC grade methanol was supplied by Fisher Scientific (USA) and water was obtained from a Millipore Q3 ultra-pure water system (Millipore, USA). 
Rapid resolution liquid chromatography analysis Analyses were carried out on an Agilent 1200 system with a diode array detector (DAD). The detection wavelength was set at 270 nm. An Agilent XDB-C18 column (50 mm × 4.6 mm, 1.8 µm) was used with a flow rate of 0.9 mL/min. The injection volume was 5 μl and the column temperature was maintained at 30°C. The mobile phase consisted of solvent A (1% acetic acid in water) and solvent B (methanol), using gradient elution as follows: 0-10 min, 62-52% A; 10-20 min, 52-40% A. Preparation of standard solutions Each standard of diosmin, linarin, and pulegone was accurately weighed, dissolved in methanol:DMSO (3:1 v/v) and diluted to the appropriate concentration for analysis. All stock solutions were stored at 4°C. Preparation of samples The herbal samples of Z. clinopodioides Lam. were first crushed into coarse powder. The pulverized powder (0.2000 g) was added to methanol-DMSO in a 30 ml Erlenmeyer flask for chemical extraction. Method development and optimization To obtain chromatograms with well-resolved peaks and minimal analysis time per run, chromatographic conditions were optimized, including the mobile phase, column temperature, and flow rate. Various mixing ratios of water to methanol as the mobile phase were tried, but no satisfactory separation was achieved. It was found that the presence of acids in the mobile phase enhanced the resolution. The results showed that 1% acetic acid buffer in the mobile phase significantly improved the retention behavior and peak shape of the different components in Z. clinopodioides Lam. However, using isocratic elution, the three compounds could not be separated effectively, and so gradient elution was used throughout the study. Other chromatographic variables were also optimized, including column temperature (25, 30, or 35°C) and flow rate (0.6, 0.8, 0.9, and 1.0 mL/min). Eventually, the optimal separation was achieved at a column temperature of 30°C and a flow rate of 0.9 mL/min. To determine the appropriate wavelength for simultaneous determination of diosmin, linarin, and pulegone, standard solutions were injected into the RRLC system and the UV spectra measured over the range 190 to 400 nm by DAD. The UV spectra of all three compounds showed the same absorption maximum at 270 nm. System suitability System suitability tests are an integral part of method development and are used to ensure adequate performance of the chromatographic system. Resolution (R), retention time (RT), number of theoretical plates (N), and tailing factor (T) were evaluated in five replicate 5 μl injections of the standards. As shown in Table 1, all parameters were within acceptable limits. Method validation Linearity, limits of detection, and quantification Regression analyses were performed using GraphPad Prism 4.00. The correlation coefficient r2 and linear regression equations were computed by the partial least square method and the quality of the curve fit obtained was assessed. The linear calibration curves were constructed from five concentration assays performed in triplicate. The linear regression equation was Y = aX + b, where Y and X are the values of peak area and concentration of the reference compound, respectively. The results of the regression analyses and the calculated correlation coefficients (r2) are listed in Table 2. The high correlation coefficient values (r2 > 0.999) indicated good linearity between peak areas (Y) and compound concentrations (X, mg/mL) over relatively wide concentration ranges.
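For readers who wish to reproduce this kind of calibration outside GraphPad Prism, the sketch below shows how a straight-line fit Y = aX + b and its r2 can be obtained by ordinary least squares in Python. The concentration and peak-area values are hypothetical placeholders rather than the data behind Table 2, and the back-calculation of an unknown at the end is purely illustrative.

```python
import numpy as np

# Hypothetical five-point calibration data for one analyte
# (concentration in mg/mL, peak area in arbitrary units).
conc = np.array([0.02, 0.05, 0.10, 0.20, 0.40])
area = np.array([151.0, 372.0, 748.0, 1493.0, 2990.0])

# Ordinary least-squares fit of the calibration line Y = aX + b
a, b = np.polyfit(conc, area, 1)

# Coefficient of determination r^2 between measured and fitted peak areas
fitted = a * conc + b
ss_res = np.sum((area - fitted) ** 2)
ss_tot = np.sum((area - area.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(f"Y = {a:.1f}X + {b:.1f}, r^2 = {r2:.4f}")

# An unknown sample's concentration is then read back off the line
unknown_area = 1100.0
print(f"Estimated concentration: {(unknown_area - b) / a:.3f} mg/mL")
```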
The limit of quantification (LOQ) was defined as the lowest concentration giving a peak height 10 times the baseline noise (S/N = 10). The minimum detectable concentration, defined by a signal-to-noise ratio (S/N) of 3, was considered the limit of detection (LOD). The LOQ and LOD values for the three chemical components are also listed in Table 2. Stability and precision Intra-day and inter-day precision and accuracy were evaluated by analyzing quality-control samples. The intra-day variation was examined by analyzing five individual sample solutions prepared from the same crude sample of Z. clinopodioides Lam on the same day. Inter-day precision and accuracy were determined by once-daily trials on three consecutive days. Variations were expressed as relative standard deviation (RSD). The values of the intra- and inter-day variations were less than 2.0%. The instrumental precision was evaluated by five replicate injections of the Ban fang ditch sample solution, and the RSD value was below 0.89%. Reproducibility The reproducibility of extraction was also investigated for the three components by comparing six samples from six independent extractions. Six 0.2000 g samples of Z. clinopodioides Lam powder from Ban fang ditch were accurately weighed, prepared, and analyzed by RRLC. The RSD values of the six replicates were less than 2.0% for all compounds, demonstrating the high reproducibility of the sample preparation procedure. Recovery Recovery tests were performed to further investigate the reproducibility and efficiency of the extraction and analysis method. Recoveries of the three compounds were determined by the method of standard addition. Three concentrations of the compound standard solutions were used to spike Z. clinopodioides Lam samples containing known amounts of each compound (namely 50% of the compound). The mixture was extracted and analyzed as described. The mean recoveries of the three compounds were 104.1% for diosmin, 102.3% for linarin, and 97.4% for pulegone, with RSD values of 1.6, 1.2, and 2.1%, respectively. These results indicate that the developed analytical method is reproducible and accurate, and is therefore satisfactory for quantitative analysis. Application to the analysis of Ziziphora clinopodioides Lam samples The developed analytical method was successfully applied to the simultaneous determination of the three components in ten different samples of Z. clinopodioides Lam. All three compounds were detected in every sample. Each sample was analyzed in triplicate and the peaks in the chromatograms were identified by comparing the retention times and UV spectra with those of the authentic standards. The three compounds were quantified in the ten samples [Table 3]. DISCUSSION We describe a new RRLC separation method that uses a 1.8 μm stationary-phase particle size instead of the usual 5 μm columns and proved to be efficient, precise, accurate, sensitive and time saving, enabling the determination of diosmin, linarin, and pulegone from Z. clinopodioides Lam by reversed-phase chromatography with gradient elution and DAD detection. Simultaneous detection could allow a large number of herbal samples to be analyzed in a relatively short period of time. Studies are ongoing in our laboratory to further characterize the active compounds of Z. clinopodioides Lam. It is clear that there was a significant variation in the contents of the three compounds between the ten samples obtained from different regions.
Similar variations have been found for other components and may be attributed to the different growing conditions at the sampling sites. These variations in the bioactive components influence the medicinal quality, so it was necessary to develop an effective qualitative and quantitative method to evaluate the overall quality of Z. clinopodioides Lam. The simultaneous analysis of several chemical components is a promising tool for the quality control of medicinal herbs.
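To make the validation arithmetic reported above easier to follow, the following minimal sketch (with hypothetical numbers, not the values behind Tables 2 and 3) illustrates the two definitions used in this study: the signal-to-noise based LOD/LOQ, assuming peak height scales linearly with concentration near the origin, and the standard-addition recovery.

```python
def lod_loq(slope, baseline_noise):
    """Concentrations giving S/N = 3 (LOD) and S/N = 10 (LOQ),
    assuming peak height ~= slope * concentration near zero."""
    lod = 3 * baseline_noise / slope
    loq = 10 * baseline_noise / slope
    return lod, loq

def recovery_percent(found, original, added):
    """Standard-addition recovery: fraction of the spiked amount recovered."""
    return (found - original) / added * 100.0

# Hypothetical values: calibration slope in area units per (mg/mL),
# baseline noise in area units, and analyte amounts in micrograms.
print(lod_loq(slope=7500.0, baseline_noise=0.9))
print(f"{recovery_percent(found=30.5, original=20.0, added=10.0):.1f}%")
```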
An empowering approach to promote the quality of life and self-management among type 2 diabetic patients Type 2 diabetes is one of the most serious health concerns and policy agendas around the world. Epidemiological evidence suggests that its prevalence will likely continue to increase globally. Diabetes is strongly associated with patients' unhealthy lifestyles and behavioral patterns and with socio-economic changes. A new model of thinking is required, one that recognizes that patients are in control of, and responsible for, the daily self-management of diabetes. Such an approach should be based on 'empowerment and involvement' to be more applicable to the daily activities of diabetic patients. The rapid shift toward patient empowerment and the increasing involvement of patients in their care plans reflect a greater emphasis on disease prevention, health promotion and education than on the disease itself and its treatment. These changes are a step toward a pervasive sense of responsibility among patients for the daily management of their illness. Using the empowerment approach, healthcare professionals help patients make informed decisions in accordance with their particular circumstances. Patient empowerment implies a patient-centered, collaborative approach that helps patients determine and develop the inherent capacity to be responsible for their own lives. Empowerment is more than certain health behaviors. Empowerment is more than an intervention, technique or strategy. It is rather a vision that helps people change their behavior and make decisions about their health care. It has the potential to improve the overall health and well-being of individuals and communities, and to change the socio-environmental factors that cause poor health conditions. The central concept underlying this change is the willingness to change. INTRODUCTION In the past, infectious diseases and malnutrition were the central elements on which national health policy was based. Although many low- and middle-income countries are still dealing with these issues, health care and immunization programs can tackle the problems to some extent. In many nations, on the other hand, rapid changes in nutritional lifestyles and declining physical activity have occurred, along with changes in the patterns of non-communicable diseases: diabetes, osteoporosis, cardiovascular disease, obesity and a large number of malignant diseases, to name a few. Developing countries are experiencing an epidemiologic transition and what has become known as the "new world syndrome," characterized by unhealthy nutritional patterns, sedentary lifestyles, consumption of junk food and increasing drug use. Consequently, these nations are prone to epidemics of non-communicable diseases in future years. Type 2 diabetes is one of those diseases. Adult diabetes is a major health problem in the world. The World Health Organization (WHO) has described diabetes as an overt epidemic strongly associated with patients' lifestyles and economic conditions. Given the rising prevalence statistics, WHO has called diabetes a covert epidemic and has called upon all countries worldwide to fight this disease. Diabetes prevalence is increasing worryingly worldwide. The total number of people with diabetes is projected to rise from 171 million in 2000 to 366 million in 2030. [1] Currently, there are more than 3 million diabetic patients in Iran, a number that will approach 7 million if the necessary measures are not taken.
According to the latest report by WHO, the world's adult population will increase by 65% from 1995 to 2025 and the prevalence of diabetes will rise from 4% to 5.4%, so the world's diabetes-affected population will increase by 123%. The major part of this increase will occur in developing countries. [2] According to statistics from different sources, estimates of diabetes prevalence in Iran vary. Azizi et al. [3] stated that the prevalence of adult diabetes has risen from 2% to 10%, while the Iran Ministry of Health and Medical Education [4] reported a diabetes prevalence of 2.3 percent. [5] Despite impressive achievements in controlling the disease, diabetes still causes premature death in patients. [6,7] This early death results largely from the aggravation of cardiovascular defects and other complications. In 1990, life expectancy was reduced by 0.22 years in women and 0.31 years in men with diabetes, and the negative effects of diabetes on life expectancy have been increasing sharply. [8] Diabetes is the fifth leading cause of death in western countries and the fourth most common reason for a doctor visit. [3] Approximately 15% of health care expenses in the United States are devoted to diabetes. [9] The death rate in diabetes is 1.5-2.5 times higher than that of the general population, and diabetes causes 75% of deaths in people under 35. Compared with the general population, people with diabetes, [10] particularly women, are 2-4 times more likely to die from cardiovascular diseases caused by diabetes. [11,12] Because diabetes is a chronic, non-communicable and costly disease, a high financial burden is borne by the patient, the family, society and the country. [13] According to an estimate based on the DALY (disability-adjusted life year) index, the burden of diabetes in Iran equaled 306,440 years in 2001, and this value is rising as diabetes becomes more common. [14] The chronic nature of diabetes greatly affects the patient's body, mental state and socio-personal functioning. Therefore, a careful evaluation of the patient's health and quality of life is of great importance. Diabetes, as a public health problem, poses a threat to patients' quality of life and causes chronic and acute complications. In many countries diabetes is also a major cause of disability and death. [15] Scientific evidence indicates that only a small proportion of the care for chronic diseases like diabetes is delivered by specialists, whereas much of the disease is managed by the patient and the family. [16] Self-management interventions produce positive changes in attitudes, expand relevant health knowledge and develop health skills in patients. [17] Lifestyle activities such as physical activity, nutrition and rest, controlling and monitoring blood sugar, interacting with specialists and with people who influence the patient, self-control activities and adherence to the therapeutic regimen are considered self-management variables. [18] Today, various choices and options are available in health care and treatment. With the increasing costs of health care, limited health care resources and changing disease patterns, many assessments of the effectiveness of different treatment strategies are carried out, and such assessments make the decision process difficult. This evaluation is given priority in the treatment of chronic diseases, particularly diabetes, because this disease can be controlled through self-management and the adoption of self-care behaviors.
[19] It seems, therefore, that comprehensive management of diabetes through education and disease management is effective in improving glycemic control. It is necessary for diabetic patients to learn self-monitoring of blood sugar. Blood sugar monitoring facilitates lifestyle change by providing a feedback mechanism on blood sugar control; the changes are made to improve health behaviors through physical activity and nutritional behavior. [20] Studies reveal that the type of treatment (insulin therapy) provided to diabetic patients affects their quality of life. Although the type of treatment is effective for the patient, it is important to pay meticulous attention to supportive care issues, which need full attention in all respects in order to improve metabolic control. [21] Mosaku et al. [22] pointed out in their study that depression is the most common mental disorder among these patients. Factors such as the patient's age, poor blood sugar control and the duration of disease can predict depression in diabetic patients. Factors like depression and anxiety are also associated with the patient's general well-being, and depression together with underlying diseases is a predictor of a patient's low quality of life. Over the past decades, the approach to diabetes education has changed and has strengthened motivation in both educators and patients; consequently, patients have enjoyed greater benefits. Fresh information on the importance of metabolic control, the exploration of new treatment strategies and developments in the technology for monitoring and measuring blood sugar were all factors that raised hope in patients. These factors decreased patients' dependence and increased diabetes self-management. Theory- and research-based education was also introduced to diabetes education and great attention was devoted to its value, and finally, educational standards were set for educators. [23] Although conventional education could sufficiently meet patients' knowledge requirements, awareness of the environmental and socio-psychological influences on patients' behavior led to the use of educational techniques aimed at changing behavior. The focus shifted from a "building capacity to adhere to the treatment" approach to a "self-effectiveness and self-management" approach. The educator-centered model was replaced by patient-educator interaction and a sharing of power between them, and the focus on "the lack of responsibility to build capacity for patients with poor adherence to treatment" shifted to "their participation in taking responsibility for their own health" through interaction with the educator. [24] Only the patient can judge which knowledge or behavior he or she has actually learned. [23] The present study aims to assess a dominant approach to the education of diabetic patients and the development of both management skills and quality of life. DISCUSSION The global shift toward the empowerment and involvement of patients in self-care reflects an emphasis on health, disease prevention and health care education rather than a mere focus on the disease and its treatment. This is a step toward developing the patient's sense of responsibility for his or her disease. In the past, treatment guidelines were presented in association with the medical model, and adherence to the treatment of chronic disease was a mandatory practice.
The communication strategies employed for this purpose were the main tools for managing the disease, but experience shows that such strategies are not effective enough, particularly for chronic diseases. People are empowered when they have the information necessary to make wise decisions, appropriate control over themselves and suitable conditions in which to act on a decision, and when they have enough experience to evaluate the efficacy of that decision. [25] The patient empowerment movement started in the early 1970s, at the same time as the patient rights charter was drawn up. The goal of patient empowerment is to build up the capacity of patients to help them become active partners in their own care, to enable them to share in clinical decision making, and to contribute to a wider perspective in the health care system. [26] Empowerment is a positive concept that refers to the patient's resources, abilities and surrounding environment. The concept was developed in order to identify problems and deficiencies and to intervene in them. It enables and empowers people, and it allows power and strength to pass from one person or group to another. [26] Power is an inner feeling of self-awareness and self-education. [27] Empowerment is both a process and an outcome. [28] Empowerment is achieved through interaction between people and fosters interpersonal and intrapersonal communication. [29] Empowerment was set as an achievable health goal for patients by 2010, whereby they try to improve their health through active participation and informed decisions. [30] Empowerment is a practical strategy for improving health. [31] Empowerment skills include problem solving, boosting self-confidence and creating strategies to build mutual trust. [32] Empowering a patient in health care means improving the patient's self-determination and self-regulation, so that people's potential for health and welfare is maximized. The empowerment process begins with providing the patient with information and education and ends when he or she can actively participate in making informed decisions about the disease. [33] In this model, health professionals help patients make informed decisions regarding their particular conditions. Patients are encouraged to fully participate in their treatment process by sharing their knowledge and experiences and by making decisions through mutual assistance. Empowerment discovers and expands one's inner capacity to accept responsibility for one's health. The central concept of this change is the willingness to change. Empowerment is more than certain health behaviors and develops the potential to improve overall health and well-being in people and communities. Empowerment is more than an intervention or a strategy to help people change their behavior in order to adhere to the treatment plan. Empowerment is a practical strategy for promoting health. [31] Craig and Lindsay define empowerment as a process through which people can take control of their condition. [32] Jones and Meleis describe the concept of empowerment as a "social process of recognizing, promoting, and enhancing people's abilities to meet their own needs, solve their own problems, and mobilize necessary resources to take control of their own lives." [28] In other words, patient empowerment is a process of helping people to assert control over the factors that affect their health.
Empowerment is also defined as a skill and the ability to participate. Empowerment skills cover issues such as problem-solving, self-confidence and strategies to develop trust. [32] Funnel et al. [25] define empowerment as improved self-concept; critical analysis of the world; and identification with members of a community participating in, organizing for, and carrying out environmental change. Based on their writings, "empowerment education" places people in a group effort, enables them to assess the social and historical roots of a problem, and allows them to envision a healthier society, thus empowering them to develop strategies to solve their problem. Such community or group participation enhances a person's belief in their ability to influence change in the personal and social realms. Empowerment education targets individual, group, and structural change. To empower individuals, the motivation and skills that enable them to advocate for social reforms must be developed. In this definition, empowerment includes prevention, as well as community connectedness, self-development, improved quality of life, and social justice. Funnel et al. also state that empowerment includes self-reliance, self-responsibility and self-care, although health behavior has been reported more often. [33] There is a strong and close link between empowerment and development in society. The WHO health promotion glossary distinguishes between individual empowerment and community empowerment. Individual empowerment refers primarily to individuals' ability to make decisions and have control over their personal lives. Community empowerment involves individuals acting collectively to gain greater influence and control over the determinants of health and the quality of life in their community. [25] Empowering outcomes include having positive self-esteem, having and achieving goals, gaining control over life and having a sense of hope for the future. [34] The empowerment process can be achieved through training and support. A range of options is available, including information sheets, multimedia programs, the use of information technology, and skill building such as a diabetes self-management program. The initial step in gaining respect and meeting patients' needs or preferences is to solicit their views and listen to what they say. Multiple studies have demonstrated that patients who are involved in decisions about their care and the management of their conditions have better outcomes than those who are not involved. [25] To build capacity and support adherence to the treatment program, various theories of learning and behavior, such as the health belief model, the socio-behavioral model, self-efficacy and empowerment, analyze information from the perspective of short-term and long-term results, based on the mechanisms by which the patient's psychosocial and environmental context affects his or her acceptance, capacity building and adherence to regimens. They also provide guidance for investigators in their efforts to develop patient diabetes education (PDE) approaches that fit better with human behavior. This would allow improved compliance and regimen adherence and, consequently, long-term diabetes control. [35] A model is a general plan that sets out the overall view of a subject; it clarifies the educators' view of what activities should be done.
[22] A model is an educational process that provides the necessary guidelines for educational assessment and intervention design and facilitates this process. Models are used to help people understand a particular problem and to organize information; they are often used to present a process and sometimes to explain it. Models provide health educators with a framework for the design, implementation and assessment of a program. Choosing a proper model in health education is the first step in designing an educational program. One of the theories frequently advocated in the literature as a useful model for PDE is patient empowerment. It has been suggested as a new approach for PDE, in order to cope with rapidly changing patterns of diabetes care and management, and to integrate its clinical, psychosocial and behavioral components with self-management education. This approach recognizes the nature of the actual experience of having diabetes and views the health care professional as a resource person/consultant. The purpose is to provide a combination of diabetes knowledge and self-management skills, together with heightened self-awareness regarding values, beliefs, needs, and goals, so that patients can use this power to make informed decisions about their behaviors and act for their self-care. Advocates believe that empowerment expands overall health status by affecting individuals' behavior and by drawing on personal and social resources. [23] PDE designed to empower patients to self-manage diabetes in the bio-psychosocial context has a very different goal than PDE designed simply to persuade patients to comply with treatment recommendations in order to improve their health status. Empowerment is based on mutual respect, which is the result of placing value on human life and building a patient-caretaker relationship. To empower, the PDE approach needs to be adapted to meet the patient's needs, and to reflect and express his or her lived experience with diabetes through recognition and promotion of individual strengths, informed choices, and personal goals. [23] Empowerment includes several underlying concepts that can be evaluated: perceived threat, knowledge, attitude, self-efficacy, skill, self-expectancy, health definition, motivation and self-confidence. [36] Perceived threat consists of two parts: perceived susceptibility and perceived severity. Perceived susceptibility is one's subjective perception of the harmful conditions that may result from specific behaviors; it has a cognitive dimension and depends on one's knowledge. To build perceived susceptibility, it is important to state the negative consequences and highlight the possible hazards for the patient; however, unrealistic fear or phobia should not be aroused. Perceived severity, one's belief about how serious a disease and its consequences are, has a strong cognitive component, which is dependent on one's knowledge. Different people have different perceptions of risk. Health educators need to build perceived severity by describing the serious negative consequences and personalizing them for the patient. One of the key concepts in empowerment is self-efficacy, which was defined by Albert Bandura. Self-efficacy has become a key variable in clinical, educational, social, developmental, health and personality psychology. Self-efficacy has been shown not only to match the disease with the treatment but also to affect health activities, and it has many uses in behavior change.
Bandura defines self-efficacy as the capacity perceived by an individual to successfully execute a given behavior. [37] Self-efficacy is a cognitive construct that weighs the demands of an instrumental behavior against personal abilities. Perceived self-efficacy is defined as people's beliefs about their capabilities to produce designated levels of performance that exercise influence over the events that affect their lives. Unless people believe they can produce desired effects by their actions, they have little incentive to act. Self-efficacy is the most important precondition for behavior change. There are four efficacy-enhancing strategies: (a) the client needs to feel successful in implementing new skills; (b) another strategy involves breaking down the overall tasks of behavior change into smaller, more manageable subtasks that can be addressed one at a time; (c) instead of focusing on a distant end goal, the client is encouraged to set smaller, more manageable goals; and (d) the therapist can also enhance self-efficacy by providing clients with positive feedback. Bandura points to four sources affecting self-efficacy: 1. mastery experiences, 2. social modeling, 3. social persuasion and 4. psychological responses. Moods, emotional states, physical reactions, and stress levels can all affect how a person feels about their personal abilities in a particular situation. Self-efficacy, or one's belief in the ability to perform a specific behavior, is a principal link between knowledge and action. Self-efficacy also affects the choice of behavior, the settings in which behaviors are performed, and the amount of effort and persistence to be spent on performance of a specific task. The level of self-efficacy in diabetic patients can be assessed through self-management behaviors and their consequences. [38] Self-esteem is a concept that follows self-efficacy in empowerment. Self-esteem is the degree to which one feels confirmation, verification, acceptance and value as a person. Self-esteem and self-efficacy are two primary components of the learning process; they are correlated and complementary, and there is a mutual relationship between them. Studies show that people who have low self-esteem and place a low value on themselves look after their health poorly and may encourage others to do the same. They experience desperation, depressive symptoms, poor eating habits, a sense of victimhood and an inability to improve their communication with others. Increasing self-esteem, and consequently improving self-efficacy, could therefore be of great importance in empowering diabetic patients. [39] There is a significant relationship between self-esteem and health behavior, and also between self-efficacy and one's view of one's own ability. Boosting self-esteem through group discussion can raise self-efficacy; therefore, one can expect that the adoption of preventive health behavior will be promoted following such a program. [36] Self-control is another concept of empowerment theory. An internal locus of control promotes one's sense of responsibility for one's behaviors, for if people take responsibility for their own health, they will try to change bad behaviors and adopt acceptable ones. People with low self-esteem have an external locus of control and people with high self-esteem have an internal locus of control. In this theory, self-control means that people's perceived severity develops once they acquire enough knowledge about their disease; with high self-esteem and a sufficient level of self-efficacy, they develop the skills to adopt preventive behavior.
Therefore, they reach self-control through cognition, decision-making, self-efficacy and a value system that stabilizes preventive health behavior. The empowerment approach is based on three key principles related to diabetes, its management and the psychology of behavior change. The principles are summarized below: • The reality of diabetes care is that more than 95% of that care is provided by the patient; therefore, the patient is the locus of control and decision-making in the daily treatment of diabetes • The primary mission of the health care team is to provide ongoing diabetes expertise, education and psychological support so that patients can make informed decisions about their daily diabetes self-management • Adults are much more likely to make and maintain behavior changes if those changes are personally meaningful and freely chosen. Key concepts of empowerment relevant to diabetes education are listed below: • Emphasis on the whole person: This approach takes into account the cognitive, biophysical, psychological and social aspects of a person. It assumes that the person's values, beliefs and opinions are to be respected and considered. In addition to providing information, the major contribution of the educators is to provide a trusting relationship in which patients feel valued, trusted and psychologically safe • Emphasis on personal strengths, rather than deficits: Each person has useful knowledge and there is value in each person's culture and ethnic tradition • Patient selection of learning needs: This helps to ensure the relevancy of the information presented and decreases the likelihood of so-called inert knowledge-that patients will know but still not be able to do • Setting of shared or negotiated goals: Treatment and behavior-change goals are mutually agreed upon. Behavioral strategies are not used as a way of getting patients to do what the educator wants, but rather as ways to help patients attain their personal blood glucose level, weight or other goals • Transference of leadership and decision making: Because diabetes education and care are currently delivered in an episodic way with limited follow-up, and because diabetes requires multiple daily decisions, persons with diabetes must assume responsibility for their care to ensure its adequacy • Self-generation of problems and solutions: Problems that are identified and solutions that are chosen by patients tend to be more relevant and meaningful because they are generated within the context of their life-styles, values, beliefs and support systems. The educator facilitates this process by helping patients to explore problems, express feelings, develop alternative options, consider the consequences of various options and come to appropriate decisions. The educator serves as a sounding board and a resource person • Analysis of failure as a problem to be solved rather than as a personal deficit: This approach helps patients maintain the long-term motivation needed for a lifelong illness • Discovery and enhancement of internal reinforcement for behavior change: One can expect more consistent, long-term adaptations when changes are internally motivated rather than externally imposed and reinforced by others • Promotion of escalating participation: As patients gain control over their diabetes through the acquisition of knowledge, problem-solving experience and negotiation skills, they are able to assume more and more responsibility for their own care.
This responsibility is gradually transferred to the patient through systematic education and support • Emphasis on support networks and resources: This philosophy assumes that, although most people have learned some behaviors that are barriers to health, they still have a fundamental drive for health and a desire to overcome barriers to optimal self-care. [25] There are two major challenges health care professionals often face in successfully implementing the empowerment approach to diabetes care. • The first challenge is the discomfort some health care professionals experience when discussing the emotional content of diabetes or a diabetes problem that a patient has identified. Having and caring for diabetes has a potent emotional component for most patients. Adults seldom make and sustain significant changes in their lives unless they feel a strong need to change. If the change process is to be successful, it is crucial for the health care professional to elicit the patient's feelings related to the issue. If the patient does not have strong feelings about the current situation, the likelihood of sustained behavior change is small. Health care professionals are not required to solve or change the patient's emotions but rather to create an environment in which the patient's emotional experience is validated and can be expressed freely • The second major challenge is the tendency of many health care professionals to solve problems for patients rather than with them. If a patient is clearly asking for technical expertise possessed by the health care professional, such behavior is appropriate. However, most of the problems involved in the daily treatment of diabetes are more psychological than technical. The process of helping patients discover their capacity to solve their own problems reinforces their self-efficacy and personal responsibility for the treatment of their diabetes. [35] There are also challenges that patients may need to face to successfully implement this approach to diabetes care. Many patients were so blamed or criticized in the past for their efforts at diabetes self-management that they became reluctant to visit health care professionals. Openly discussing their daily efforts related to diabetes care, expressing any disagreement with health care professionals and asserting their own needs or values related to the treatment of their diabetes all require the patient to participate actively in the process of his or her own care. Effective diabetes care requires new roles for both health care professionals and patients. By creating a collaborative relationship, both the health care provider and the patient can find themselves in a satisfying partnership that results in improved glycemic control for the patient and an enhanced sense of self-efficacy and a level of satisfaction with care for both parties. CONCLUSION Considering the rapid spread of diabetes in developed and developing countries and the chronic nature of the disease, the evidence shows that interventions based on self-management information produce positive changes in beliefs, expand diabetes-related health knowledge and develop health care skills. [17] Enhancing self-management behaviors is discussed as a bridge toward well-being and quality of life for diabetic patients. Five general principles of self-management education should be pointed out in this regard. Diabetes education is effective in improving clinical results and achieving a better quality of life, at least in the short term.
[40][41][42][43][44][45][46] Diabetes self-management education programs have shifted from traditional approaches to empowerment-based models. [43][44][45][46][47] Since many factors are involved in choosing an educational approach, there is no perfect program or approach; they change according to the patient's needs and goals. In addition, group education is effective. [40,44,[48][49][50] Within the educational program, continuous support is crucial to sustain the changes achieved by participants. [43][44][45][46][47][48][49][50][51][52][53] Setting behavioral objectives is a fundamental strategy in supporting self-management behavior. [54] Empowerment has been discussed as a dominant approach to supporting patients with chronic disease, particularly type 2 diabetes. It is hoped that it will be possible to shift from the traditional approach to the empowerment approach in dealing with patients with chronic disease by building capacity to strengthen their skills, competencies and abilities, so that they can enhance the quality of their lives.
Barriers and facilitators to use of non-pharmacological treatments in chronic pain Background Consensus guidelines recommend multi-modal chronic pain treatment with increased uptake of non-pharmacological pain treatment modalities (NPMs). We aimed to identify the barriers and facilitators to uptake of evidence-based NPMs from the perspectives of patients, nurses and primary care providers (PCPs). Methods We convened eight separate groups and engaged each in a Nominal Group Technique (NGT) in which participants: (1) created an individual list of barriers (and, in a subsequent round, facilitators) to uptake of NPMs; (2) compiled a group list from the individual lists; and (3) anonymously voted on the top three most important barriers and facilitators. In a separate process, research staff reviewed each group’s responses and categorized them based on staff consensus. Results Overall, 26 patients (14 women) with chronic pain participated; their mean age was 55. Overall, 14 nurses and 12 PCPs participated. Seven healthcare professionals were men and 19 were women; the mean age was 45. We categorized barriers and facilitators as related to access, patient-provider interaction, treatment beliefs and support. Top-ranked patient-reported barriers included high cost, transportation problems and low motivation, while top-ranked facilitators included availability of a wider array of NPMs and a team-based approach that included follow-up. Top-ranked provider-reported barriers included inability to promote NPMs once opioid therapy was started and patient skepticism about efficacy of NPMs, while top-ranked facilitators included promotion of a facility-wide treatment philosophy and increased patient knowledge about risks and benefits of NPMs. Conclusions In a multi-stakeholder qualitative study using NGT, we found a diverse array of potentially modifiable barriers and facilitators to NPM uptake that may serve as important targets for program development. Background The landmark 2011 Institute of Medicine report on pain care in the U.S. highlighted that multimodal, biopsychosocially-oriented treatment that promotes patients' self-management skills is the optimal paradigm for improving the effectiveness of chronic pain treatment [1]. However, the report noted that chronic pain treatment is frequently solely pharmacologic and excludes evidence-based non-pharmacological pain treatment modalities (NPMs) [2]. With mounting evidence that treatment relying solely on pharmacotherapy is often unsafe and/or ineffective in chronic pain treatment, consensus recommendations increasingly promote a multi-modal treatment strategy [1,3,4]. This strategy seeks to shift the clinical paradigm away from heavy reliance on medications to a treatment approach that incorporates a diverse array of NPMs targeting the complex nature of chronic pain and promotes patient self-management [5,6]. Because of the widespread prevalence of chronic pain and the major impact it has on quality of life, integrated health systems such as Kaiser Permanente and the Veterans Health Administration (VHA) have sought to make multi-modal pain care widely available [19], even establishing virtual treatment networks relying on telehealth to deliver some NPMs to remote areas [20]. Despite these efforts, at some centers, NPM utilization remains relatively low [21]. 
In response, the Institute of Medicine, and more recently the Department of Health and Human Services, called for a comprehensive examination of barriers "to help close the gap between empirical evidence regarding the efficacy of pain treatments and current practice." [1] In an effort to identify such barriers and facilitators to ultimately inform the design of effective strategies for health systems to increase utilization of NPMs, we studied the perspectives of two stakeholder groups: patients with chronic pain and healthcare professionals (nurses and primary care providers (PCPs)). While other studies have examined qualitative factors related to pain management from patient [22,23] and provider perspectives [24][25][26], our study is novel in its focus on non-pharmacological treatments, its simultaneous examination of patient and provider perspectives, its use of the nominal group technique (described below), and its inclusion of nurses, whose role in delivering multimodal, team-based pain care is essential. Overview Because our aim was more to identify themes than to interpret perspectives, we employed qualitative description methodology with thematic analysis to study the question: what are the consensus-based most important barriers and facilitators to greater uptake of NPMs for chronic pain? To obtain data for the study, we convened eight separate groups of participants and engaged each in a Nominal Group Technique (NGT) process. The NGT process, described in detail below, allows researchers to generate consensus among stakeholders regarding answers to focused questions [27,28]. We viewed NGT as an attractive data gathering approach for a number of reasons: 1) it encourages balanced participation among participants; 2) it offers the opportunity to generate a breadth of factors, albeit potentially sacrificing depth; and 3) it brings with it group consensus and closure that may be lacking in other group methods [29]. Participants Most of the included patients were recruited by their PCPs as having previously shared opinions on pain treatment; a small minority were recruited through a flyer. Patients were age 70 or younger with an average numeric pain rating scale score of 4 or higher on most days of the past month, as ascertained on phone screening by a research assistant. Patients needed to be fluent in English and cognitively intact, ascertained in the screening phone call; psychiatrically and medically stable, determined by the absence of inpatient admissions in the prior 30 days; and free of cancer diagnoses, as documented by electronic health record review. Patients received $20 for participation. We recruited nurses and PCPs at staff meetings and via email, as PCPs and primary care nurses in VA settings are the first line of treatment for chronic pain. They are also responsible for the vast majority of referrals to NPM services. There were no eligibility criteria for healthcare professionals beyond employment in the setting and direct provision of patient care. Those who agreed to participate generally considered pain treatment an important topic for discussion. Study design and NGT description We convened eight separate nominal groups: four patient groups (two with women only and two with men only), two nurse groups and two PCP groups. We separated patient groups by sex to ensure comfort participating in groups; at our center, as well as other VHA medical centers, women have separate primary care clinics and we sought to achieve a similar environment.
Each group, consisting of the recommended 5-9 subjects [30], participated in an NGT session facilitated by members of the research team (authors WCB, LD and LI). After providing a brief background of the study and an explanation of terms and procedures to orient participants, we asked participants about barriers to NPM uptake as follows: "What are some barriers to patients using non-pharmacologic pain treatments? In other words, why don't some patients use these kinds of treatments or what makes it harder for patients to use them?" We showed all groups a picture card depicting five specific NPMs and briefly described each one: physical therapy, cognitive behavioral therapy, yoga, chiropractic and mindfulness-based stress reduction. After completing the full NGT process described below and a 1-2 min break, we then asked about facilitators as follows: "What are some of the things that make it more likely for patients to use non-pharmacologic pain treatments? What makes it easier to use these kinds of treatments?" In each round, we asked participants to silently write down as many responses to the question as possible in five minutes. After this, a researcher asked each participant to read one answer aloud in a round robin fashion, while another researcher wrote the responses on a flip chart without discussion or editorializing. Once all answers were on the flip chart, we engaged the group in discussion for the purposes of clarifying any of the responses, editing as necessary, and consolidating very similar or identical answers, as judged by the participants. Once the final list of consolidated answers was complete for "barriers," each participant anonymously voted on the most important, 2nd most important and 3rd most important response by writing her or his votes on a note card. The same sequence of processes was repeated for "facilitators." We did not instruct participants to link facilitators to the barriers they provided in the first round. Data analysis We collected basic demographic information on participants to provide a description of the sample. Following standard methodology [31], for each nominal group, we tallied voting points for each of the barriers and facilitators identified: "most important" votes received 3 points; "2nd most important" received two points and "3rd most important" received one point. The tallied points allowed for a group-level ranking of responses; a minimal sketch of this tally appears below. To facilitate comparisons across groups and interpretation of the findings, the research team used thematic analysis to categorize responses based on our consensus interpretation of their meaning. Several individual nominal groups listed over 20 barriers and facilitators, but because there was striking similarity in responses between similarly composed groups, we combined the responses of the women's groups, the men's groups, the nurse groups and the PCP groups. Participant characteristics Overall, data were collected from 52 participants: 26 patients and 26 healthcare professionals. Demographic and clinical data on the participants are presented in Table 1. Response categories We identified five categories of responses present across participant groups: (1) access; (2) awareness or knowledge; (3) patient-provider interaction; (4) treatment beliefs; and (5) support. Table 2 displays barriers and facilitators, by category, highlighting the highest rated factors by participant groups.
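As a minimal sketch of the weighted tally described in the Data analysis section, the following Python snippet converts each participant's three ranked choices into points and produces a group-level ranking. The ballots and response labels here are hypothetical illustrations, not actual study data.

```python
from collections import Counter

# Weights from the Data analysis section: 1st choice = 3 points,
# 2nd choice = 2 points, 3rd choice = 1 point.
WEIGHTS = (3, 2, 1)

def rank_responses(ballots):
    """ballots: list of (first, second, third) response labels from one group.
    Returns responses ordered by total weighted points."""
    points = Counter()
    for ballot in ballots:
        for response, weight in zip(ballot, WEIGHTS):
            points[response] += weight
    return points.most_common()

# Hypothetical ballots from one nominal group (labels are invented).
ballots = [
    ("cost", "transportation", "motivation"),
    ("transportation", "cost", "skepticism"),
    ("cost", "motivation", "transportation"),
]
print(rank_responses(ballots))
```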
Barriers Several barriers related to access were identified, including factors related to transportation, scheduling, out-of-pocket costs, and resources. Patients rated distance to travel, high cost of treatment, and lack of availability of some NPMs as important access barriers; each of these barriers was also mentioned by providers. The highest rated access barrier by providers was the travel required of patients. Regarding barriers related to NPM awareness or knowledge, providers acknowledged that both patients and providers are unsure of what some NPMs entail or of the rationale for NPMs. Patients agreed that a lack of knowledge about the rationale for treatment was a barrier. Further, both patients and providers reported being unaware of what NPMs are available. A range of barriers related to patient-provider interactions was identified. Provider groups mentioned patient trust in PCPs, patient perceptions of NPM providers, and personal preferences regarding treatment modalities as barriers. The most important patient-identified barrier in this domain was patients' lack of motivation. Beliefs about treatment were identified as barriers to NPM use. Medication-related beliefs, such as the patient perception that medications are more effective, were identified as among the most important barriers reported by providers. Skepticism about the efficacy of NPMs by both patients and providers was identified as a barrier, including patient and provider beliefs that NPMs are not effective, that NPMs will fail, or that NPMs are substandard treatment (i.e., as compared to medications). The burden of NPMs, including the long course of treatment, the time commitment required, and the perceived pain or stress that may accompany engagement in NPMs, was also identified, as were concerns about the potential harm of NPMs (e.g. worsening pain, exacerbation of health problems). Finally, support (or lack thereof) from the healthcare system and from one's social support system was identified as a potential barrier. For example, providers noted a lack of positive influence from family or friends, while patients noted a lack of patient support from families and from doctors. Facilitators After reviewing barriers, participants were asked to brainstorm potential facilitators that may help patients engage in NPMs. Regarding access, having NPM sessions closer to home (or in the home) was rated as highly important by both patients and providers. Other facilitators included having a wider variety of NPMs readily available and ensuring timely and easy access to NPMs. To enhance awareness and knowledge of NPMs, patients highly rated the need for better explanations of what to expect and a better rationale for NPM treatments; similarly, providers rated increased patient knowledge and better advertising of the services offered as important. In addition to patient education, both groups mentioned educating providers about the evidence for and availability of NPMs. Patients identified several facilitators related to patient-provider interaction, including provider empathy, respect for patients' preferences and open communication. Providers identified empathy and compassion, as well as shared decision making, as potential facilitators. Similarly, within the domain of support, patients noted that support and encouragement from the medical team was an important facilitator.
Providers identified several facilitators related to treatment beliefs, including patient belief in the efficacy of NPM treatment and the belief that NPM recommendations are part of a standard protocol. Patients and providers both noted that reinforcing positive NPM-related beliefs, such as the belief that NPMs can be effective and may have fewer adverse side effects than medications, was important. Discussion This multi-stakeholder qualitative study on barriers and facilitators to use of NPMs for chronic pain elicited a wide array of patient-, provider- and systems-related factors that likely contribute to use and non-use of these evidence-based treatments. These factors, which we categorized as related to access, patient-provider interaction, treatment beliefs, and support, represent a number of important targets for implementation efforts locally and may generalize to other populations and health systems. Overall, patients and providers identified very similar barriers and generated many of the same facilitators, demonstrating consistency in beliefs about NPM use among the various stakeholder groups. As chronic pain is a highly prevalent and costly condition, we focus below on potential interventions, discussed by category, that have broad applicability and relevance. Particularly from the patients' perspectives, barriers related to access to NPMs-transportation, cost, scheduling and resources-were especially prominent. VHA's systemwide access challenges have been in the spotlight recently [32]; some of the proposed solutions to these broader access issues were echoed in this study. For example, subcontracting pain treatment services to private facilities closer to patients' homes may leverage transportation cost savings to offset increased treatment expense for the health system. Technology may also play a role in enhancing access as web-based and telehealth platforms for delivering NPMs such as cognitive behavioral therapy continue to advance [33]. In-person treatments using group formats-for example yoga [34], structured exercise classes, chronic pain schools, and mindfulness-based stress reduction groups [26]-can expand treatment capacity. Offering group sessions outside of typical work hours may be another effective approach to expanding access. Awareness- and knowledge-related factors as well as treatment belief-related factors revealed a number of fundamental issues that, considered together, suggested a need for a broad-based, multi-pronged implementation strategy. A primary concern was the perception that referring providers and patients alike are skeptical about NPMs, do not understand the rationale for NPMs, and do not know what many NPMs entail. Academic detailing, in which providers are educated about treatment strategies [35], could be one approach to enhancing education; however, our findings suggest that targeting provider education alone would not be sufficient since patients' attitudes and preferences were also identified as barriers, suggesting that provider and staff training in communication and in educating patients about the multimodal pain treatment philosophy is needed. Furthermore, patients' and providers' lack of awareness of NPMs' availability suggested the need for advertising campaigns, perhaps using novel methods such as social media. A broad-based promotion of the multimodal treatment paradigm, reflecting an institutional belief in and commitment to the treatment philosophy, may help support culture change.
Australia's "Back Pain: Don't Take It Lying Down" campaign is one such successful example [36]. Several inaccurate but commonly held treatment beliefs (for example, that NPMs cannot or should not be used if patients are experiencing stress or other significant medical issues) could be specific targets for motivational enhancement and educational messages. Embedded in addressing treatment beliefs and increasing knowledge and awareness of NPMs is the need to improve patient-provider interactions. Distrust in providers, the belief that referral to NPMs occurs because pain is not believed, and patients' lack of motivation to engage in NPMs all suggest that training providers in more effective communication is important. The use of motivational interviewing strategies, as well as other pain communication strategies such as validation [37], is needed to help providers more effectively engage with patients with chronic pain. Similarly, lack of support from medical providers, peers, friends, and family was identified as a potential barrier to NPM utilization, suggesting that support is needed for successful engagement in NPM treatments. Indeed, encouragement from the medical team was identified as one of the most important facilitators of NPM engagement. It is also clear from our findings that, while we explicitly focused on NPMs, educational and clinical interventions must consider the role of pharmacologic treatments, especially opioids, when educating patients and providers about pain. Primary care providers, most often the prescribers of opioids for chronic pain in VHA, expressed frustration about the lack of tools for communicating with patients receiving opioids about the importance of NPMs and the lack of support in follow-up for patients around this issue. Scripted messaging from providers about the relative efficacy of NPMs compared to opioids (that, in fact, NPMs show at least equivalent and perhaps superior benefit) may increase acceptance of NPMs. Also, expert recommendations strongly support NPMs in conjunction with long-term opioid therapy [3], suggesting engagement with NPMs could be considered a pre-requisite for ongoing opioid therapy as part of treatment agreements. Evidence-based collaborative care models [38,39], in which nurses or other midlevel providers follow up on multimodal pain treatment plans to assess barriers to adherence and enhance patient motivation, may be critical to re-distributing workload away from PCPs and improving quality of care. These models are also consistent with patient reports that continued encouragement from their care team would be a significant facilitator of NPM engagement. This last point also relates to the factor of support, another recurring theme in our data. Besides care management and structured follow-up, our findings suggested peer and family support interventions as other potentially effective strategies, models well-supported in the treatment of other chronic conditions [40,41]. Strengths of the study included the large and diverse group of participants from two stakeholder groups, increasing the likelihood that important barriers and facilitators were not missed. The NGT methodology itself also contributed to this strength, since it encourages active involvement of all participants. This study has limitations. The nominal groups were performed in one integrated health system with a relatively robust array of NPMs; the barriers and facilitators identified may not be generalizable to other settings.
Furthermore, many patients reported some use of NPMs in the recent past; a different sample of patients with less experience with NPMs may have identified different kinds of barriers and facilitators. Also, because the drawbacks of long-term opioid therapy and calls for a renewed focus on NPMs have dominated discourse in the U.S., we asked participants to consider all NPMs as a singular group; however, asking participants to consider different kinds of NPMs as a homogeneous group may have obscured important barriers and facilitators to uptake of specific NPMs. Finally, participants' responses to the study questions may have been subject to bias, including social desirability bias. To mitigate this possibility, participants were asked to consider not only their own perspectives, but also the perspectives of others they may have heard about. Conclusions In this large qualitative study of barriers and facilitators to use of NPMs for chronic pain, the inclusion of multiple stakeholder groups led to a robust array of factors that could serve as targets for developing interventions aimed at improving uptake of these evidence-based treatments.
2017-08-08T19:36:58.054Z
2017-03-20T00:00:00.000
{ "year": 2017, "sha1": "9609b96104026325a77e007eb698a643231004b0", "oa_license": "CCBY", "oa_url": "https://bmcfampract.biomedcentral.com/track/pdf/10.1186/s12875-017-0608-2", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "9609b96104026325a77e007eb698a643231004b0", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
227119082
pes2o/s2orc
v3-fos-license
Effects of the Background Turbulence on the Relaxation of Ion Temperature Anisotropy in Space Plasmas Turbulence in space plasmas usually exhibits two distinct regimes separated by a spectral break that divides the inertial and kinetic ranges, in which wave-wave or wave-particle interactions dominate, respectively. Large-scale fluctuations are dominated by MHD non-linear wave-wave interactions following a -5/3 or -3/2 slope power-law spectrum, which suddenly ends; after the break the spectrum follows a steeper power-law $k^{-\alpha}$ shape given by a spectral index $\alpha>5/3$. Despite its ubiquity, few studies have considered the possible effects of a turbulent background spectrum on the quasilinear relaxation of solar wind temperatures. In this work, a quasilinear kinetic theory is used to study the evolution of the proton temperatures in a solar wind-like plasma composed of cold electrons and bi-Maxwellian protons, in which electromagnetic waves propagate along a background magnetic field. Four wave spectrum shapes are compared with different levels of wave intensity. We show that a sufficient turbulent magnetic power can drive stable protons to transverse heating, resulting in an increase in the temperature anisotropy and the reduction of the parallel proton beta. Thus, a stable proton velocity distribution can evolve in such a way as to develop kinetic instabilities. This may explain why the constituents of the solar wind can be observed far from thermodynamic equilibrium and near the instability thresholds. INTRODUCTION In many space environments the medium is filled with a weakly collisional, tenuous plasma. Although Coulomb collisions represent an efficient mechanism for relaxing plasma populations towards a thermodynamic equilibrium state in which the particle Velocity Distribution Functions (VDFs) achieve a Maxwellian profile [1,2], when collisions are scarce Coulomb scattering becomes ineffective in establishing equilibrium. Subsequently, kinetic collisionless processes may dominate the dynamics of the system and be responsible for many of the observed macroscopic and microscopic properties of the plasma. Under these conditions the plasma VDF usually develops non-Maxwellian characteristics that can provide the necessary free energy to excite micro-instabilities that can subsequently induce changes in the macroscopic properties of the plasma. Among the fundamental problems of plasma physics is the understanding of the excitation and relaxation processes of these weakly collisional plasmas and the resultant state of nearly equipartitioned energy density between plasma particles and electromagnetic turbulence [3]. In particular, these processes play an important role in space plasma environments such as the solar wind [4,5,6,7] and the Earth's magnetosphere [8,9,10], especially at kinetic scales [11,12,13]. It is well known that in space plasmas turbulence usually exhibits two distinct regimes separated by a spectral break that divides the scales at which wave-wave or wave-particle interactions dominate, namely the inertial range (at larger scales) and the kinetic range (at ionic and sub-ionic scales) [14,7]. Large-scale fluctuations are dominated by MHD non-linear wave-wave interactions following a -5/3 or -3/2 slope power-law spectrum, which suddenly ends; after the break the spectrum follows a steeper power-law $k^{-\alpha}$ shape given by a spectral index $\alpha > 5/3$.
The break is related to the scales at which kinetic effects and wave-particle interactions become dominant, and depending on the local plasma conditions the break can coincide with the ion inertial length or gyroradius [15,16]. Also, different plasma environments can exhibit different spectral indices, as reported, for example, by Moya et al. from Van Allen Probes observations. In a magnetized plasma such as the solar wind or the Earth's magnetosphere, one of the most typical deviations from the Maxwellian equilibrium is the bi-Maxwellian distribution, representing a composed Maxwellian VDF that exhibits different thermal spreads (different temperatures) in the directions along and perpendicular to the background magnetic field. These distributions are susceptible to temperature-anisotropy-driven kinetic micro-instabilities that can effectively reduce the anisotropy and relax the plasma towards more isotropic states. However, in the absence of enough collisions, these instabilities are usually not able to lead the system to thermodynamic equilibrium, and the plasma allows a certain level of anisotropy up to the so-called kinetic instability thresholds [11,20]. From the theoretical kinetic plasma physics point of view, on the basis of the linear and quasilinear approximation of the dynamics of the plasma, it is possible to predict the thresholds in the temperature anisotropy and plasma beta parameter space that separate the stable and unstable regimes, and how the plasma evolves towards such states. These models are useful for studying the generation and first saturation of the electromagnetic energy at the expense of the free energy carried by the plasma. To do so, quasilinear calculations generally consider initial conditions with a small level of magnetic field energy that grows as the temperature anisotropy relaxes. A comprehensive review of linear and quasilinear analyses of these instabilities considering a bi-Maxwellian model can be found in Yoon [21] and references therein. Since the first studies by Weibel [22] and Sagdeev and Shafranov [23], temperature-anisotropy-driven modes and the stability of the plasma have been widely studied over recent decades, and they represent an important topic for space plasma physics [24,25,26,27,28]. Predictions based on a bi-Maxwellian description of the plasma are qualitatively in good agreement with observations of solar wind protons (see e.g. Hellinger and Trávníček [29], Bale et al. [6]) and electrons (see e.g. Hellinger et al. [30], Adrian et al. [31]). However, as turbulence is ubiquitous in space environments (see e.g. Bruno and Carbone [7]), all these relaxation processes should occur in the presence of a background turbulent magnetic fluctuation spectrum. To the best of our knowledge, only a few quasilinear studies, such as Moya et al. [32] or Moya et al. [28], have considered a background spectrum, but a study focused on the possible effects of a magnetic field background spectrum is yet to be done. Here we perform such a systematic study by computing the quasilinear relaxation of the ion-cyclotron temperature anisotropy instability, considering different choices of the initial level of the magnetic field fluctuations and of the shape of the spectrum. We analyze their effect on the relaxation of the instability and on the time evolution of the macroscopic properties of the plasma that are involved. In the next section we present the linear and quasilinear framework of our model.
Then, in Sections 3 and 4 we show and discuss all our numerical results. Finally, in the last section we summarize our findings and present the main conclusions of our work. QUASILINEAR TEMPERATURE EVOLUTION We consider a magnetized plasma composed of bi-Maxwellian protons and cold electrons. The kinetic dispersion relation of left-handed circularly polarized waves propagating along a background magnetic field B_0 is given by [33,32,34,35,36]

$$\left(\frac{v_A k}{\Omega_p}\right)^{2} = A - \frac{\omega_k}{\Omega_p} + \left(\xi + A\,\xi^{-}\right) Z\!\left(\xi^{-}\right), \qquad (1)$$

where ω_k = ω + iγ is the complex frequency that depends on the wavenumber k; v_A = B_0/√(4π n_p m_p) is the Alfvén speed, with n_p and m_p the density and mass of protons, respectively; Ω_p = eB_0/(m_p c) is the proton gyrofrequency, with c the speed of light; A = R − 1, where R = T_⊥/T_∥ is the temperature anisotropy; T_⊥ and T_∥ are the proton temperatures perpendicular and parallel to B_0, respectively; ξ = ω_k/(k u_∥) and ξ⁻ = (ω_k − Ω_p)/(k u_∥) are resonance factors [37]; u_∥ = √(2 k_B T_∥/m_p) is the parallel proton thermal speed, and k_B is the Boltzmann constant. Z(ξ) is the plasma dispersion function [38], which is calculated numerically with the Faddeeva function provided by scipy. We also define the parallel proton beta, β_∥ = u_∥²/v_A². In Eq. (1) we have assumed charge neutrality (i.e. zero net charge, such that the electron density n_e is equal to the proton density) and v_A/c ≪ 1. Numerical roots of Eq. (1) are calculated through Muller's method [39] using our own Python code. The dispersion relation Eq. (1) supports an infinite number of solutions ω_k for each value of k, most of them being sound-like, heavily damped modes with frequencies above and below the proton gyrofrequency [40,41]. Here, we focus on the quasilinear evolution of the plasma due to Alfvén-Cyclotron Wave (ACW) instabilities. Figure 1(b) shows the effect of the anisotropy on the ACW frequency for β_∥ = 0.1. In all cases, this solution seems to approach ω = Ω_p asymptotically at large wavenumbers. This behavior is very similar to the solutions of the dispersion relation Eq. (1) in the cold-plasma approximation. However, for β_∥ > 0.01 [panel (a)] or for large temperature anisotropies [panel (b)], the frequency curve deviates from the cold-plasma approximation for wavelengths around the proton inertial length v_A/Ω_p. Kinetic effects can damp ACWs of large wavenumbers even at low beta, and large temperature anisotropies can drive the wave unstable. Figures 1(c) and (d) show the imaginary part of the frequency for the same parameters as in Fig. 1(a) and (b), respectively. Whether the wave is damped or amplified depends on its frequency. It can be seen in Fig. 1(c) that the ACW is marginally stable (γ = 0) at a fixed wavenumber value v_A k/Ω_p ≈ 2.27, at fixed R = T_⊥/T_∥, for all values of β_∥. It can be shown from Eq. (1) that this happens at (v_A k/Ω_p)² = (R − 1)²/R [42]. Thus, for a lower value of the temperature anisotropy the waves are marginally stable at lower wavenumbers, as seen in Fig. 1(d), and the damping becomes stronger as the anisotropy approaches R = 1. Also, the instability weakens both with lower β_∥ and lower T_⊥/T_∥. It is important to mention that a semi-cold approximation of the plasma (|ξ⁻| ≫ 1) fails to describe these properties, making it inappropriate for the quasilinear evolution of the plasma temperature. The quasilinear approximation assumes that the macroscopic parameters of the plasma evolve adiabatically, thus ω_k solves the dispersion relation Eq. (1) instantaneously at all times.
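As a concrete illustration of the numerical procedure described above, the following Python sketch evaluates Z(ξ) through the scipy Faddeeva function and traces the ACW root of Eq. (1) over a wavenumber grid. It assumes the normalization ω → ω/Ω_p and k → v_A k/Ω_p and takes Eq. (1) in the form reconstructed above; a simple complex secant iteration stands in for the Muller solver used in the paper, and the function names and seed values are illustrative choices rather than the authors' implementation.

```python
import numpy as np
from scipy.special import wofz

def Z(zeta):
    """Plasma dispersion function, Z(zeta) = i*sqrt(pi)*w(zeta)."""
    return 1j * np.sqrt(np.pi) * wofz(zeta)

def dispersion(omega, k, beta_par=0.1, R=2.0):
    """Normalized dispersion function; D = 0 recovers Eq. (1).

    omega is in units of Omega_p and k in units of Omega_p/v_A, so the
    parallel thermal speed is u = sqrt(beta_par) in units of v_A.
    """
    A = R - 1.0
    u = np.sqrt(beta_par)
    xi = omega / (k * u)
    xim = (omega - 1.0) / (k * u)      # resonance factor (omega - Omega_p)/(k u)
    return k**2 - (A - omega + (xi + A * xim) * Z(xim))

def solve_root(k, w0, w1, **params):
    """Complex secant iteration; a simple stand-in for Muller's method."""
    f0, f1 = dispersion(w0, k, **params), dispersion(w1, k, **params)
    for _ in range(100):
        w2 = w1 - f1 * (w1 - w0) / (f1 - f0)
        if abs(w2 - w1) < 1e-12:
            return w2
        w0, f0, w1, f1 = w1, f1, w2, dispersion(w2, k, **params)
    return w1

# Trace the ACW branch over a wavenumber grid, seeding each root with the
# previous solution (the cold-plasma limit omega ~ k at small k).
ks = np.linspace(0.05, 3.0, 60)
roots, w = [], 0.05 + 0.001j
for k in ks:
    w = solve_root(k, w, w * 1.01, beta_par=0.1, R=2.0)
    roots.append(w)
```

Tracking the real and imaginary parts of the roots stored in `roots` reproduces the qualitative behavior described for Fig. 1: the real frequency approaches Ω_p at large k, while γ changes sign near (v_A k/Ω_p)² = (R − 1)²/R.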
The quasilinear evolution of the perpendicular and parallel thermal speeds is given by Eqs. (2) and (3) [33,32], where L is the characteristic length of the plasma; |B_k|² is the spectral wave energy, which satisfies ∂|B_k|²/∂t = 2 γ_k |B_k|² (Eq. (4)), such that Eqs. (1)-(4) form a closed system to address the quasilinear evolution of the ACW instability. In the next sections we explore the effects of the B_k spectrum on the relaxation of the proton anisotropy. NUMERICAL RESULTS. THE EFFECT OF A BACKGROUND SPECTRUM In order to solve numerically the system of differential equations given by Eqs. (1)-(4) we use a fourth-order Runge-Kutta method. First, we compare the quasilinear relaxation for three different initial background spectra, namely a uniform noise |B_k(0)|² = const., a Gaussian spectrum, and a Lorentzian spectrum, where in each case the normalization constant is adjusted depending on the initial total magnetic energy W_B(0). Since the protons are initially stable, we should expect that the temperatures will remain almost constant in time [41]. However, a striking feature for all the spectrum shapes is that the anisotropy can grow in time if a sufficient level of magnetic energy is provided. For a uniform spectrum of total level W_B(0) = 0.1 (blue lines in Fig. 2), the anisotropy can grow up to high values, R ≈ 14, in a small time frame. This results in a sharp increase in the perpendicular beta from β_⊥ = 0.01 to ≈ 0.014 and, consistently, a rapid fall of the parallel beta from β_∥ = 0.005 towards ≈ 0.001. Afterwards, the anisotropy decreases while β_∥ rises, both steadily, towards a quasi-stationary state around R ≈ 7 and β_∥ ≈ 0.002. We note that this anisotropy growth is not as explosive for a Lorentzian (with α = 5/3, right column in Fig. 2) and a Gaussian spectrum (middle column) compared to a uniform spectrum, although they all relax to a final state around the same temperature anisotropy. This shows that high levels of the power spectrum may play a role in the regulation of the temperature anisotropies observed in different plasma environments. These results also suggest a possible mechanism to push the plasma towards the marginal stability thresholds starting from a linearly stable plasma. The growth of the anisotropy is limited for smaller values of the magnetic field intensity, e.g. W_B(0) = 0.01 (orange lines in Fig. 2) and W_B(0) = 0.001 (green lines). As W_B(0) is lowered to noise levels, W_B(0) < 10⁻⁵ (not shown), the anisotropy and other parameters remain constant, which is consistent with the fact that the plasma is in an equilibrium state for β_∥ = 0.005, R = 2, and low levels of magnetic energy. In all cases, we observe that the total magnetic energy decreases monotonically, meaning that the quasilinear approximation is valid throughout the simulation runs. Figure 3 shows the time evolution of β_∥, T_⊥/T_∥, and W_B. A set of numerical simulations with evenly spaced initial conditions was chosen in the range 0.005 ≤ β_∥ ≤ 0.5 and 2 ≤ T_⊥/T_∥ ≤ 7, with a uniform magnetic wave spectrum of power W_B(0) = 0.1 for all cases. Following the same trend as observed in Fig. 2, the magnetic energy always decreases monotonically from W_B(0) = 0.1 to values between 0.03 < W_B(t_f) < 0.07. In all cases, β_∥ drops rapidly while the temperature anisotropy increases to high values above the stability thresholds. Afterwards, the magnetic wave power is not enough to supply energy to protons, so that T_⊥/T_∥ slowly relaxes towards values where the maximum growth rate of the proton-cyclotron instability is of the order of γ/Ω_p ≈ 0.001.
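As an illustration of how initial conditions such as those used above can be prepared, the sketch below builds the three background spectra on a wavenumber grid and rescales each one so that its integrated energy matches a prescribed W_B(0). The grid limits, the Gaussian center and width, and the Lorentzian knee are illustrative values, since the text does not specify them explicitly.

```python
import numpy as np

def normalize(Bk2, k, W_B0):
    """Rescale |B_k|^2 so that the integrated energy over k matches W_B(0)."""
    return Bk2 * W_B0 / np.trapz(Bk2, k)

# Wavenumber grid in units of Omega_p/v_A (range chosen as an example).
k = np.linspace(1e-3, 8.0, 512)
W_B0 = 0.1

uniform    = normalize(np.ones_like(k), k, W_B0)
gaussian   = normalize(np.exp(-(k - 0.5)**2 / 0.1**2), k, W_B0)              # center/width assumed
lorentzian = normalize(1.0 / (1.0 + (k / 0.5)**2)**(5.0 / 6.0), k, W_B0)     # k^(-5/3) tail
```

Each array can then be fed to the RK4 time integration of Eqs. (1)-(4) as the initial |B_k(0)|².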
In [43] it was shown that the dynamical interaction between protons and electrons plays a counter-balancing role, which prevents the progression of protons toward the marginal firehose states. Similarly, Yoon [21] showed that collisional effects may drive the plasma from stable conditions towards the instability thresholds. Here, however, we show that this can happen due to a sufficient level of magnetic fluctuations. Although not shown here, the quasilinear evolution in the cases of a Gaussian or Lorentzian power spectrum is similar. They all excite some level of proton heating in the initial stage of the simulations, and then relax slowly towards the marginal stability state, with properties similar to the ones shown in Fig. 2. In the case of a Lorentzian or power-law wave spectrum, the proton heating depends on the slope of the spectrum, which is explored in the next section. NUMERICAL RESULTS. THE EFFECT OF A TURBULENT SPECTRUM WITH A SPECTRAL BREAK In the previous section we considered different spectra for illustrative purposes, to show how a different initial magnetic field background spectrum can produce different results in the evolution of the macroscopic parameters of the plasma. A noise level of fluctuations can lead to the propagation of Alfvén waves, and instabilities are regulated in a quasilinear fashion towards marginal stability, normally far from thermodynamic equilibrium (or thermal isotropy with T_⊥ = T_∥). However, solar wind observations show that the plasma is mostly in a state below the instability thresholds, far from the isotropic state [4,29], with a non-negligible level of magnetic fluctuations [6,41], and that the magnetic field has a spectral break around the ion inertial length [15,16]. The inertial range, v_A k/Ω_p < 1, typically shows a Kolmogorov-like spectrum B_k² ∝ k^(−5/3). For ion or sub-ion scales (in the kinetic range) the turbulent spectrum steepens to k^(−α), with α ≥ 2.0, arguably due to the characteristics of the dispersion relation of Alfvén or other waves in that range. Here we compute the quasilinear relaxation considering a solar wind-like spectrum including a spectral break at the ion inertial range scale, with different slopes α in the kinetic range. The total magnetic energy is W_B(0) = ∫ dk |B_k(0)|²/B_0², with the integral calculated over the range 10⁻³ < v_A|k|/Ω_p < 8. We tested broader ranges in wavenumber and do not observe noticeable differences in any of the simulation runs. In what follows, the initial anisotropy and total magnetic energy are chosen as T_⊥(0)/T_∥(0) = 2 and W_B(0) = 0.1 for all simulation runs. For an initially low β_∥(0) = 0.005, Fig. 4 (left column) shows that the proton distribution is cooled in the parallel direction with respect to the background magnetic field, as β_∥ decreases in time. Similarly, protons are heated in the transverse direction for all tested values of α. It is interesting to note that the magnetic energy decreases by just 1% from its initial value, while causing a monotonic growth in the temperature anisotropy from T_⊥/T_∥ = 2 to 4 for a low value of α = 5/3 ≈ 1.7. For higher values of α, this parallel cooling and transverse heating is less efficient. This can be explained because a magnetic field with steeper spectral slopes for v_A k/Ω_p > 1 does not contain enough energy to be transferred to the particles. For β_∥(0) = 0.05, we see in Fig. 4 (middle column) that the parallel cooling and transverse heating still occur.
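A minimal sketch of such a solar wind-like initial condition is given below: a k^(−5/3) inertial range matched continuously to a k^(−α) kinetic range at the break, normalized so that the integral over the stated wavenumber range equals W_B(0) = 0.1. Placing the break exactly at v_A k/Ω_p = 1 and the particular α values chosen are illustrative assumptions.

```python
import numpy as np

def broken_power_law(k, alpha, k_break=1.0, W_B0=0.1):
    """Solar wind-like |B_k(0)|^2: k^(-5/3) below the break, k^(-alpha) above.

    The two branches are matched continuously at k_break (taken here at the
    proton inertial length, v_A k / Omega_p = 1) and the spectrum is rescaled
    so that its integral over the grid equals W_B0.
    """
    Bk2 = np.where(k <= k_break,
                   k**(-5.0 / 3.0),
                   k_break**(alpha - 5.0 / 3.0) * k**(-alpha))
    return Bk2 * W_B0 / np.trapz(Bk2, k)

k = np.linspace(1e-3, 8.0, 1024)
spectra = {alpha: broken_power_law(k, alpha) for alpha in (5.0 / 3.0, 2.0, 2.7, 3.5)}
```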
It is worth noticing that the magnetic energy actually decreases more than in cases with the same value of α and lower β_∥(0) = 0.005, but the parallel cooling and transverse heating is less efficient. Comparing with Fig. 3, we see that both cases, β_∥ = 0.005 and β_∥ = 0.05 with T_⊥/T_∥ = 2, correspond to linearly stable plasmas. However, β_∥ = 0.05 is closer to the instability thresholds, meaning that the quasilinear evolution in this case takes a shorter time to reach a stationary state near the stability margins. For high β_∥ = 0.5, Fig. 4 (right column), the plasma is initially unstable. However, we observe that the magnetic energy is reduced in the first stages of the simulation and then grows at later times, with different growth rates depending on the value of α. Since β_∥ increases in time while the temperature anisotropy is reduced, protons are effectively heated in the parallel direction and cooled in the transverse direction with respect to B_0. However, the temperature evolution seems to be independent of α, at least in the cases we tested for high β_∥(0). The time evolution of the wave spectrum for the three initial conditions β_∥(0) = 0.005, 0.05 and 0.5 is shown in separate panels (top, middle and bottom, respectively). As time goes on, the wave spectrum decreases at high values of k in all cases. For β_∥(0) = 0.005, the spectral break is unmodified at all times of the simulation run, but the spectrum steepens for v_A k/Ω_p > 1.2. For β_∥(0) = 0.05, the wave spectrum becomes smooth around v_A k/Ω_p = 1. In these two cases, the transfer of energy from the waves to the protons results in a monotonic drop in magnetic energy, as discussed in Fig. 4. For β_∥(0) = 0.5, in the first stages of the simulation the electromagnetic wave loses energy at high values of k, as in the previous cases. However, since the wave is unstable for this value of β_∥, the wave energy starts to increase after some time. This results in a bump in the spectral wave energy around the wavenumbers of the instability, 0.3 < v_A k/Ω_p ≲ 0.7. As shown in Fig. 4, the temperature anisotropy decreases and β_∥ increases. Comparing with Fig. 1, this implies that the range of unstable wavenumbers shifts towards smaller values, which in turn means that the bump in the spectral wave energy also shifts to smaller values of k. At the same time, previously unstable modes with high k become damped, so the wave transfers energy to protons for values of v_A k/Ω_p > 0.7, resulting in a steep spectrum near the initial spectral break. At later times, since the rate at which the wave is damped is negligible compared to its growth, this results in a net growth of magnetic energy, as discussed in Fig. 4. CONCLUSIONS Starting from the Vlasov-Maxwell system of equations we have developed a quasilinear approach, in which it is possible to write differential equations that describe the nonlinear wave-particle interaction between Alfvén-cyclotron waves and protons in plasmas. In this context we compare the quasilinear evolution of the plasma due to a background magnetic field exhibiting different spectrum shapes: a uniform flat spectrum, a Gaussian, a Lorentzian, and a turbulent power-law spectrum including a spectral break at the proton inertial range. A striking feature of all these spectrum shapes is that the proton temperature anisotropy can grow in time if a sufficient level of wave intensity is provided. This happens even if protons are initially in an equilibrium state, where the proton velocity distribution is nearly isotropic and no kinetic instabilities are excited.
In all cases we observe perpendicular heating and parallel cooling of protons with respect to the background magnetic field. This effect is more efficient as the intensity level of the wave spectrum is higher. For a uniform magnetic spectrum of total energy W_B/B_0² = 0.1, the temperature anisotropy can rapidly grow from T_⊥/T_∥ = 2 up to high values, T_⊥/T_∥ = 14, in a small time frame. Even though the plasma was considered initially in a stable state, this results in a fast decline of the parallel beta from β_∥ = 0.005 towards β_∥ = 0.001, and the subsequent excitation of proton-cyclotron instabilities. Afterwards, the anisotropy decreases and β_∥ grows towards the instability thresholds, where the temperatures stop evolving. A similar behavior is observed for a Lorentzian spectrum with a power-law tail k^(−5/3). On the other hand, the quasilinear evolution due to a Gaussian spectrum also results in a growing anisotropy starting from stable conditions. However, this growth is smooth over time and never exceeds the anisotropy values of the instability thresholds. As the magnetic spectrum in space plasmas is not uniform, Gaussian, nor Lorentzian, we have also tested a Kolmogorov-like magnetic spectrum k^(−5/3) including a spectral break at v_A k/Ω_p = 1, such that the kinetic range has a turbulent spectrum k^(−α) with α > 5/3. Starting from T_⊥/T_∥ = 2 and a total magnetic energy W_B/B_0² = 0.1, the quasilinear equations show that the perpendicular heating and parallel cooling occur only for low values of the initial β_∥(0). This heating/cooling is more efficient for a spectrum with slope near α = 5/3, although the loss in the total magnetic energy is almost negligible for all α values and low beta. This happens because the magnetic field transfers energy at high values of k, a wavenumber range in which the magnetic intensity is too low, so that changes to the total magnetic energy are not appreciable. For β_∥ = 0.5 and T_⊥/T_∥ = 2, the plasma is initially unstable. Independently of the value of α, protons that are initially unstable undergo parallel heating while the temperature anisotropy is reduced. In this case, the magnetic field transfers energy at high values of k, but it also absorbs energy in the range of wavenumbers where the ion-cyclotron instability develops. This results in the steepening of the wave spectrum at v_A k/Ω_p > 1 and an effective growth of magnetic energy. This in turn results in a reduction of the proton-cyclotron instability, which can be observed as a reduction of the temperature anisotropy and a heating of protons in the parallel direction with respect to the background magnetic field. In summary, we have shown that the presence of a sufficient level of turbulent magnetic spectrum can drive an initially stable proton plasma towards higher values of the temperature anisotropy. Depending on the spectrum shape, the anisotropy can grow smoothly towards the instability thresholds, or reach a large temperature anisotropy value, far above the kinetic instability threshold. Thus, due to the presence of a turbulent magnetic field spectrum, an initially stable plasma can evolve in such a way that instabilities can be triggered during the quasilinear dynamics. These results suggest a possible mechanism to explain why the solar wind plasma can be observed near the instability thresholds or far from thermodynamic equilibrium.
2020-11-23T02:00:54.822Z
2020-11-07T00:00:00.000
{ "year": 2021, "sha1": "7b2ddeadfdf8faeef4cd3e5d7446f113c3cefcd5", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fphy.2021.624748/pdf", "oa_status": "GOLD", "pdf_src": "Arxiv", "pdf_hash": "ebd0b0d6719c4d4085f7966aa2d37eb124fbc573", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
121708427
pes2o/s2orc
v3-fos-license
Transient Time in Unidirectional Synchronisation We consider here the behaviour of a dynamical system consisting of unidirectionally coupled Duffing oscillators. Under certain conditions the subsystems may become synchronised, corresponding to a stable invariant subset of the full-dimensional phase space. The distribution of the transient time that trajectories take to converge onto the synchronised state is investigated via numerical simulations. Some initial conditions undergo a very long transient motion, distinct from the drive, before synchronisation occurs. The dependence of this time on the governing system parameters and on the initial conditions of the driving system is discussed. Introduction When considering a system of nonlinear equations, the dynamics may show convergence towards a stable invariant subset if the Lyapunov spectrum transverse to that subset is negative [Pecora & Carroll 1990, Heagy et al. 1994b]. The spatial evolution and transient phenomena before a trajectory may settle onto the subset are largely unknown, especially since complex phenomena such as intermittency [Heagy et al. 1994a, Ott & Sommerer 1994] and riddled basins [Alexander et al. 1992, Sommerer & Ott 1993] may occur. In this paper we report some detailed simulations which focus on the convergence of the dynamics onto the invariant subset, looking at possible relationships between exponents of scaling laws and the relative change of parameters and/or initial conditions. To illustrate some observed characteristics we investigate a system consisting of two separate Duffing oscillators with identical parameters which are coupled through a unidirectional linear relationship. Our interest is focused on the behaviour of the system when it is in a chaotic state with values of the coupling parameter which lead to synchronisation of the two oscillators and, in particular, on the time of transient decay onto the synchronised state. We shall discuss a simple problem: if we have a system with a stable synchronised state, how long does a trajectory take to converge onto it? Different initial conditions may converge at different rates, thus it may be important to assess not only the rate of convergence but also any spatial variation. It is also important to examine how these two aspects alter as we vary system parameters. A particular system but viewed at different parameters, or alternative systems, may also have different degrees of stability for the invariant subset, influencing the convergence of the trajectory towards the synchronised state. Two different notions of stability have been recently pointed out for such invariant subsets, called states of asymptotic stability and monotonic stability [Kapitaniak & Thylwe 1996]. Asymptotic stability is a condition achieved when the chaotic attractor in the invariant subset is Lyapunov stable and its basin of attraction (for the transverse stability) contains a neighbourhood of the attractor, while in terms of the transverse Lyapunov spectrum, asymptotic stability is a state for which the supremum of all transverse exponents, computed for all different ergodic probability measures supported in the attractor, is negative [Ashwin et al. 1996]. On the other hand, monotonic stability is given in terms of the scaling of the distance (not necessarily the Euclidean one) between the trajectories of the subsystems, i.e. the stability of the chaotic attractor in the invariant subset is monotonic if the distance goes to zero, decreasing monotonically in time [Kapitaniak & Thylwe 1996].
Our study will discuss whether there is a particular distribution of the transient time to synchronisation when the degree of stability varies (i.e. as we change a coupling parameter, the maximum transverse Lyapunov exponent may become smaller in magnitude, weakening the stability) and, for fixed parameters, in the space of initial conditions of the driving system. In [Gupte & Amritkar 1993] the authors stated that "The length of the transient after which the system settles down onto the desired orbit depends on the value of the largest SLE (Sub-Lyapunov Exponent, sic.) of the response system", pointing out that, as in the case of much simpler attracting solutions, the speed of convergence towards a stable invariant subset depends on the contraction rate in the direction transverse to the subset. Coupled Duffing Equations A single driven Duffing oscillator is a second-order non-autonomous differential equation that describes the motion of a unit-mass particle in a double-well potential, subject to viscous damping and forced by a cosinusoidal term [Thompson & Stewart 1986]. For the purposes of this paper, the parameters are fixed at δ = 0.1, ω = 0.3, and A = 3.0, producing a cross-well chaotic response, which is the only stable attractor. Two identical oscillators are then coupled together, with a unidirectional coupling type, to yield:

$$\ddot{x} + \delta\dot{x} - x + x^{3} = A\cos(\omega t)$$
$$\ddot{y} + \delta\dot{y} - y + y^{3} + K(y - x) = A\cos(\omega t) \qquad (2)$$

The Eqs. (2) represent an extended system in which, in addition to the forcing term A cos(ωt), the x system can be thought of as a driving for the slave or response y system through the coupling term, where K is a real non-negative parameter. This type of coupling produces an extended 5-dimensional system for which the invariant set x(t) = y(t) exists, i.e. a synchronised motion. To illustrate the synchronisation property (for the moment regardless of the time to convergence) we consider the parameters for both systems to be identical, as previously defined. Thus, without loss of generality, if we select initial conditions for the driving system (x) to be (x₀, ẋ₀) = (1.1, 0.5), the variable x will undergo a chaotic motion. The initial conditions for the slave system (y) are set at (y₀, ẏ₀) = (1.0, 0.9), and using a usual Runge-Kutta numerical scheme the full system (2) may be integrated. The Euclidean distance d = √((x − y)² + (ẋ − ẏ)²) between the two trajectories is monitored for various choices of the coupling parameter K, as shown in Figure 1. In this plot the system is set to run for approximately 70 cycles of the periodic forcing to allow for the decay of transients. Over the next 10 cycles an average of the mean value between successive maxima and minima of the function d(t) is evaluated. For K = 0 the two systems are independent and they show an average distance d ≈ 1.5 (approximately half of the size of the attractor); increasing the value of the coupling parameter, we see from the average distance that the transition to the synchronised state occurs at K_thr ≈ 2.0 for this numerical experiment, whereafter the two subsystems display the same output. For values of K greater than K_thr the synchronised state is stable. This scenario is specifically for the initial conditions given, but different initial conditions would qualitatively produce the same response. Defining a precise threshold value (i.e. for all initial conditions) of the coupling parameter for the convergence of the dynamics onto an invariant subspace involves two intrinsic problems.
1. For low values of K the invariant set is strongly unstable and for high values it is strongly stable, while for values of the coupling constant close to the threshold, weak stability/instability causes the basin of attraction for the convergence onto the subset to have a very complicated structure, sometimes riddled, even in regions very close to the invariant set. So distinct pairs of initial conditions (x₀, ẋ₀), (y₀, ẏ₀) may lead to slightly different values of K_thr. 2. Simulations for locating the synchronisation threshold, or for computing characteristic exponents, are produced by running the integration over a finite time, supposed to be large enough to avoid problems of transient time. The length of the transient time may depend on the coupling parameter and on the initial conditions, and as yet no distribution is known for the latter. Returning to the question of the time to synchronisation, in relation to the second problem above, we plot, in Figure 2, two typical curves representing the full evolution of d(t) (in linear-log scale) for two values of K above the synchronisation threshold, now without ignoring the first 70 cycles. The time, here expressed in cycles of the periodic forcing (2π/ω), has been left running until d(t) = 0 exactly, up to the precision of the computer. The two curves are computed for identical initial conditions (given in the caption) but K = 2.7 and K = 2.05, so that the invariant subset has two different degrees of stability, and we note that the observed decay is qualitatively different. The evolution of d(t) when K = 2.7 can be notionally split into three different parts. The first part of the evolution (τ_o), lasting until approximately 50 cycles of the periodic forcing, shows no appreciable change in the order of magnitude of the distance measure d(t) (about unity); in fact it is related to the chaotic wandering that the response system performs about the attractor of the driving system, so that an approximately horizontal line can be seen. After this, in the time interval τ_d, the trajectory starts to decay towards the synchronised state, and in logarithmic scale its decay is almost linear, demonstrating an exponential dependence of the form d(t) ∝ exp(λt), with a contraction rate λ < 0. After approximately 80 cycles d(t) reaches the level of round-off error, where a pseudo-random oscillatory phenomenon takes place for values of d(t) around 10⁻¹⁵. We refer to the first, non-decaying part of the transient, τ_o, as the orbiting transient, and to the second part, τ_d, as the decaying transient. When K = 2.05 the degree of stability is weaker than in the previous case (this statement can be quantified by the largest transverse Lyapunov exponent), and this condition is reflected in the decay of d(t), for which a division of the trajectory into qualitatively different decaying parts is no longer possible. We consider the system in the first case (K = 2.7) to be in a state of monotonic stability, while in the second case (K = 2.05) the system shows asymptotic stability. The condition of monotonic stability, as reported in [Kapitaniak & Thylwe 1996], is that d(t) must be a monotonically decreasing function of time; a very strict condition that we are actually taking, in a more liberal sense, to include the orbiting transient and the overall oscillatory motion even during the decaying part of the transient.
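A minimal numerical sketch of the construction just described is given below: it integrates the unidirectionally coupled pair of Eqs. (2) with scipy, records the Euclidean distance d(t) between drive and response, and estimates the slope of the decaying transient from a linear fit of log d(t). The damping symbol, the integration tolerances and the fitting thresholds are illustrative choices rather than values taken from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

DELTA, OMEGA, AMP = 0.1, 0.3, 3.0      # damping, forcing frequency, forcing amplitude

def coupled_duffing(t, s, K):
    """Right-hand side of the unidirectionally coupled Duffing pair, Eqs. (2)."""
    x, xd, y, yd = s
    f = AMP * np.cos(OMEGA * t)
    return [xd, -DELTA * xd + x - x**3 + f,
            yd, -DELTA * yd + y - y**3 - K * (y - x) + f]

def distance_history(K, x0=(1.1, 0.5), y0=(1.0, 0.9), n_cycles=120):
    """Integrate the coupled system; return time (in forcing cycles) and d(t)."""
    T = 2.0 * np.pi / OMEGA
    t_eval = np.linspace(0.0, n_cycles * T, n_cycles * 200)
    sol = solve_ivp(coupled_duffing, (0.0, n_cycles * T), [*x0, *y0],
                    args=(K,), t_eval=t_eval, rtol=1e-10, atol=1e-12)
    x, xd, y, yd = sol.y
    d = np.sqrt((x - y)**2 + (xd - yd)**2)
    return sol.t / T, d

def decay_slope(t_cycles, d, upper=1e-2, lower=1e-12):
    """Linear fit of log d(t) over the decaying transient.

    The fitting window excludes the orbiting transient (d above `upper`) and
    the round-off floor (d below `lower`); the slope approximates the
    transverse contraction rate per forcing cycle.
    """
    mask = (d < upper) & (d > lower)
    slope, _ = np.polyfit(t_cycles[mask], np.log(d[mask]), 1)
    return slope

t, d = distance_history(K=2.7)
print(decay_slope(t, d))   # negative when the synchronised state is stable
```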
It seems reasonable that for monotonic stability the decaying part of the transient is an exponential function of time, with the maximum transverse Lyapunov exponent as the coefficient of the exponential. We show in the following that a linear fit of the decaying transient in logarithmic scale is very close to the maximum transverse exponent. What is actually less clear is whether there is any distribution for the orbiting transient. Another intuitive comment is that when the motion in the invariant subset possesses a weaker stability, transients will produce longer relaxation times, so we might expect longer orbiting transients on average, a condition that is already fulfilled by the decaying part of the transient. The orbiting transient is dependent on the initial conditions chosen. In Figure 3 we show curves representing the evolution of d(t), again on linear-log scale, with K = 2.7 fixed, for three different initial conditions. More precisely, (y₀, ẏ₀) = (1.1, 0.5) and ẋ₀ = 1.1 are kept fixed, and we use three different starting points x₀, as given in the figure below each end point of the three curves. The slope of the linearly decaying part, or decaying transient, of each of the three curves is almost the same, corresponding to the intuitive conjecture that the convergence is governed by the strength of dissipation transverse to the attractor. However, the three trajectories shown converge in very different times; almost a delay of approximately 50 cycles of the forcing for x₀ = 3.1 when compared to the trajectory starting at x₀ = 1.1. If we look closer at the trajectory of the response system in the phase space, we see that it is not just simply orbiting about the attractor of the driving, but rather it is performing a different orbit, larger in size than the orbit of the driving system. A transient trajectory of the response system (y, ẏ) before synchronisation is achieved is shown in Figure 5. This type of trajectory (either steady state or transient) has not been seen in the single Duffing oscillator. We conjecture that this orbiting transient is following a path close to either an unstable orbit of the extended 5-dimensional system, or perhaps the path of a solution (stable or unstable) that no longer exists, but which can be found in some nearby region of parameter space. Transient Distribution We have seen in Figure 2 that not all transients behave the same but, for all cases in which we have been able to divide the trajectory into different parts (e.g. the curve for K = 2.7 in Figure 2, and all curves in Figure 3), we can linearly fit the decaying part (τ_d) in linear-log scale. The comparison between the slope of this fit and the maximum transverse Lyapunov exponent (denoted TLE) is given in Figure 4 for values of K between 2.3 and 3.0 in steps of 0.1. For K < 2.3 a trajectory may still undergo orbiting and decaying transient behaviour, as described in the previous section, but the evaluation of a suitable linear fit becomes much less accurate. Moreover, progressively lowering the coupling parameter, as we approach K_thr from above, we expect to find decaying trajectories which resemble the curve for K = 2.05 in Figure 2. The maximum transverse Lyapunov exponent λ⊥ has been computed as the rate of contraction of perturbations transverse to the synchronised state, i.e. as the Lyapunov exponent of the difference system (x − y, ẋ − ẏ) [Ashwin et al. 1996]. The values reported for the fit, and for the maximum transverse Lyapunov exponent λ⊥,
have been obtained as an average over 20 different initial conditions for each value of K. All the values obtained for the fit were very close to the average, with a standard deviation of the order of 10⁻⁴ for almost all values of K. The estimate for K = 2.3 showed a less precise fit, with standard deviation 5 × 10⁻⁴. Our aim is to determine whether, once all parameters are fixed, there is any particular spatial or time distribution for the transient time given a representative set of initial conditions in the phase space. Again using the fixed parameters given in Eqs. (2), and fixed K, we set the initial conditions for the response system to be (y₀, ẏ₀) = (1.1, 0.5) and we vary (x₀, ẋ₀), denoting x₀ = y₀ + ε₀ and ẋ₀ = ẏ₀ + η₀. We follow here the natural choice of varying the initial conditions of the driving system. Setting a grid of (ε₀, η₀) values, for each (ε₀, η₀) chosen we run time forward until the Euclidean distance d(t) reaches a cutoff value of 10⁻⁶. For fixed K the slope of the decaying transient, in linear-log scale, is almost the same for all initial conditions, so a fixed cutoff will give us a faithful representation of the orbiting transient times τ_o. The results of the computations carried out for the case K = 3.0 are given in Figure 5. In the upper left diagram we show a histogram describing how all initial conditions in the grid distribute themselves with respect to their orbiting time τ_o. This "density" plot shows a peak at about τ_o ≈ 8, and we can ideally divide the distribution into three parts: a distribution before the peak (these are the points whose synchronisation is the fastest), then the peak, which contains almost all the "mass" of the distribution, and finally the tail, formed by all points achieving synchronisation in the longest time possible (at least within the grid we set). For K = 3.0, initial conditions which achieve synchronisation slowest take about 40 cycles of the periodic forcing. The picture in the top right of Figure 5 shows the spatial organization of the initial conditions whose synchronisation is in the pre-peak region, arbitrarily taken at τ_o < 6 cycles of the forcing. The initial conditions are taken in the range (ε₀, η₀) ∈ [−6, 6] × [−8, 8]. The first thing we can notice is that the distribution of points shows some organized shape, indicating the existence of privileged zones of the phase space in which the convergence towards the invariant subset is the fastest, also taking into account the arbitrariness of the section cut. To check the robustness of the distribution in the presence of noise, we have repeated the same procedure as above but introducing noise in both the drive and response systems. The amplitude of the noise has been increased up to 10⁻⁵, without any considerable change in the shape of the pre-peak section or in some enlargements of it. The two lower pictures represent the dynamics of the drive (left) and response (right) systems in the phase space for a trajectory which displays very delayed synchronisation. The two pictures are shown to the same scale, to illustrate the difference of the pre-synchronised transient. The left picture shows a trajectory of the cross-well motion typical of the Duffing oscillator, for initial conditions of the driving (x₀, ẋ₀) = (1.6, 1.1). The first 50 cycles of the response system are shown in the lower right part of Figure 5, for initial conditions (y₀, ẏ₀) = (1.1, 0.5).
This picture is produced using the same initial points as the third time history in Figure 3, where a τ_o of approximately 80 cycles of the forcing is displayed. In this case the value of K is different, but the numerical integration still produced a time history with a very long orbiting transient. For t < τ_o the orbit displays the spatial evolution of the orbiting transient before convergence to the synchronised state, after which the trajectory rapidly settles onto the attractor displayed in the lower left picture. The orbit of the response is not a simple chaotic wandering about the attractor of the driving; instead it performs a different orbit, before suddenly converging onto the driving. As far as we can deduce, this orbiting transient is not following a stable orbit of the single Duffing equation. The fact that noise does not strongly influence the overall behaviour seems to indicate that it is not following an unstable orbit of the full 5-dimensional system either. This leaves the possibility that an orbit exists in nearby parameter space onto which the transient orbit becomes trapped for a length of time before synchronisation occurs. Conclusions In this paper we have considered the time to synchronisation of two unidirectionally coupled chaotic systems, using the driven Duffing oscillator as a demonstrative tool. Specifically, the effect of changes in the coupling parameter and initial conditions has been investigated. For this particular system, for values of the coupling parameter greater than the threshold for stability of the invariant set corresponding to synchronised motion, trajectories undergo a transient motion which can be split into two parts. Once close to the invariant manifold, trajectories decay at a rate governed by the TLE. However, prior to this decay, a transient may "orbit" the invariant manifold for an almost indeterminate time. It is conjectured that an orbit may exist in the nearby parameter space of the full 5D system which is not a solution of the 3D driving system, giving rise to transients which follow this orbit before decaying onto the steady-state attractor, so the convergence onto the synchronised state would not only be governed by the dynamics within the invariant subset, but would also depend upon the global dynamics of the full phase space. Here we have investigated the spatial distribution in the initial-condition space of trajectories which converge to the synchronised state in specific times. Further work needs to be carried out to identify the full underlying dynamics which determine the global behaviour.
(Figure caption fragment) ... (x(t), ẋ(t)) and (y(t), ẏ(t)) for different values of K. The transition to a stable synchronised state is located approximately at K_thr = 2.0.
(Figure caption fragment) ... plane. Lower panel: attractor of the drive system (left) and a phase-plane representation of the trajectory of the response system (right) in its state of orbiting transient before synchronisation takes place.
2019-04-19T13:10:47.911Z
1999-12-01T00:00:00.000
{ "year": 1999, "sha1": "862db8366e3b5ecd63c6dd59944c23931eb66cc3", "oa_license": "CCBYNC", "oa_url": "http://www.p-arch.it/bitstream/11050/1041/1/4203.pdf", "oa_status": "GREEN", "pdf_src": "Anansi", "pdf_hash": "f8e2736b01e9294741eb4d652cdbb9e6fd33e24e", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Physics" ] }
12032871
pes2o/s2orc
v3-fos-license
Gene-based Genomewide Association Analysis: a Comparison Study The study of gene-based genetic associations has gained conceptual popularity recently. Biologic insight into the etiology of a complex disease can be gained by focusing on genes as testing units. Several gene-based methods (e.g., the minimum p-value (or maximum test statistic) method or the entropy-based method) have been developed and have more power than a single nucleotide polymorphism (SNP)-based analysis. The objective of this study is to compare the performance of the entropy-based method with the minimum p-value and single SNP-based analysis and to explore their strengths and weaknesses. Simulation studies show that: 1) all three methods can reasonably control the false-positive rate; 2) the minimum p-value method outperforms the entropy-based and the single SNP-based method when only one disease-related SNP occurs within the gene; 3) the entropy-based method outperforms the other methods when there are more than two disease-related SNPs in the gene; and 4) the entropy-based method is computationally more efficient than the minimum p-value method. Application to a real data set shows that more significant genes were identified by the entropy-based method than by the other two methods. INTRODUCTION Single nucleotide polymorphism (SNP)-based genomewide association studies (GWAS) have been a popular and successful method to identify disease-related SNPs. However, this approach has much lower power when the number of SNPs increases and SNPs are correlated, especially when their effect sizes are small and only their cumulative effect is associated with a disease. Gene- or region-based analysis may have higher power to identify the causal variants that affect the complex disease, because it takes into consideration the correlations among SNPs within a single gene. The simplest method for gene-based analysis is the SNP-based method, in which each genotyped SNP is tested for association, and multiple testing corrections based on the Bonferroni procedure are applied to control the type-I error rate. The most widely used single SNP-based association test is the Cochran-Armitage trend test (CATT), which has high power under additive and multiplicative disease models but much lower power under a recessive disease model [1][2][3][4]. The genotypic test based on a 2 × 3 contingency table is robust to different disease models [5]. Other innovative methods include an entropy-based method, which is generally as good as or even more powerful than the genotypic test [5,6]. The SNP-based method for gene-based analysis has low power when the causal SNPs are not genotyped and are not highly correlated with one or more genotyped SNPs. The power of the SNP-based method can be improved by combining the information from neighboring SNPs within a single gene. Several methods have been developed to analyze multiple SNPs within the same gene simultaneously. These methods include Fisher's method for combining p-values through a logarithm function of the p-values, and the minP (minimum p-value) or maxT (maximum test statistic) method, in which significance is assessed through the smallest observed p-value (or largest test statistic) within the gene. However, the empirical p-value must be calculated by using permutation, because the limiting distributions of Fisher's statistic and the minP (maxT) statistic are unknown under the null hypothesis that the gene is not associated with the disease.
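To make the preceding description concrete, the sketch below computes per-SNP genotypic chi-square statistics from 2 × 3 case-control-by-genotype tables, applies a Bonferroni threshold for the single SNP-based analysis, and obtains a gene-level empirical p-value for the maxT statistic by permuting case/control labels. The choice of the genotypic chi-square as the per-SNP statistic, the 0/1/2 genotype encoding, and the permutation scheme are illustrative assumptions; the paper's maxT implementation instead relies on Lin's Monte Carlo approximation described below.

```python
import numpy as np
from scipy.stats import chi2_contingency

def genotypic_chi2(geno_j, status):
    """Chi-square statistic and p-value from the 2 x 3 genotype table of one SNP."""
    table = np.array([[np.sum((status == cc) & (geno_j == g)) for g in (0, 1, 2)]
                      for cc in (0, 1)])
    table = table[:, table.sum(axis=0) > 0]        # drop unobserved genotype columns
    chi2, p, _, _ = chi2_contingency(table)
    return chi2, p

def snp_based_test(genotypes, status, alpha=0.05):
    """Single SNP-based analysis: Bonferroni-corrected per-SNP genotypic tests."""
    m = genotypes.shape[1]
    pvals = np.array([genotypic_chi2(genotypes[:, j], status)[1] for j in range(m)])
    return pvals, bool(np.any(pvals < alpha / m))

def maxT_permutation_pvalue(genotypes, status, n_perm=1000, seed=0):
    """Gene-level empirical p-value for the maxT statistic via label permutation."""
    rng = np.random.default_rng(seed)
    m = genotypes.shape[1]
    max_obs = max(genotypic_chi2(genotypes[:, j], status)[0] for j in range(m))
    null = np.array([
        max(genotypic_chi2(genotypes[:, j], rng.permutation(status))[0] for j in range(m))
        for _ in range(n_perm)])
    return (1 + np.sum(null >= max_obs)) / (1 + n_perm)
```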
Another alternative method to combine multiple SNPs is to perform multivariate tests. Chapman and Whittaker proposed a multivariate score test statistic that is equivalent to the score test for the logistic regression model [7]. Another test statistic, based on an empirical Bayesian model for the parameters, is similar to the above multivariate score test statistic [8]. Wang and Elston proposed a test statistic using a weighted Fourier transform of the genotypes to reduce the test degrees of freedom [9]. Chapman and Whittaker compared the above five methods by simulation studies, and they found that the minP (maxT) and Goeman's method perform well over a range of scenarios [7]. For the minP (maxT) method, a Monte Carlo (MC) method can be used to evaluate the empirical p-values based on approximating the joint distribution of the test statistics by an MC-sampling approach; this is computationally feasible compared with a permutation method [10]. An entropy-based test statistic was recently proposed to test gene-disease association based on the joint genotypes of multiple SNPs within a gene, and a cluster-based analysis method was used to reduce the degrees of freedom of the test statistic [11]. In this study, we compare three methods, namely the single SNP-based method, the maxT method with MC sampling to estimate the empirical p-value, and an entropy-based method, by simulation studies and real data analysis. We start with a detailed description of each method, followed by simulations and real data analysis. MaxT (or minP) Method with Monte Carlo Sampling Much of what follows in the section below is adapted from Lin [10]. Consider one gene with m genic SNPs, each with two alleles, and let n be the sample size. The test statistic T_j for the j-th locus within this gene follows a χ² distribution with r_j degrees of freedom. The test statistics (T₁, T₂, …, T_m) may be correlated due to linkage disequilibrium among SNPs within one gene. Evaluating the p-values using the actual joint distribution of (T₁, T₂, …, T_m) can be computationally intensive. Lin [10] proposed an MC method to approximate the actual joint distribution and to evaluate the empirical p-values by MC sampling. The MC method perturbs the individual score contributions with G₁, G₂, …, G_n, independent standard normal random variables that are independent of the data, to generate realizations of the test statistics from their null joint distribution. If the resulting empirical p-value is smaller than the preset significance level, then the null hypothesis that this gene is not associated with the disease is rejected. Entropy-based Test Statistic and Genotype Grouping via Penalized Entropy For one gene with m genic SNPs, there is a total of 3^m joint genotypes. However, the number of joint genotypes actually observed is much smaller. Denote the number of observed joint genotypes for one gene by s (s < 3^m), and consider the frequencies of the i-th joint genotype in cases and controls, respectively. The entropy-based test statistic T_gene for testing the association between this gene and a disease is defined in terms of the logarithms of these joint-genotype frequencies, weighted by the numbers of cases and controls [11]. Under the null hypothesis that there is no association between this gene and a disease, T_gene follows a central χ² distribution with m − 1 degrees of freedom. When the number of genic SNPs is high, the degrees of freedom increase, so that the power will decrease. To increase the power, the rare joint genotypes can be grouped into common ones by using the penalized entropy measure (PEM) [11], where m_k is the number of k-th joint genotypes.
The joint genotype set with maximum value of I will be the corresponding common joint genotype. To do so, we first sort all joint genotypes in descending order, according to their frequencies. Then we calculate the PEM by adding one joint genotype to the present joint genotype set. If the PEM begins to decrease when the k-th joint genotype is added to the current set, the common joint genotype set will include the former k-1 joint genotypes. Once the grouping threshold is determined, we can proceed to calculate the similarities between one rare-joint genotype with frequency less than the threshold and all common genotypes and then group it with the common one that is the most similar. SIMULATION STUDIES We evaluated the performance of the three methods described above by using simulation studies. We simulated case-control samples in two methods: one using a linkagedisequilibrium (LD)-based method similar to methods in [10,11], and the other using an MS program developed by Hudson [12] that is similar to programs developed by Tzeng [13]. Although we will not discuss the LD-based simulation method here (see [11]), we describe below the detailed process to generate samples by the MS program. MS Program We used the MS program developed by Hudson [12] 10 4 , where the parameter g is the probability of crossover per generation between the ends of the haplotype locus being simulated; the scaled mutation rate for the simulated haplotype region, bp n e / 4 μ , is set to be 4 10 6 . 5 for the region of simulated haplotypes; and the length of sequence within the region of simulated haplotypes, n sites, is 10 kb. Similar parameter settings can be found in other studies [10,12,13]. We set the number of SNP sequences in the simulated sample to 100 for each gene and run the MS program to generate the haplotype sample on the basis of these parameter settings. Then we randomly selected a segment of 10 adjacent SNPs as a haplotype. The two haplotypes are randomly drawn from the simulated sample containing 100 10-SNP haplotypes and are paired to form an individual genotype. Phenotype Simulation In reality, we do not know the true functional mechanism for a given gene, so it is difficult to simulate the true functional variants and the true functional mechanism within a gene [13]. Here, we considered three scenarios to mimic the situation of a complex disease in which there is one, two, or three disease-related SNPs within a given gene. For cases with two or three disease-related SNPs, complex interactions occur among the SNPs. Here we briefly illustrate how the disease phenotypes are simulated. Scenario 1. Let f 0 , f 1 , f 2 be three penetrances of three genotypes. Denote 1 = f 1 /f 0 , 2 = f 2 /f 1 as the genotype-relative risks (GRRs). Let p be the disease allele frequency, and denote the disease prevalence as k. Then the three penetrances can be calculated for an additive, dominant, or recessive disease model ( Table 1). We omit a multiplicative model, because the results of that model are similar to those from the additive model. Once f is determined, the case/control status is simulated according to a Bernoulli distribution, with the probability of success f conditional on the observed genotype data. For a disease model with two or three interactions of disease-related SNPs within a single gene (Scenarios 2 and 3), we follow the cases given in [14]. Scenario 2. 
For the two-locus-interaction disease model, we denote the two-locus genotypes as (G A , G B ) (0, 1, 2) 2 , which represents the number of risk alleles at each diseaserelated SNP A and B. The two-locus-interaction disease model is as follows: where is the baseline effect, and is the genotypic effect. Scenario 3. For the three-locus-interaction disease model, we denote the three-locus genotypes as (G A , G B , G C ) (0, 1, 2) 3 , which represents the number of risk alleles at each disease-related SNP A, B, and C. The three-locus-interaction disease model is as follows: where and are the same as in Scenario 2. Once the disease-related SNPs are determined, the case-control status can then be simulated according to a multinomial distribution conditional on the observed genotype data. We simulated data sets with 400 cases and 400 controls or 800 cases and 800 controls. For the evaluation of type one error rate, we simulated data sets using both LD-based and MS methods but for power, we only used MS method because it can better mimic the biological data. For each data set, we applied the three methods described above. The type-I error rate was estimated based on 1000 replicates, and the power was estimated based on 100 replicates at a significance level of 0.05. For the maxT method, the empirical pvalue was obtained based on 10,000 normal samples. REAL DATA ANALYSIS To compare the three methods, we applied them to a large-scale, candidate-gene study. The data set contains 225 cases and 585 controls on 190 candidate genes in a genetic association study of preeclampsia [15]. We removed SNPs with minor allele frequencies less than 0.05 and focused on the remaining 819 SNPs. We also removed 27 genes carrying only one SNP. Similar to [11], we used a nominal level of 0.005 for the gene-based method and 0.005 dividing the number of SNPs within each gene for SNP-based method. ( Table 2) lists the p-values of significant genes and SNPs for the three methods. The genes and SNPs that showed significant effects are formatted in bold. The entropy-based method identified seven significant genes among the 190 genes evaluated. The single SNP-based method identified three significant genes, and the maxT method identified one significant gene. Thus, the gene-based entropy method identified the most number of significant genes. SIMULATION RESULTS ( Table 3) presents the empirical type-I error rates of the single-SNP, maxT, and entropy-based methods based on the MS program and LD-based method. From (Table 3), we see that the maxT and entropy-based methods control the type-I error rate quite well. The latter also controls as the sample size increases. However, the single-SNP method has a much lower type-I error rate, which means that this method may have lower power. We also simulated 10 SNPs with r 2 =0.9, 0.5, and 0 within one gene by using the LD-based method and found that all three methods control the type-I error rate well. ( Table 4) presents the estimated power of the SNP-based, maxT, and entropy-based methods for one disease-related SNP within a single gene. The maxT method appeared to be the most powerful among the three methods. The entropybased method had lower power than the maxT method, because when one disease-related SNP occurs within a gene, the cluster number in the entropy-based method will be large, so that the degree of freedom of the test statistic in equation (1) is high. This will affect the power of the entropy-based method. 
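For reference, a stripped-down entropy (likelihood-ratio) test on joint genotypes is sketched below. It is an illustration of the idea rather than the exact statistic of reference [11]: rare joint genotypes are simply pooled into a single "other" category instead of being grouped by the penalized entropy measure, and the degrees of freedom equal the number of retained joint-genotype classes minus one, which makes the dependence of power on the cluster number explicit:

import numpy as np
from collections import Counter
from scipy.stats import chi2

def entropy_gene_test(y, genotypes, min_count=5):
    """Entropy/likelihood-ratio test comparing joint-genotype distributions
    in cases vs. controls.  y: 0/1 labels; genotypes: (n, m) with entries 0/1/2.
    Joint genotypes observed fewer than `min_count` times are pooled."""
    y = np.asarray(y, int)
    joint = [tuple(row) for row in np.asarray(genotypes, int)]
    counts = Counter(joint)
    keys = [k if counts[k] >= min_count else "other" for k in joint]
    classes = sorted(set(keys), key=str)
    s = len(classes)                                  # number of retained classes
    table = np.zeros((2, s))                          # 2 x s table: controls/cases by class
    for cls, yi in zip(keys, y):
        table[yi, classes.index(cls)] += 1
    expected = table.sum(1, keepdims=True) * table.sum(0, keepdims=True) / table.sum()
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(table > 0, table * np.log(table / expected), 0.0)
    g_stat = 2.0 * terms.sum()                        # G statistic (entropy form)
    df = s - 1
    return g_stat, df, chi2.sf(g_stat, df)

# toy example: 400 cases, 400 controls, 4 SNPs
rng = np.random.default_rng(2)
y = np.r_[np.ones(400, int), np.zeros(400, int)]
G = rng.binomial(2, 0.3, size=(800, 4))
print(entropy_gene_test(y, G))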
(Tables 5 and 6) present the estimated power of the three methods for situations in which two or three diseaserelated SNPs occur within a single gene. The entropy-based method appeared to be the most powerful method, and the single SNP-based method was the least powerful. This makes sense because when there are two or three interacting-disease-related SNPs within one gene, the cluster number of the observed joint genotypes will be small. Thus, the degrees of freedom of the test statistic in equation (1) will be small, which will improve the power of the entropybased method. DISCUSSION We have compared three gene-based association approaches by conducting simulation studies and one real data set analysis. Simulation results show that 1) all three methods effectively control the type-I error rate; 2) the single SNP-based method is very conservative; 3) when there is one disease-related SNP within a gene, the maxT method is the most powerful; 4) when there are two or three diseaserelated SNPs within a gene, the entropy-based method is the most powerful. Real data analysis shows that the entropybased method identifies more significant genes than do the other two methods. In addition, we have compared the computing time used by the three methods and found that the entropy-based method is computationally more efficient than the maxT method. Given the unknown number of causal SNPs as well as the complex structure among/between causal and non-causal SNPs within the gene, and the complex underlying disease gene actions, the relative performance of different approaches for gene-based association tests strongly depends on different realistic scenarios. Considering genes as testing units, sometimes we have to move forward to pursue gene-based interactions to get better biological insights into the etiology of complex diseases [16]. As new approaches are increasingly developed, we believe that no single approach is universally superb to others [4]. We suggest that users explore as many different approaches as possible and choose the best one based on their biological experience. Rare variants may play an important role to explain the missing heritability of complex disease in post-GWAS research. The correlations between rare and common SNPs and among rare variants are generally weak [17], and the number of causal rare SNPs each with moderate or large effect sizes may be large [18]. The novel statistical or computational methodologies for analyzing rare variants focusing on genes are urgently needed with the availability of large scale exome or wholegenome sequencing data [19]. The relative performance of these approaches for gene-based association tests is worthy of further investigation. Data were obtained using the entropy-based method. d Data were obtained using the single-SNP-based method.
2018-05-08T18:25:39.774Z
0001-01-01T00:00:00.000
{ "year": 2013, "sha1": "8ea46ac399124539b0d69df90a60773728337676", "oa_license": "CCBY", "oa_url": "https://europepmc.org/articles/pmc3731815?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "8ea46ac399124539b0d69df90a60773728337676", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
118536938
pes2o/s2orc
v3-fos-license
Conformal N=0 d=4 Gauge Theories from AdS/CFT Superstring Duality? Non-supersymmetric d=4 gauge theories which arise from superstring duality on a manifold $AdS_5 \times S_5/Z_p$ are cataloged for a range $2 \leq p \leq 41$. A number have vanishing two-loop gauge \beta-function, a necessary but not sufficient condition to be a conformal field theory. The relationship of the Type IIB superstring to conformal gauge theory in d = 4 gives rise to an interesting class of gauge theories [1][2][3][4]. Choosing the simplest compactification [1] on AdS 5 × S 5 gives rise to an N = 4 SU(N) gauge theory which has been known for some time [5] to be conformal due to the extended global supersymmetry and non-renormalization theorems. All of the RGE β−functions for this N = 4 case are vanishing in perturbation theory. In the present note we systematically catalog the available N = 0 theories for Γ an abelian discrete group Γ = Z p . We also find the subset which has β (2) g = 0, a vanishing two-loop β−function for the gauge coupling, according to the criteria of [6]. In a future publication, we hope to find how many if any of the surviving theories satisfy β The ideas in Frampton [6] concerning the cosmological constant and model building beyond the standard model provide the motivation as follows. At a scale sufficiently above the weak scale the masses and VEVs of the standard model obviously become negligible. Consider now that the standard model is promoted by additional states to a conformal theory of the d = 4 N = 0 type which will be highly constrained or even unique, as well as scale invariant. Low energy masses and VEVs are introduced softly into this conformal theory such as to preserve the desirable properties of vanishing vacuum energy and hence vanishing cosmological constant. Since no supersymmetry breaking is needed and provided the introduction of scales is sufficiently mild it is expected that a zero cosmological constant can be retained in this approach. The embedding of Γ = Z p in the complex three-dimensional space C 3 can be conveniently specified by three integers a i = (a 1 , a 2 , a 3 ). The action of Z p on the three complex coordinates (X 1 , X 2 , X 3 ) is then: where α = exp(2πi/p) and the elements of Z p are α r (0 ≤ r ≤ (p − 1)). The general rule for breaking supersymmetries is that for Γ ⊂ SU (2) To ensure that Γ ⊂ SU(3) the requirement is that Each a i can, without loss of generality, be in the range 0 ≤ a i ≤ (p − 1). Further we may set a 1 ≤ a 2 ≤ a 3 since permutations of the a i are equivalent. Let us define ν k (p) to be the number of possible N = 0 theories with k non-zero a i (1 ≤ k ≤ 3). Since a i = (0, 0, a 3 ) is clearly equivalent to a i = (0, 0, p − a 3 ) the value of ν 1 (p) is where ⌊x⌋ is the largest integer not greater than x. For ν 2 (p) we observe that a i = (0, a 2 , a 3 ) is equivalent to a i = (0, p − a 3 , p − a 2 ). Then we may derive, taking into account Eq.(2) that, for p even while, for p odd For ν 3 (p), the counting is only slightly more intricate. There is the equivalence of a i = (a 1 , a 2 , a 3 ) with (p − a 3 , p − a 2 , p − a 1 ) as well as Eq. The next question is: of all these candidates for conformal N = 0 theories, how many if any are conformal? As a first sifting we can apply the criterion found in [6] from vanishing of the two-loop RGE β−function β (2) g = 0, for the gauge coupling. The criterion is that a 1 + a 2 = a 3 . Let us denote the number of theories fulfilling this by ν alive (p). 
If p is odd there is no contamination by self-equivalent possibilities and the result is For p even some self equivalent cases must be subtracted. The sum in Eq. (18) is 1 4 p(p − 2) and the number of self-equivalent cases to remove is ⌊p/4⌋ with the results In the last two columns of Table 1 are the values of ν alive (p) and p p ′ =2 ν alive (p ′ ). though ν alive (p) diverges; the value of the ratio is e.g. 0.28 at p = 5 and at p = 41 is 0.066.
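As a cross-check of this bookkeeping, the counting can be reproduced numerically. The short Python script below enumerates inequivalent embeddings for a given p under the readings adopted here of the (partly garbled) conditions above: N = 0 requires a_1 + a_2 + a_3 != 0 (mod p), the triples (a_1, a_2, a_3) and (p - a_3, p - a_2, p - a_1) are identified, and the two-loop criterion a_1 + a_2 = a_3 is applied to embeddings with all three a_i non-zero. These assumptions should be checked against Table 1 before being relied upon; with them, the script gives 14 inequivalent N = 0 embeddings at p = 5, of which 4 are alive (a ratio of about 0.29), and a ratio of about 0.066 at p = 41, consistent with the values quoted above:

def catalog(p):
    """Count inequivalent Z_p embeddings (a1 <= a2 <= a3, not all zero) that break
    all supersymmetry, and the subset passing the two-loop test a1 + a2 = a3.
    Assumed conditions (see text): N = 0 iff a1 + a2 + a3 != 0 (mod p); triples
    related by a -> (p - a3, p - a2, p - a1) (mod p) are identified; the 'alive'
    criterion is applied only to triples with all a_i non-zero."""
    def canon(t):
        partner = tuple(sorted((p - x) % p for x in t))
        return min(t, partner)
    n0, alive = set(), set()
    for a1 in range(p):
        for a2 in range(a1, p):
            for a3 in range(a2, p):
                t = (a1, a2, a3)
                if t == (0, 0, 0) or (a1 + a2 + a3) % p == 0:
                    continue
                c = canon(t)
                n0.add(c)
                if 0 not in t and a1 + a2 == a3:
                    alive.add(c)
    return len(n0), len(alive)

for p in (5, 7, 41):
    total, n_alive = catalog(p)
    print(p, total, n_alive, round(n_alive / total, 3))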
2014-10-01T00:00:00.000Z
1999-02-23T00:00:00.000
{ "year": 1999, "sha1": "b64b16201c174f82dde116f44cdf04a92df8e2af", "oa_license": null, "oa_url": "http://arxiv.org/pdf/hep-th/9902168", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "b64b16201c174f82dde116f44cdf04a92df8e2af", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
208176361
pes2o/s2orc
v3-fos-license
Quantum beats in the polarization of the spin-dependent photon echo from donor-bound excitons in CdTe/(Cd,Mg)Te quantum wells S. V. Poltavtsev, 2, ∗ I. A. Yugova, 3 Ya. A. Babenko, 3 I. A. Akimov, 4 D. R. Yakovlev, 4 G. Karczewski, S. Chusnutdinow, T. Wojtowicz, and M. Bayer 4 Experimentelle Physik 2, Technische Universität Dortmund, 44221 Dortmund, Germany Spin Optics Laboratory, St. Petersburg State University, 198504 St. Petersburg, Russia Physics Faculty, St. Petersburg State University, 199034, St. Petersburg, Russia Ioffe Institute, Russian Academy of Sciences, 194021 St. Petersburg, Russia Institute of Physics, Polish Academy of Sciences, PL-02668 Warsaw, Poland International Research Centre MagTop, Institute of Physics, Polish Academy of Sciences, PL-02668 Warsaw, Poland (Dated: November 21, 2019) We study the quantum beats in the polarization of the photon echo from donor-bound exciton ensembles in semiconductor quantum wells. To induce these quantum beats, a sequence composed of a circularly polarized and a linearly polarized picosecond laser pulse in combination with an external transverse magnetic field is used. This results in an oscillatory behavior of the photon echo amplitude, detected in the σ + and σ − circular polarizations, occurring with opposite phases relative to each other. The beating frequency is the sum of the Larmor frequencies of the resident electron and the heavy hole when the second pulse is polarized along the magnetic field. The beating frequency is, on the other hand, the difference of these Larmor frequencies when the second pulse is polarized orthogonal to the magnetic field. The measurement of both beating frequencies serves as a method to determine precisely the in-plane hole g factor, including its sign. We apply this technique to observe the quantum beats in the polarization of the photon echo from the donor-bound excitons in a 20-nm-thick CdTe/Cd0.76Mg0.24Te quantum well. From these quantum beats we obtain the in-plane heavy hole g factor g h = −0.143 ± 0.005. PACS numbers: Quantum beats are a phenomenon due to resonant coherent excitation of (at least) two discrete quantum mechanical states with different energies, leading to a superposition state. Quantum beats can be manifested by oscillations in the coherent optical response of the system due to interference of the excited polarizations, where the oscillation frequency corresponds to the energy difference between the levels [1]. A typical example are the oscillations observed in resonance fluorescence or other coherent spectroscopy techniques from excitons in semiconductors, which can be represented by V-type energy level arrangements with a common crystal ground state that is optically coupled to two split excited exciton states [2][3][4][5][6][7][8][9][10][11][12][13][14][15]. Quantum beats on excitons in semiconductors and their nanostructures have been observed for a large variety of excited states corresponding to the beating between, e.g., heavy-and light-hole excitons [2][3][4], the exiton states of a fine structure doublet with different spin configurations [5][6][7][8], as well as neutral and charged excitons [9][10][11][12]. Application of a magnetic field can be used to split (quasidegenerate) excitonic states by the Zeeman effect and to observe the corresponding quantum beats in the polarized optical response [5,13,14]. 
In this case, the splitting of the optically active exciton states, having opposite angular momentum projections ±1 onto the quantization axis along which also the magnetic field is applied, leads to quantum beats in the polarization rather than the intensity of the emitted light. The period of the oscilla-tions corresponds to the Zeeman splitting between the spin levels and can be used for evaluation of the g factors of the excitons. Another advantage of the Zeeman effect induced quantum beats is given by the possibility to tune and control the beating frequency by the magnetic field. Along with neutral excitons which comprise an electron-hole pair, excitonic complexes such as donorbound excitons and three-particle charged excitons (trions) can exist in quantum well (QW) and quantum dot structures [16]. Currently, these excitonic complexes attract attention for application in spintronics since they can be used as a pathway to optically control the spin state of resident carriers in semiconductor nanostructures [17][18][19][20]. The energy level structure of these complexes is different from the V-type energy level arrangement. The ground and lowest optically excited states are each represented by a Kramers doublet at zero magnetic field. Each of the two ground states is optically coupled to one of the excited states, and the two optical transitions have opposite circular polarizations, as shown by the two arms in Fig. 1(a). A magnetic field in Faraday geometry splits the degeneracy of the doublets, but does not introduce a coupling between the two arms and, therefore, no quantum beats are observed. Here, it should be noted that most of the experiments on the coherent optical response of excitonic complexes (e.g. resonant fluorescence and fourwave mixing) were performed using the Faraday geometry. Using the Voigt geometry, the magnetic field leads to doubly coupled Λ energy schemes [20][21][22]. Four-wave mixing experiments on trions in Voigt geometry were performed only recently. [21,[23][24][25] There quantum beats at the Larmor precession frequency were observed for the intensity of the photon echo. However, no polarization quantum beats were recorded in this system yet. In what follows we will demonstrate that quantum beats can be induced in the polarization of the photon echo (PE) generated by donor-bound excitons (D 0 X) by applying a sequence of circularly and linearly polarized pulses in the presence of a transverse magnetic field. These quantum beats carry information about the Larmor precession of both the resident electron and the heavy hole in D 0 X. Therefore, the quantum beats in the PE polarization represent a tool for measuring both the in-plane electron and hole g factors. In more detail, we consider the PE generated by an ensemble of D 0 X in a QW subject to a transverse magnetic field. Excitation by two short laser pulses separated by a time interval τ results in PE emission occurring at time τ after the second pulse, as shown in Fig. 1 The D 0 X can be represented by a four-level system as displayed in Fig. 1(a). The two ground states have electrons with total angular momentum projections |±1/2 on the z axis parallel to the structure growth direction. The two excited levels correspond to the D 0 X states with total angular momentum projections |±3/2 , associated with the heavy hole spin. The left and right arms of this system can be independently addressed by circularly polarized light (σ + and σ − ) as dictated by the optical selection rules. 
The external magnetic field is applied in the Voigt geometry (B||x), as illustrated in Fig. 1(c). Since the basis states have spin projections perpendicular to the magnetic field axis, the ground states |±1/2 are degenerate and mixed. The electron spins precess about the B direction at the Larmor frequency ω e = |g e |µ B B/ , where g e is the in-plane electron g factor, µ B is the Bohr magneton and is the Planck constant. Similarly, the two degenerate D 0 X states with |±3/2 are mixed and the hole spins precess at the Larmor frequency ω h = |g h |µ B B/ , where g h is the in-plane heavy hole g factor. The emergence of oscillations in the PE amplitude can be understood with the help of Fig. 2. For simplicity, we neglect here the hole spin precession (g h = 0), any dispersion of the in-plane electron g factor (∆g e = 0), and also relaxation processes. We assume that the pulse duration τ p is short compared to the Larmor precession period and the delay of the second pulse: τ p 2π/ω e and τ p τ . By the excitation with the first σ + polarized pulse, the ensemble of coherent superposition states on the lefthand side of the energy scheme corresponding to the (|+1/2 ,|+3/2 ) states is created. If the electron spins perform an integer number of full revolutions about the magnetic field until the second pulse arrival, then the co- herent ensemble has returned to the left-hand side of the scheme, as if no magnetic field was applied [See Fig. 2(a)]. The linearly polarized second pulse inverts the populations of the ground and excited states and starts the ensemble rephasing. This results in a σ + polarized PE emitted after the same number of electron spin revolutions as before the second pulse. When the second pulse arrives after an odd number of Larmor precession half-periods, it transfers the coherent ensemble from the (|−1/2 ,|+3/2 ) to the (|+1/2 ,|−3/2 ) superposition, as shown in Fig. 2(b). As a result, a σ − polarized PE is emitted. Variation of either the magnetic field strength B at τ =const or of the delay τ at B=const causes a periodic alternation of the PE circular polarization between σ + and σ − . In order to describe this effect analytically and study its consequences, we employ the spin-optical Hamiltonian, taking into account the electron and hole spin precession, in the form (1) Here, f ± = − 2e iωt d(r)E σ ± (r, t)d 3 r are the envelopes of the circularly polarized components of the light pulse with E σ ± being the electric field amplitudes with the corresponding circular polarizations, ω is the central frequency of the light pulses, which is close to the D 0 X optical transition frequency, ω 0 , and d(r) = +1/2|d + (r) |+3/2 = −1/2|d − (r) |−3/2 are the components of the electric dipole moment operator. The Hamiltonian is composed in correspondence with the level numbering given in Fig. 1(a). The coherent evolution of the D 0 X ensemble under the action of the optical and external magnetic fields can be described using the optical Bloch equations. We use the rectangular approximation for the optical pulses and neglect the magnetic field during the pulse action. The in-plane g factors of the election and hole are considered to be isotropic. We write down the PE amplitude detected in σ + or σ − circular polarization for two linear polarizations of the second pulse: H is the horizontal polarization parallel to the magnetic field (|| B) and V is the vertical polarization orthogonal to it (⊥ B) [26]: where T 2 is the pure optical dephasing time. 
For the H polarized second pulse, the PE oscillates between maximum amplitude and zero at the sum precession frequency, ω e + ω h . On the other hand, applying the V polarized second pulse results in PE oscillations at the difference precession frequency, ω e − ω h . The PE amplitudes detected in the σ ± polarizations oscillate with opposite phases. Thus, measuring the PE oscillations for the H and V polarized second pulse, e.g. in the P σ + H→σ + and P σ + V →σ + configurations, allows obtaining both the in-plane electron and hole g factors. The sign of g h can be also obtained, if the sign of g e is known. Equal signs of the two g factors result in a higher oscillation frequency in the P σ + H→σ + configuration than in the P σ + V →σ + configuration, and vice versa. We note that the theory works also for trion (negative or positive), since it is described by an energy scheme similar to Fig. 1(a). Taking into account the finite dispersions of the electron and hole in-plane g factors, ∆g e and ∆g h , leading to spin dephasing, results in an exponential damping of the oscillation amplitude ∝ exp −τ 2 µ 2 B B 2 (∆g 2 e +∆g 2 h )/2 2 for a normal distribution of the g factors. Thus, applying a sufficiently large delay τ or magnetic field B results in a non-oscillating PE signal that decays exponentially with the T 2 time constant when the delay τ is scanned. To verify experimentally these concepts, we used a 20nm-thick CdTe/Cd 0.76 Mg 0.24 Te single QW (032112B). The structure was grown by molecular-beam epitaxy on a [100]-oriented GaAs substrate. The QW layer sandwiched between 100-nm-thick Cd 0.76 Mg 0.24 Te barriers contains a background density of donors of n d < 10 10 cm −2 . The photoluminescence (PL) spectrum of this QW is shown in Fig. 4(a) and exhibits three features associated with the neutral exciton (X) at 1.601 eV, the trion (X − ) at 1.5980 eV, and the D 0 X at 1.5973 eV. The sample cooled down to about 2 K was excited by a sequence of two 2.3 ps pulses with the central energy of 1.5973 eV, whose spectrum is shown in Fig. 4(a). The pulses with the wavevectors k 1 and k 2 close to the sample normal were separated by the time delay τ = 200 ps and focused into a spot of about 250 µm in diameter. The pulse intensities were adjusted such that they correspond to about π/2 pulse area (6 pJ) based on previous studies [27]. The PE was detected in reflection geometry along the 2k 2 − k 1 direction, as illustrated in Fig. 1(c). A reference pulse delayed by 2τ = 400 ps with respect to the first pulse was used for an interferometric measurement of the PE amplitude. The reference pulse polarization was set to σ + or σ − to analyze the PE in the according polarization. The detected signal intensity is where E P E and E Ref are the PE and reference pulse amplitudes, respectively. The magnetic field was applied in the QW plane. More details of the experimental technique can be found in Ref. [28]. For measuring spin-dependent PE we have chosen the D 0 X transition since it shows better coherence properties than the X − transition. These include longer optical coherence times up to T 2 = 100 ps (∼ 80 ps for X − ) and more robust optical Rabi oscillations in the PE amplitude [27]. As a result, we are able to apply higher pulse pow-ers and to set substantially longer delay times τ (200 ps), avoiding strong excitation-induced dephasing. Measurements of the oscillating spin-dependent PE from the D 0 X ensemble in the studied QW structure are summarized in Fig. 4(b). 
The oscillations are induced by varying the magnetic field strength, resulting in a sweep of the electron and hole Larmor frequencies that scale linearly with B. These oscillations correspond well to the theoretical model: They are opposite in phase for σ ± polarized detection and exhibit different frequencies with a ratio of about 5:4 for the H and V polarized second pulse (periods of 0.208 T −1 and 0.250 T −1 ), respectively. These data were analyzed taking into account the heavy hole g factor dispersion ∆g h . We assume ∆g e ≤ 1% and thus neglect it [29]. As a result, we find the inplane electron and hole g factors to be g e = −1.583 ± 0.005 and g h = −0.143 ± 0.005, respectively. The negative sign of the electron g factor is taken here following the sign-sensitive studies on bulk CdTe [30,31]. The obtained heavy hole g factor dispersion is ∆g h ≈ 0.07. We note that scanning of the magnetic field strength instead of the pulse delay τ allows neglecting the relaxation processes such as pure optical dephasing. Moreover, the diamagnetic shift of the D 0 X transition within 0.3 meV, which we observe at B = 3 T, is also negligible. The in-plane heavy hole g factor might be notably anisotropic and thus sensitive to the in-plane orientation of the applied magnetic field with respect to the crystallographic axes. The reason for that is the presence of various interactions due to heavy and light hole splitting contributing to g h , such as strain-induced interaction or non-Zeeman interaction due to the cubic symmetry [32][33][34]. As a result, the heavy hole experiences an effective magnetic field, which may deviate from that externally applied. Thereby, the observed oscillations can be sensitive to the angle of the sample orientation around the z axis. Here we employ a fixed angle of about 90 • between the magnetic field axis and the [010] crystal direction and do not study the in-plane g h anisotropy, which is subject of future studies. To conclude, we have demonstrated that by applying a sequence of circularly and linearly polarized laser pulses to an ensemble of donor-bound excitons in a QW and varying the strength of the applied transverse magnetic field one can induce quantum beats in the photon echo polarization. As a consequence, the circular polarization of the photon echo can be switched between σ + and σ − in a controllable way by means of varying either the magnetic field strength or the pulse delay. From the observed oscillations in the photon echo amplitude one can precisely extract both the in-plane electron and hole g factors. This method can be applied in particular to systems with the hole spin weakly interacting with the magnetic field, where other optical methods such as PL spectroscopy, pump-probe Faraday rotation, or spin-flip Raman scattering are not suitable in this case.
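As a numerical cross-check of the reported values, the short calculation below reads the quoted oscillation periods as periods in magnetic field of 0.208 T and 0.250 T and assumes that one full period of the PE-amplitude oscillation corresponds to an accumulated precession phase of 2π over the delay τ = 200 ps. These are assumptions of this sketch; the published values also include the fitted g-factor dispersion, so the numbers agree only approximately:

import numpy as np

HBAR = 1.054571817e-34   # J*s
MU_B = 9.2740100783e-24  # J/T

tau = 200e-12            # pulse delay (s)
period_H = 0.208         # oscillation period in B, H-polarized second pulse (T)
period_V = 0.250         # oscillation period in B, V-polarized second pulse (T)

# one period corresponds to (g_sum or g_diff) * mu_B * dB * tau / hbar = 2*pi
g_sum  = 2 * np.pi * HBAR / (MU_B * tau * period_H)   # |g_e| + |g_h| (H: sum frequency)
g_diff = 2 * np.pi * HBAR / (MU_B * tau * period_V)   # |g_e| - |g_h| (V: difference)

g_e = (g_sum + g_diff) / 2
g_h = (g_sum - g_diff) / 2
print(f"|g_e + g_h| = {g_sum:.3f}, |g_e - g_h| = {g_diff:.3f}")
print(f"|g_e| = {g_e:.3f}, |g_h| = {g_h:.3f}")  # ~1.57 and ~0.14 vs reported 1.583 and 0.143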
2019-11-20T09:31:36.000Z
2019-11-20T00:00:00.000
{ "year": 2019, "sha1": "469d32ca1ad87da1f53791f3589d9f3a14fa3daa", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1911.08785", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "da289e1fc59cbaf5e28e756e7eef57d8f274a6f1", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
212845695
pes2o/s2orc
v3-fos-license
Delay-period activity in frontal, parietal, and occipital cortex tracks different attractor dynamics in visual working memory One important neural hallmark of working memory is persistent elevated delay-period activity in frontal and parietal cortex. In human fMRI, delay-period BOLD activity in frontal and parietal cortex increases monotonically with memory load and asymptotes at an individual’s capacity. Previous work has demonstrated that frontal and parietal delay-period activity correlates with the decline in behavioral memory precision observed with increasing memory load. However, because memory precision can be influenced by a variety of factors, it remains unclear what cognitive processes underlie persistent activity in frontal and parietal cortex. Recent psychophysical work has shown that attractor dynamics bias memory representations toward a few stable representations and reduce the effects of internal noise. From this perspective, imprecision in memory results from both drift towards stable attractor states and random diffusion. Here we asked whether delay-period BOLD activity in frontal and parietal cortex might be explained, in part, by these attractor dynamics. We analyzed data from an existing experiment in which subjects performed delayed recall for line orientation, at different loads, during fMRI scanning. We modeled subjects’ behavior using a discrete attractor model, and calculated within-subject correlation between frontal and parietal delay-period activity and estimated sources of memory error (drift and diffusion). We found that although increases in frontal and parietal activity were associated with increases in both diffusion and drift, diffusion explained the most variance in frontal and parietal delay-period activity. In comparison, a subsequent whole-brain regression analysis showed that drift rather than diffusion explained the most variance in delay-period activity in lateral occipital cortex. These results provide a new interpretation for the function of frontal, parietal, and occipital delay-period activity in working memory. Introduction 49 Working memory -the ability to mentally retain and manipulate information to guide 50 behavior -is crucial for many aspects of high-level cognition [1][2][3]. One prominent neural 51 hallmark of working memory performance is persistent elevated delay-period activity in frontal 52 and parietal cortex. Specifically, blood oxygen level-dependent (BOLD) activity in frontal and 53 parietal cortex increases monotonically with memory load and asymptotes at an individual's 54 memory capacity [4,5]. Activity in these networks is thought to reflect the engagement of 55 control [6,7]. For example, one recent study has demonstrated that persistent activity in parietal 56 cortex tracks the demands of binding stimulus content to its trial-specific context, rather than 57 memory load per se [8]. These signals have been shown to correlate with individual memory 58 capacity [4,5] and with memory precision [8][9][10]. In contrast, persistently elevated activity 59 during the delay period is often absent in occipital cortex, despite the reliable representation of 60 stimulus-specific information [8,[10][11][12][13]. 61 Recent psychophysical work has shown that inaccuracies in working memory are due to 62 both random error and systematic biases. For example, when subjects remember features drawn 63 from a uniform stimulus space, their responses are not uniform. 
Instead, the responses "cluster" 64 around a small number of specific values [14][15][16]. Further modeling work has demonstrated this 65 clustering can be explained by attractor dynamics that pull memories to specific locations in 66 mnemonic space (i.e. color memories are 'attracted' to red). While this induces systematic error 67 into the memories, it also stabilizes memories near the attractors [16]. Thus, engaging attractor 68 dynamics is thought to be especially beneficial when memory load is higher, because increased 69 noise in stimulus representations can be counteracted by increasing drift towards a few stable Modeling load-dependent BOLD activity with behavior at the ROI level 120 To relate load-dependent BOLD activity in parietal and frontal cortex to behavior, we 121 fitted linear regression models with behavioral-model fitted parameters and subject as the 122 independent variables, and BOLD activity as the dependent variable. We first used these 123 regression models to calculate within-subject correlations (ANCOVAs) between behavioral 124 parameters (drift and diffusion) and BOLD activity. The results indicated that BOLD activity in 125 both ROIs correlated significantly with diffusion (IPS diffusion: r = 0.83, p = 0.00004; PFC 126 diffusion: r = 0.79, p = 0.0002) and drift (IPS drift: r = 0.59, p = 0.012; PFC drift: r = 0.61, p = 127 0.009; Figure 2C and 2D). 128 Next, to evaluate the contribution of drift and diffusion, we found the regression model 129 that best explained BOLD activity in the two ROIs. Comparison between the four models of 130 interest indicated that Model 2 (BOLD ~ diffusion (DDM) + subject) explained the most 131 variance in BOLD activity in both IPS and PFC ROIs, and showed the best model performance 132 in terms of AIC and BIC (See Table 1 Modeling load-dependent BOLD activity with behavior at the whole-brain level 144 Lastly, we performed a whole-brain linear regression analysis to explore the relative 145 contribution of drift and diffusion to the BOLD activity of each voxel. Consistent with our ROI-146 based results, we found significant clusters in bilateral IPS and left frontal cortex with load-147 dependent BOLD activity that can be better explained by load-dependent changes in diffusion 148 Interestingly, we also observed clusters that showed higher brain-behavior correlation 153 with drift ( Figure 3A, green clusters). These clusters were most prominent in in the lateral 154 occipital cortex (LO), in superior postcentral gyrus bilaterally and in right inferior precentral 155 gyrus. Because of the known involvement of occipital cortex in visual working memory, we 156 defined two anatomical ROIs for LO (LO1 and LO2) and repeated with them the ROI-based 157 analyses as previously performed for IPS and PFC. The results of this study provide a new account of the function of load-sensitive activity 173 in IPS and PFC [4,5]. First, consistent with previous work with color working memory, here we 174 showed that attractor dynamics provided a better account of behavioral data of orientation 175 working memory, compared with classic mixture models that did not take attractor biases into 176 account. Next, and most importantly, the diffusion parameter from the discrete attractor model 177 provided the best account of the load-sensitive delay-period activity of IPS and PFC. 
In contrast, 178 in LO where aggregate levels of late delay-period activity were at or below baseline levels, load-179 sensitive fluctuation in this activity was better explained by drift. Thus, our results provide the 180 first evidence to our knowledge that load-related imprecision in working memory, known to 181 entail increases in random diffusion and in drift towards stable attractor state, engages control-182 related circuits of IPS and PFC and sensory-related circuits of LO, respectively. 183 By definition, working memory is guided by information specific to the current trial. 184 Nevertheless, working memory is also often influenced by many other factors, such as sensory 185 history [17] and prior knowledge. In working memory for color, the influence of prior 186 knowledge is reflected as clustered responses around a small number of specific color values, 187 even when the distribution of sample colors is uniform [14][15][16]. The present results show that this 188 phenomenon generalizes to another low-level visual feature, orientation, and these biases 189 increased with increasing memory load. Together with those of Panichello et al. (2019), our 190 results indicate that dynamical systems offer a useful framework within which to understand the 191 influence of trial-nonspecific factors on working memory performance. 192 Neurally, delay-period neural activity in IPS and PFC increased with increasing memory 193 load, and we showed that this load-dependent change in BOLD activity was mainly related to 194 load-dependent changes in diffusion rather than drift. Therefore, load-related activity change in 195 IPS and PFC is likely related to random diffusion processes, rather than systematic biases 196 towards attractors. The random noise could be related to noise in representations when memories 197 are held in IPS/PFC or related to greater engagement of control processes when working memory 198 has greater diffusion. For example, a recent study has found that delay-period activity in IPS is 199 more sensitive to the demands of context binding than of memory load per se. By this account, 200 increases in diffusion were likely due, at least in part, to increased interference between 201 representations of stimulus content and stimulus context, which would be expected to place 202 greater demands on a frontoparietal priority map controlling visually guided behavior [8]. In 203 comparison, load-related activity in LO was more sensitive to load-related changes in drift to 204 particular stimulus values, rather than diffusion. This result is consistent with the idea that prior 205 knowledge shapes feature tuning in visual cortex, resulting in biased tuning responses to 206 different visual features at early stages of cortical processing [18]. 207 When considering these findings, it is important to not think of these factors as working 208 in isolation. In frontoparietal cortex, for example, estimating drift is still necessary, as it allows 209 for a more accurate model of diffusion, that can better predict neural signals in these regions. 210 Moreover, it is important to note that in terms of parameter fitting, the drift parameter relies 211 inferring the shape of attractor landscape across the entire stimulus space, and therefore both the 212 number of trials and the uniformity of target distribution can have a significant impact on the 213 fitted outcome. 
It is possible that future studies acquiring more trials, and/or applying more 214 uniformly distributed targets, will lead to improved model fit of drift, and increases in the 215 variance explained by this parameter. 216 In previous studies emphasizing stimulus-specific representations of visual working 217 memory, we have argued that disparate patterns of results in frontoparietal versus occipital 218 cortex are consistent with a functional distinction between these two regions, with the former 219 more strongly associated with control and the latter with stimulus representation [8,10]. Here, 220 we see that stimulus-nonspecific factors, as reflected in the relationship between load-dependent 221 changes in behavior (drift and diffusion) and delay-period activity, are also consistent with this 222 distinction. Taken together, the results from higher-order frontal and parietal cortex and low-223 level occipital cortex suggest that imprecision in working memory can be caused by a 224 combination of effects of noise in parietal and frontal cortex, and of stimulus-related biases in 225 occipital cortex. 226 227 Method 228 Subjects 229 The results reported here are from analyses carried out on existing data collected for other 230 purposes [19,20]. Thirty individuals (10 males, mean age 20.7 ± 2.3 years) participated in the 231 behavioral session of the study, and sixteen of these (8 males, mean age 20.6 ± 1.8 years) also 232 participated in two subsequent fMRI scanning sessions. All were recruited from the University of 233 Wisconsin-Madison community. All had normal or corrected-to-normal vision, reported no 234 neurological or psychiatric disease, and provided written informed consent approved by the 235 University of Wisconsin-Madison Health Sciences Institutional Review Board. Anatomical 236 scans from the fMRI session were also screened by a neuroradiologist, and no abnormalities 237 were detected. All subjects were monetarily compensated for their participation. the same location as the sample, a response wheel centered on fixation (inner radius = 7.2°, outer 265 radius of 9.2°), and a cursor (a conventional "mouse" arrow) located at central fixation. Twenty 266 oriented lines (radius = 1.8°, width = 0.05°, ranging in orientation from 0° to 171° in steps of 9°) 267 were displayed with equal spacing along the response wheel, and subjects registered their 268 memory of the sample orientation by moving the cursor to the appropriate location on the 269 response wheel and registering that location with a button press. At the onset of the recall 270 display, the stimulus patch was rendered with a randomly determined value rendered in the 271 format of the sample stimuli, and as soon as the subject began to move the cursor (with the 272 trackball) the stimulus patch took on the value corresponding to the location on the response 273 wheel that was nearest to the cursor. Responses were required within 4 s, while the circle and 274 wheel remained on the screen. The angle of rotation of the response wheel was randomized 275 across trials, to prevent subjects from preparing their response during the delay period. 276 "3O" trials were similar to "1O" trials, except three oriented bars, each with a different 277 orientation, were displayed in three of the four possible sample locations, and, at time 12 s, the 278 sample to be recalled was indicated by the location of the stimulus circle in the recall array. 
For 279 each 3O trial, sample values were selected randomly, without replacement, from the pool of 9 280 possible orientations ( Figure 1A). 281 On "1O1C1L" trials, 1 oriented bar, 1 color patch, and 1 luminance patch were presented, 282 and during the response stage subjects were tested, unpredictably, on their memory for one of 283 these stimuli. The response wheel for color and luminance was the same size as the orientation 284 wheel, but displayed 180 possible color or luminance values. 285 The behavioral session contained two blocks of 1O and 3O trials, and three blocks of 286 1O1C1L trials. Each block contained 50 trials, and block order was counterbalanced across 287 subjects. The 1O and 3O blocks contained 25 trials each for 1O and 3O, and the 1O1C1L blocks 288 contained 17 probes of two of the three categories, and 16 of the remaining one. The selection of 289 the categories was randomized across blocks, yielding 50 trials for each category across three 290 blocks. 291 There were two fMRI scanning sessions. The first scanning session included four 18-trial 292 blocks of 9 3O trials and 9 1O1C1L trials (with 3 probes each for orientation, color, and 293 luminance), yielding a total of 36 trials for each of these load-of-3 trial types. These four blocks 294 were followed by eight 18-trial blocks of 1O trials. The second session included twelve blocks of 295 1O trials. To match the number of trials between conditions in fMRI data, two of the twenty 1O 296 blocks were randomly selected for each subject for further analyses. 297 We introduce the 1O1C1L condition here only for the completeness of experimental 298 design. All subsequent analyses focused on 1O and 3O trials for load-related changes in 299 behavioral and neural data. 300 301 Behavioral modeling 302 We fitted data from the behavioral session using a discrete attractor model [16]. This 303 circular drift-diffusion model (DDM) fits the dynamic evolution of memories with two distinct 304 processes: random noise (diffusion); and systematic drift towards one of several stable attractors. 305 Notably, when the drift parameter is removed, the remaining diffusion-only model ( Functional MRI data were preprocessed using AFNI (http://afni.nimh.nih.gov) [24]. The 325 data were first registered to the first volume of the first run, and then to the T1 volume of the first 326 scan session. Six nuisance regressors were included in GLMs to account for head motion 327 artifacts in six different directions. The data were then motion corrected, detrended (linear, 328 quadratic, cubic), converted to percent signal change, and spatially smoothed with a 4-mm 329 FWHM Gaussian kernel. For the whole-brain analysis, the data were further aligned to the MNI-330 ICBM 152 space [25]. 331 332 Region of interest (ROI) definition 333 We first defined anatomical ROIs using existing anatomical atlases, and warped them 334 back to each subject's structural scan in native space. Parietal anatomical ROIs were created by 335 extracting intraparietal sulcus (IPS) masks IPS0-5 from the probabilistic atlas of Wang and 336 colleagues [26], merging them, and collapsing over the right and left hemispheres. Lateral 337 prefrontal cortex (PFC) anatomical ROIs were created by extracting masks of the superior, 338 middle, and inferior frontal gyri supplied by AFNI, merging them, and collapsing over the right 339 and left hemispheres. 
Lateral occipital anatomical ROIs were created by extracting masks for 340 LO1 and LO2, from the probabilistic atlas of Wang and colleagues [26], merging them, and 341 collapsing over the right and left hemispheres. 342 To find the functionally activated voxels within the anatomical atlases, a conventional 343 mass-univariate general linear model (GLM) analysis was implemented in AFNI, with sample, 344 delay and probe periods of the task modeled with boxcars (4 sec, 8 sec, and 4 sec in length, 345 respectively) that were convolved with a canonical hemodynamic response function. Across the 346 whole brain, we identified the 2000 voxels displaying the strongest loading on the contrast [delay 347 -baseline], collapsing over all three conditions. The intersection of these 2000 voxels and the 348 two anatomical masks defined the two functional ROIs in subsequent analyses: the IPS ROI and 349 the PFC ROI. On average, the IPS functional ROI contained 463 ± 177 voxels, the PFC 350 functional ROI contained 314 ± 86 voxels; the two anatomical LO ROIs contained 404 ± 57 and 351 456 ± 69 voxels, respectively. 352 353 Univariate analyses 354 We calculated the percent signal change in BOLD activity relative to baseline for each 355 time point during the working memory task; baseline was chosen as the average BOLD activity 356 of the first TR of each trial. The BOLD signal change was averaged across trials within each 357 condition, and across all voxels within each ROI. Statistical significance of BOLD activity 358 against baseline was assessed using two-tailed, one-sample t-tests against 0, and the obtained p 359 values were corrected across loads and time points using FDR (False Discovery Rate) [27]. 360 Statistical difference of BOLD activity between 1O and 3O at each time point was assessed 361 using two-tailed paired t-tests, and similarly the obtained p values were FDR corrected across 362 time points. 363 364 Brain-behavior correlation and model comparisons 365 Following previous work [8-10], we used an analysis of covariance (ANCOVA) method 366 to evaluate the correlated sensitivity to trial type (i.e., 1O vs. 3O) across pairs of task-related 367 variables (i.e., BOLD activity vs. behavioral parameter). Unlike simple correlations, ANCOVA 368 accommodates the fact that each subject contributes a value for each level of trial type. It 369 removes between-subject differences and assesses evidence for "within-subject correlation" 370 between the two task-related variables [28]. 371 Mathematically, within-subject correlations were implemented as linear regression 372 models, and were calculated for drift and diffusion separately, where subject is a dummy variable 373 for trial types (1O and 3O) of each subject, and BOLD is BOLD signal from time 12 s ("late 374 delay-period" activity): Model 3, after the initial fit, the predictors in the model were examined one by one, and the 391 predictor with a p > 0.10 in the F test after removal was removed. 392 393 Whole-brain regression analysis 394 To explore brain areas that showed activity sensitive to either the drift or diffusion 395 parameter, we used a whole-brain exploratory analysis to find voxels with activity that can be 396 best explained by either drift or diffusion. To this end, all subjects' data were first normalized to 397 the MNI-ICBM 152 space [25], and for each voxel we fit Models 1 and 2 to the BOLD activity 398 of that voxel. 
The model with a higher adjusted R 2 for each voxel was selected as the best fitting 399 for that voxel, and we used the p-value of the selected model (F-test on regression vs. constant 400 model) for statistical significance. To correct for multiple comparisons, we applied the False 401 Discovery Rate (FDR) method to the p-values of the selected model across voxels. To avoid 402 overinterpretation, we also applied a threshold in model selection using BIC [29], such that only 403 voxels with a significant p-value after correction, and in which the drift or diffusion model 404 outperformed the other by a BIC >= 2, remained in the final report. Therefore, we identified 405 voxels with load-dependent BOLD activity that could be better explained by load-dependent 406 changes in drift, or in diffusion, at the whole-brain level. Results from the whole-brain analysis 407 were displayed on the cortical surface reconstructed with FreeSurfer 408 (http://surfer.nmr.mgh.harvard.edu; [30,31]) and visualized with SUMA in AFNI 409 respectively. Error bars indicate ± 1 SEM. C. Within-subject correlations between behavioral 428 parameter from DDM (drift and diffusion plotted separately) and IPS BOLD activity, at "late 429 delay" time point (12 s). D. within-subject correlations between behavioral parameter (drift or 430 diffusion) and PFC BOLD activity. In each plot, data from each subject are plotted in a different 431 color, and the "1" and "3" symbols correspond to values from 1O and 3O trials, respectively. 432 Lines illustrate the best fit of the group-level linear trend (i.e., the within-subject correlation) in 433 relation to individual subject data. 434 parameter from DDM (drift and diffusion plotted separately) and LO1 BOLD activity, at "late 448 delay" time point (12 s). E. within-subject correlations between behavioral parameter (drift or 449 diffusion) and LO2 BOLD activity. In each plot, data from each subject are plotted in a different 450 color, and the "1" and "3" symbols correspond to values from 1O and 3O trials, respectively. 451 Lines illustrate the best fit of the group-level linear trend (i.e., the within-subject correlation) in 452 relation to individual subject data. 453 454
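A schematic implementation of the voxel-wise model comparison described above might look as follows. The data-frame layout, the pseudo-data, and the use of statsmodels are illustrative choices for this sketch (the actual analysis was run on preprocessed AFNI outputs): each voxel's late-delay BOLD values are regressed on drift or on diffusion plus a subject factor, the model with the higher adjusted R-squared is retained, a BIC margin of at least 2 is required, and the p-values of the selected models are FDR-corrected across voxels:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.multitest import multipletests

def compare_models(df):
    """df has one row per subject x load condition (1O, 3O) with columns:
    'bold' (late-delay BOLD for this voxel), 'drift', 'diffusion', 'subject'."""
    m_drift = smf.ols("bold ~ drift + C(subject)", data=df).fit()
    m_diff = smf.ols("bold ~ diffusion + C(subject)", data=df).fit()
    if m_diff.rsquared_adj > m_drift.rsquared_adj:
        best, label = m_diff, "diffusion"
    else:
        best, label = m_drift, "drift"
    bic_margin = abs(m_drift.bic - m_diff.bic)
    return label, best.f_pvalue, bic_margin

# pseudo-data: 16 subjects x 2 loads, for a handful of "voxels"
rng = np.random.default_rng(3)
subjects = np.repeat(np.arange(16), 2)
load = np.tile([1, 3], 16)
drift = 0.02 * load + rng.normal(0, 0.005, 32)
diffusion = 0.5 * load + rng.normal(0, 0.1, 32)
results = []
for voxel in range(5):
    bold = 0.3 * diffusion + rng.normal(0, 0.1, 32)   # toy diffusion-driven voxel
    df = pd.DataFrame(dict(bold=bold, drift=drift, diffusion=diffusion, subject=subjects))
    results.append(compare_models(df))

labels, pvals, margins = zip(*results)
reject, p_fdr, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
keep = [lab if (r and m >= 2) else None for lab, r, m in zip(labels, reject, margins)]
print(keep)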
2020-02-20T09:12:38.165Z
2020-02-13T00:00:00.000
{ "year": 2020, "sha1": "293b35e053a18df886667f7e4f7dfc8e7ed83fbd", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosbiology/article/file?id=10.1371/journal.pbio.3000854&type=printable", "oa_status": "GREEN", "pdf_src": "BioRxiv", "pdf_hash": "d67faf4694ebc166d1199a1c33d556210829863b", "s2fieldsofstudy": [ "Psychology", "Biology" ], "extfieldsofstudy": [ "Biology", "Psychology" ] }
237966513
pes2o/s2orc
v3-fos-license
Immediate Denture that Act as a Bandage: A Case Report An immediate complete denture is a replacement of the lost natural teeth and associated tissues which is inserted into the patients mouth immediately following the extraction of remaining teeth. The transition from dentulism to edentulism should be psychologically atraumatic as far as possible. The case presented here are interim (transitional or non transitional) immediate complete denture was planned after extraction of remaining natural teeth. INTRODUCTION The immediate denture is a dental prosthesis that is constructed to replace the lost dentition, associated structures of the maxillae and mandible and inserted immediately following removal of the remaining teeth. There are two types of immediate dentures in the literature: conventional immediate dentures and interim immediate dentures [1]. In the traditional type, the conventional immediate denture is fabricated to immediately place after the extraction of natural teeth and can be used as the definitive or longterm prosthesis. The interim type is used for a short time after tooth extraction. After the achievement of healing period, the immediate denture may be relined or replaced with the newly fabricated final denture [2]. The interim immediate denture show numerous advantages like preservation of facial appearance and vertical height, muscular tone, phonetic and reduction of postextraction pain [3]. One of the most important issues to be considered in immediate denture fabrication may be the difficulty to assess the occlusal vertical dimension (OVD) and centric relation after extraction of the posterior teeth. CASE REPORT A 65 year old patient referred to the department of prosthodontics for replacement of missing teeth in lower right and left back region of the jaw and want a complete upper denture .On intra oral examination patient presented with retained lower anterior and lower left 1 st premolar and which are periodontally unfavourable and a completely edentulous maxilla with no abnormality detected. As the teeth present are not periodontally sound so extraction of the teeth and fabrication of the conventional immediate denture was advised. Extraction of remaining teeth was planned followed by delivery of an immediate denture. Procedure The case was proceeded by taking the case history of the patient (Figure 1 and 2). Thereafter, preliminary impressions were made with irreversible hydrocolloid and poured with dental stone to obtain the primary cast. Maxillary and Mandibular special trays were made after applying separating medium on the cast. Maxillary and Mandibular border moulding was done using low fusing impression compound followed by final impression with zinc oxide eugenol paste for maxillary arch. For Mandible, dual impression was made with irreversible hydrocolloid and cast were poured. Jaw relations were recorded and the record bases were sealed with bite registration paste followed by articulation on the mean value articulator (Figure-3). Shade selection, teeth arrangement and try-in was done in conventional manner (Figure-4). On the articulator, alternate teeth was cut away on the cast and the labial portion of each root were excavated to a depth of 1-2 mm on the labial side and flush with the gingival margin of the lingual or palatal side. The selected teeth were placed in their specific positions after modification (Figure-5). The mandibular anterior teeth were extracted in toto after attaining informed consent of patient and sutures were placed. 
The immediate denture acts as a stent on the extraction socket, which helps in healing. The patient was advised to wear the denture overnight and was recalled 24 hours after insertion. The patient complained of ulceration in the mylohyoid region and the maxillary palatal region, and the required trimming was done. The patient was advised to continue wearing the denture and was recalled for suture removal after a week. Further instructions were given, and the patient was recalled after 6 months to check the stability and retention of both dentures, at which point relining was done (Table 1). Table 1 summarizes the clinical steps: alternate teeth were cut on the cast; the labial portion of each root was excavated to a depth of 1-2 mm; the selected teeth were placed in their specific positions; the mandibular anterior teeth were extracted; the maxillary and mandibular dentures were inserted after adjustments; the patient was advised to wear the denture overnight and was recalled 24 hours after insertion; sutures were removed after a week; and the patient was recalled after 6 months to check the stability and retention of both dentures, when relining was done. DISCUSSION When removal of all teeth becomes necessary, an immediate denture is an important treatment modality. Immediate dentures have many advantages: they act as a matrix that controls haemorrhage, prevents contamination and provides a protective covering over the wounds. An immediate denture restores phonetics and masticatory function and facilitates the transition to the edentulous state [4]. All in all, it helps to boost the patient's confidence even after extraction of all teeth. CONCLUSION In the era of implants and immediate implant treatment, immediate complete denture treatment should still be considered an important treatment modality. A detailed extraoral and intraoral evaluation and correct treatment planning will lead to a successful replacement of the missing structures with immediate dentures that are functionally acceptable and pleasing to the patient.
2021-03-12T09:33:41.837Z
2021-03-10T00:00:00.000
{ "year": 2021, "sha1": "4636967739d579da7b2fd7c7e76d141ea0ef112d", "oa_license": null, "oa_url": "https://doi.org/10.36348/sjodr.2021.v06i03.002", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "4636967739d579da7b2fd7c7e76d141ea0ef112d", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
256380485
pes2o/s2orc
v3-fos-license
Characterization of a novel Jerseyvirus phage T102 and its inhibition effect on biofilms of multidrug-resistant Salmonella Highlights • A broad-spectrum Salmonella Siphophage T102 was isolated and characterized.• Phage T102 possessed a short latent period, adequate stability, and large burst size.• Phage T102 exhibited strong lytic activity against multidrug-resistant Salmonella.• Phage T102 contained no genes implicated in virulence and drug resistance.• Phage T102 prevented biofilm formation and eliminated formed biofilms. Introduction Salmonella spp. is an important zoonotic pathogen with an enormous economic impact and remains the major cause of food poisoning reported worldwide (Abebe et al., 2020). Over the past few decades, there has been an increase in antibiotic-resistant Salmonella due to the widespread use of antibiotics in animal feed and other production processes (Bajpai et al., 2012). Antibiotic-resistant Salmonella caused more than 100,000 Salmonella infections each year, including those resistant to very common clinical drugs such as ceftriaxone and ciprofloxacin (Divek et al., 2018;Eng et al., 2015). Salmonella contamination has been found in various foods, such as eggs, egg products, meats, cheese, fresh fruits, and vegetables (EFSA, 2022;Paudyal et al., 2017). Furthermore, Salmonella has been reported to have the ability to attach, colonize, and form biofilms on various food equipment surfaces and fresh produce. Therefore, preventing Salmonella and its biofilm formation capacity is particularly important in the food industry. Biofilms are complex, sessile microbial communities living on surfaces and interfaces where the organisms produce a matrix of extracellular polymeric substances (EPS) (Donlan, 2009). As a consequence, organisms within the biofilm are more tolerant of environmental stress than planktonic cells, which makes biofilm difficult to inhibit or eliminate (Donlan, 2009;Zhang et al., 2020). Despite various chemical, physical and biological strategies to control biofilms, treatment of bacteria in biofilms is usually ineffective since antibacterial agents cannot reach their surfaces (Jahid and Sang, 2012). Bacteriophages (phages) have been increasingly recognized as a rediscovered agent to decontaminate pathogens in the food industry with the potential to control biofilms (Milho et al., 2021). Phages are viruses that specifically infect bacteria. Phages are widely isolated in nature and are commensals of humans, with the number of phage particles on the earth being about 10 31 (Brüssow and Hendrix, 2002;Dion et al., 2020). Among them, virulent phages replicate only through the lytic cycle, producing new descendant phages and causing bacterial lysis. The phage genome size ranges from 3.3 to 500 kb. Despite the significant diversity of phages at the nucleotide sequence level, the structural proteins that form viral particles remain highly similar and conserved (Dion et al., 2020;Switt et al., 2013). The order of the phage genome can be divided into host adsorption-associated genes, DNA replication-associated genes, biosynthesis-associated genes, viral assembly-associated genes, lysis-associated genes, and morpho logy-associated genes (Bertin et al., 2011). Phage depolymerase plays an important role in the degradation of the EPS substrate, promoting phage penetration into the biofilm and leading to bacterial cell lysis. Thus, phages have been considered promising agents to control bacterial biofilms (Milho et al., 2021). 
Phage treatment has been displayed to be highly efficient in reducing bacterial biofilms formed by Salmonella, Escherichia coli (E. coli), Listeria monocytogenes, and Vibrio parahaemolyticus (Gong and Jiang, 2017;Jiang et al., 2021;Yin et al., 2018;Zhang et al., 2020;Zhou et al., 2020), as previously reported. Phages as antibacterial agents can control pathogens in raw materials, clean production environments and processing equipment, extend food storage, and sterilize fresh fruit and vegetables (Połaska and Sokołowska, 2019). Even though studies have demonstrated the potential of using phages to control infection, up to now, few studies have focused on exploring the inhibition effect of phages with a broad host spectrum, high specificity, and clear genomic background to the biofilm. In this study, a novel Siphophage T102 highly specific to Salmonella was isolated from household sewage in Wuhan, China. Then, the biological characteristics of phage T102 were studied. Phage T102 could lyse different serotypes of Salmonella, including multidrug-resistant Salmonella. The structural protein profile of phage T102 was further determined by SDS-PAGE and UPLC-MS/MS analysis. The complete genome sequence of phage T102 was analyzed and has been deposited in NCBI under GenBank accession number ON996339. Finally, the efficacy of phage T102 to inhibit and eliminate biofilm formation of multi-drug resistance Salmonella strain was evaluated and confirmed by FL-SEM observation on spiked lettuce. These findings suggested that T102 is a new lytic phage with the potential as an antibacterial agent against multidrug-resistant Salmonella and their biofilms. Bacterial strains and growth conditions Salmonella strains, described in Table S1, were streaked onto xylose lysine deoxycholate agar (XLD) medium and grown for 12 h at 37 • C. The other bacteria (Table S1) were streaked onto the Luria-Bertani agar (LA) plate and grown at 37 • C for 10 h. Then, a single colony was picked and incubated in Luria-Bertani (LB) broth at 37 • C for 12 h. XLD, LB, and LA broth were purchased from Hopevio (Qingdao, China). The phages were isolated and enriched in LB broth. After enrichment, the phages were re-suspended in SM buffer (2 g/L MgSO 4 ⋅7H 2 O, 5.8 g/L NaCl, and l mol/L Tris-HCl pH 7.5) for further use. The double-layer agar plates, with LA broth as the bottom layer and LB broth with 0.7% agar as the overlay, were used to determine the phage titer (Kutter and Sulakvelidze, 2004). Isolation and purification of bacteriophage T102 Phage T102 was isolated using Salmonella Typhimurium ATCC 14028 as a host, from household sewage, in Wuhan, China. Phage stock was prepared using the double-layer method (Andreatti Filho et al., 2007). Briefly, the samples were centrifuged at 8000 × g for 15 min and then filtered through a sterile pore diameter of 0.22 μm (Millipore, USA). The processed sample was incubated with its host culture at 37 • C for 16 ± 2 h. After culturing, the bacteria were centrifuged with 8000 × g for 15 min, and the bacteria were removed by a sterile filter with a pore diameter of 0.22 μm. The bacterial solution was inoculated on a double-layer agar plate and cultured at 37 • C for 24 h. The samples with clear spots on soft agar were considered positive. After the positive samples were diluted 10 times with LB broth, a single plaque was selected and purified three to five times. The bacteriophages were preserved at 4 • C for one month and preserved at -80 • C in 20% glycerol. 
Lytic spectra and efficiency of plating (EOP) of the phage The host spectrum was determined by the spot method (Clokie and Kropinski, 2009). The strains (Table S1) cultured to the logarithmic phase were added to the warm semi-solid broth, mixed, and poured onto the pre-prepared LA plate. After solidification, 5 μL of bacteriophage with a titer of 10^9 PFU/mL was dripped onto the surface of the upper plate, dried and cultured in an incubator at 37 °C for 4-6 h, and the plaques were observed. To determine the efficiency of plating (EOP) for phage T102, 100 μL of 10^2 PFU/mL phage was incubated with each of 24 strains (Salmonella that can be lysed by phage T102), and the phage titer (PFU/mL) was determined for each strain by the double-layer plate method. The EOP was calculated by dividing the phage titer obtained from each strain by the titer derived from the original host (Ren et al., 2022). Transmission electron microscopy (TEM) To identify the morphology of phage T102, the phage (10^10 PFU/mL) was negatively stained with 2% aqueous uranyl acetate (pH 4.0) on carbon-coated copper grids and observed by biological transmission electron microscopy (Hitachi H-7600, Tokyo, Japan) at an accelerating voltage of 80 kV. Its size was measured with the software Digital Micrograph Demo 3.9.1. Lytic characteristics of bacteriophage T102 The titer of phage was determined by the double-layer plate method, calculated as: titer of bacteriophage (PFU/mL) = number of plaques × dilution factor × 10. The multiplicity of infection (MOI) is the ratio of the number of phages to the number of host bacteria at the time of initial infection. The phage suspensions were mixed with the host bacterial solution at certain MOI values, incubated at 37 °C for 3.5 h, and centrifuged at 8000 r/min for 10 min to determine phage titers (Fridholm and Everitt, 2005). The adsorption rate of T102 was determined as previously described. Phage T102 was added to each diluted culture at an optimal multiplicity of infection (MOI) of 0.001 PFU/CFU, the mixed cultures were incubated at 37 °C without shaking, and phage titers were then determined (Li et al., 2020). Single-step growth curve analysis was performed as previously described. Phage T102 was added to the bacterial culture (10^8 CFU/mL, MOI = 0.001 PFU/CFU) and adsorbed at 37 °C for 15 min. Then, the pellets were resuspended in fresh LB broth and incubated with shaking (180 r/min) at 37 °C (Huang et al., 2018; López-Cuevas et al., 2011). Samples were collected every 10 min to determine phage titers. The burst size of the phage was estimated by dividing the final free phage particle number by the initial phage number. The lytic ability of the phage was determined in a 96-well microtiter plate by measuring the optical density (OD 600 nm) every hour at various multiplicities of infection (MOI) from 0.001 to 100 PFU/CFU. The negative control consisted of a mixture of an equal volume of host strain suspension and LB broth. The microtiter plates containing the test and control groups were incubated at 37 °C on an orbital shaker at 160 rpm for 6 h. Subsequently, the OD 600 nm value was measured using a microplate reader (Infinite M200 Pro, Tecan, Switzerland), with a set time of 20 ms, at 37 °C (Fridholm and Everitt, 2005). pH and temperature stability of phage T102 For the analysis of the thermal stability of phage T102, LB broth was preheated and maintained at different temperatures from 30 °C to 90 °C for 1 h.
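The plaque-count arithmetic described above (phage titer, EOP and burst size) can be captured in a few helper functions. The following Python sketch is purely illustrative and is not part of the study; the plated volume of 0.1 mL is an assumption introduced to explain the factor of 10 in the titer formula, and all numbers in the example are hypothetical.

```python
# Illustrative helpers for the plaque-count arithmetic described above.
# The factor of 10 in the titer formula is assumed to come from plating
# 0.1 mL of the diluted phage suspension; adjust if a different volume is used.

def titer_pfu_per_ml(plaque_count, dilution_factor, plated_volume_ml=0.1):
    """Titer (PFU/mL) = number of plaques x dilution factor / plated volume."""
    return plaque_count * dilution_factor / plated_volume_ml

def efficiency_of_plating(titer_on_test_strain, titer_on_original_host):
    """EOP = titer on the test strain / titer on the original host."""
    return titer_on_test_strain / titer_on_original_host

def burst_size(final_free_phage, initial_phage):
    """Burst size = final free phage particles / initial phage particles."""
    return final_free_phage / initial_phage

if __name__ == "__main__":
    # Hypothetical numbers, for illustration only.
    print(titer_pfu_per_ml(plaque_count=120, dilution_factor=1e7))  # 1.2e10 PFU/mL
    print(efficiency_of_plating(3.0e8, 1.2e10))                     # 0.025
    print(burst_size(final_free_phage=1.6e7, initial_phage=1e5))    # 160.0
```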
Then, the phage T102 (10 8 PFU/mL) was added to the preheated medium and incubated for 30 and 60 min, respectively. The pH stability of T102 was determined by diluting the phage lysate (10 8 PFU/mL) in buffered peptone water (BPW) at a pH range of 2-13 and incubating at 37 • C for 2 h (Zhou et al., 2015). The thermal/pH stability of phage T102 was determined by bacteriophage plaques, which were formed on double-layer agar plates. According to the classification of viruses by ICTV (The International Committee on Taxonomy of Viruses) and BLASTn (https://blast.ncbi.nl m.nih.gov/Blast.cgi) of the NCBI database, phage with higher similarity was selected (Dereeper et al., 2010;Lavigne et al., 2008). The genome of phage T102 was compared and plotted at the nucleotide level with the whole genomes of six phage strains (T102, PIZ SAE-01E2, SETP3, ST1, LPSE1, SE2) using the software BRIG 0.95 (http://brig.sourceforge. net/), with a cut-off value of 50% for the identity percentage (Alikhan et al., 2011). The software Easyfig 2.2.5 (http://easyfig.sourcefo rge) was used to analyze the homology of the three phage strains (T102, PIZ SAE-01E2, and LPSE1) at an amino acid level, and the comparison was set to tBLASTx. The phylogenetic tree was constructed based on the protein sequence of terminase large subunit using the software MEGA X version 10.2.4 with the Neighbor-Joining method and 500 bootstraps (Kumar et al., 2018). Structural proteins analysis of phage T102 Phages T102 purified by a continuous CsCl-gradient (10 11 PFU/mL) were analyzed for structural proteins by standard sodium dodecyl sulfate-12% polyacrylamide gel electrophoresis (SDS-12% PAGE). Protein bands were detected by silver staining. Ultracentrifuge (Optima XPN, Beckman Coulter, USA) was used to concentrate the phage suspensions prior to identifying the phage protein. The highly concentrated phage sample was sonicated for 5 s (output control = 4, duty cycle = 40%) with an Ultrasonic Sonifier W-350 (Branson Sonic Power Co.). Phage T102 protein was digested with trypsin, and identified by tandem mass spectrometry (MS/MS) in Q ExactiveTM Plus (Thermo) coupled online to the UPLC (Gu et al., 2019;Shevchenko et al., 1996). Inhibition and reduction of biofilms of multidrug-resistant Salmonella by phage T102 2.6.1. Biofilm formation and assay Two S. Typhimurium strains were selected for the biofilm assay, including Salmonella (S. Typhimurium ATCC 14028) and multidrugresistant S. Typhimurium 13337, which was isolated from supermarket-bought sausages, and its resistance profile was determined (Table S1). For biofilm formation, S. Typhimurium and LB broth were mixed at a 1:1 ratio. Biofilm formations were performed on 96-well plates and were mass determined by crystal violet staining, as described previously (Miyamoto et al., 2011), with some modifications. Briefly, each Salmonella strain was grown overnight at 37 • C and diluted to 10 9 CFU/mL. The suspensions were added to 96-well plates at 100 μL/well and 100 μL/well LB broth, then grown at 37 • C for 24 h, without agitation. The supernatant in each of the wells was removed by suction. Wells retaining the bacterial cells were washed twice with sterile PBS buffer. After the wells were dried, 200 μL of 1% crystal violet solution was added to each well and allowed to react at room temperature for 30 min. After the supernatant was removed, each well was washed twice with sterile water and dried. To extract any crystal violet retained in the well, 200 μL of 95% ethanol was added to each well. 
The absorbance was measured at 595 nm with Multiskan SkyHigh Microplate Spectrophotometer (Thermo Fisher Scientific, USA). Biofilm mass is expressed by the value of absorbance at 595 nm. Assessment of the ability of phage T102 to inhibit biofilm To investigate the ability of phage T102 to inhibit the biofilm of Salmonella strains, bacterial cells (10 9 CFU/mL) of Salmonella and phage T102 (10 6 PFU/mL) was mixed at a 1:1 ratio (MOI = 0.001) to make working cultures. For controls, LB broth was used instead of phage suspension. The cultures were added to 96-well plates and incubated at 37 • C for 24 h, without agitation. Biofilm mass and viable counts were determined at 6, 12, and 24 h, respectively. Treatment of biofilm with phage T102 on the microplate Biofilms of Salmonella were prepared by incubation at 37 • C for 24 h, as described above. Following the formation of biofilm, supernatants were removed, and wells were washed twice with PBS (Kelly et al., 2012). Then refilled with 200 μL of phage T102 solution (10 6 PFU/mL). For control, LB broth was used. Biofilm mass and viable counts in the biofilm were determined at 2, 6, and 8 h, respectively. Determination of viable counts To determine the viable counts of biofilm cells, the supernatants were carefully removed, and the wells were washed twice with PBS without disturbing cells at the bottom. Biofilm cells were harvested by scraping the surface of the wells with pipette tips and rinsing with PBS (Kelly et al., 2012). The biofilm-cell samples were serially diluted with PBS and viable counts were determined by LB agar. Biocontrol of biofilm of multidrug-resistant Salmonella on spiked lettuce 2.7.1. Biofilm formation on lettuce and its reduction by phage T102 Leaf inoculation and biofilm formation on lettuce was performed as previously described (Patel and Sharma, 2010), with some modifications. The bacterial inoculum was prepared in PBS with a final concentration of ~10 9 CFU/mL. Lettuce was purchased from a local supermarket. The leaves were aseptically cut into 3 × 3 cm 2 pieces with sterile scissors. The lettuce coupons were washed with sterile distilled water and treated with UV in a biosafety cabinet for 1 h on each side. Lettuce pieces were aseptically submerged into bacterial suspensions for 120 s and then air-dried under laminar airflow for 20 min. Individual samples were stored for 24 h at 37 • C and sealed in a petri dish for biofilm formation. One sample was used as an untreated control. After incubation, lettuce pieces were washed twice in a sterile beaker. Lettuce pieces were submerged in suspensions containing 10 9 PFU/mL of phage for 10 min. The groups were incubated for 2 h at 37 • C. After incubation, each coupon was submerged in 2 mL of PBS buffer in a sterile tube and processed using a sterile grinding rod for 5 min to release the biofilm-forming bacteria from the lettuce leaf. 1 mL of homogenates was centrifuged for 90 s at 10,000 × g at 25 • C. Cell pellets were resuspended in PBS buffer and enumeration of bacteria was carried out by serial dilution and spread plating on LB agar. Reduction of biofilms on spiked lettuce by field emission scanning electron microscopy (FE-SEM) observation Biofilm on lettuce samples of phage-treated and control groups was prepared as previously described . Samples were fixed with 2.5% glutaraldehyde in PBS at room temperature for 4 h. 
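The viable-count and log-reduction calculations used throughout the biofilm experiments are straightforward to express in code. The short Python sketch below is illustrative only; the plated volume and colony numbers are hypothetical and are not values from this study.

```python
import math

def cfu_per_ml(colonies, dilution_factor, plated_volume_ml=0.1):
    """Viable count (CFU/mL) from a countable plate of a serial dilution."""
    return colonies * dilution_factor / plated_volume_ml

def log10_reduction(control_count, treated_count):
    """Reduction in log10 CFU between control and phage-treated samples."""
    return math.log10(control_count) - math.log10(treated_count)

# Hypothetical example: control biofilm at 2.0e7 CFU/well, phage-treated at 1.3e6 CFU/well
print(round(log10_reduction(2.0e7, 1.3e6), 2))  # ~1.19 log10 reduction
```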
The coupons were then serially treated with ethanol (50, 60, 70, 80, and 90% for 15 min each, and 100% two times for 15 min each) and successively dehydrated by soaking in 33, 50, 66, and 100% hexamethyldisilazane in ethanol for 15 min each. After dehydration, samples were sputter-coated with platinum and visualized using FE-SEM (Hitachi/Baltec, S-4700). Statistical analysis Statistical significance was assessed by Student's T-tests. The statistical software R version 3.5.3. All the experiments were performed in triplicate. A p-value <0.05 was considered statistically significant. Phage morphology and host range The phage T102 was isolated from a sewage sample and tested for lytic activity against S. Typhimurium ATCC 14028. It formed small, circular, clear plaques on the bacterial lawn of S. Typhimurium ATCC 14028 (Fig. 1A). Electron microscopy of phage T102 (Fig. 1B) indicated that the bacteriophage had a head with a long and non-contractile tail structure and belonged to the Guernseyvirinae subfamily. The head of T102 was icosahedral with a diameter of 48.02 ± 0.35 nm. The tail length was approximately 96.48 ± 0.28 nm. As shown in Table S1, T102 was able to lyse 24 strains (24/ 42=57.14%) of different Salmonella serovars, including S. Typhimurium, Salmonella Enteritidis, Salmonella Pullorum, Salmonella Javiana, Salmonella Dublin, and Salmonella Agona. It was especially able to lyse 14 drug-resistant Salmonella strains (14/23=60.87%). In addition, T102 could not lyse E.coli, Vibrio parahemolyticus, and Gram-positive strains, such as Listeria monocytogenes and Staphylococcus aureus. The efficiency of plating (EOP) of the phage was measured across the panel of 24 Salmonella strains (Table S2). The strict host range of T102 is consistent with previous phages isolated by different researchers (Gu et al., 2019;Hooton et al., 2011;Li et al., 2016;Santos et al., 2011), demonstrating that phage T102 is a well-targeted candidate for application in control of Salmonella. Multiplicity of infection (MOI) analysis The phage titers were determined by the double-layer plate method after mixing different titers of phage T102 with host S. Typhimurium ATCC 14028 and cultured for 3.5 h (Fig. 2A). The phage titer of T102 initially increased slowly and eventually reached a maximum, with the highest value obtained with a starting MOI of 0.001 PFU/CFU. The results indicated that the optimal multiplicity of infection of the bacteriophage T102 was 0.001 PFU/CFU. The multiplicity of infection represents the ability of the phage to infect the host bacterium, with smaller values representing a higher infection capacity of the phage. At this optimal multiplicity of infection, phage T102 was consistent with Salmonella phages (vB_Sen-TO17, vB_Sen-E22, and LPST144) with a stronger ability to infect host cells (Kosznik-Kwasnicka et al., 2020;Yang et al., 2020). Absorption efficiency The first step of phage infection of the host cell is surface absorption on the bacterial surface. As shown in Fig. 2B, the surface absorption ability of phage T102 on S. Typhimurium ATCC 14028 showed an increasing trend from 0 min to 27 min and reached a peak of 62.95% with 15 min of incubation. With the absorption efficiency, phage T102 is consistent with Jerseyvirus phages UPWr_S1 -S5 (Kuzminska-Bajor et al., 2021) with a short time to infect host cells. The short duration of surface absorption and better surface absorption rate may improve the survival and lytic ability of the phage (Abedon, 1989;Lenski and Levin, 1985). 
Single-step growth curve The absorption time of T102 was 15 min for S. Typhimurium ATCC 14028 ( (Fig. 2B), which was used as the basis for single-step growth curve analysis. Phage T102 exhibited a latent period of 30 min and the burst period was 50 min in the one-step growth curve results (Fig. 2C). The burst size was calculated as approximately 161.78 ± 62.39 PFU/ CFU. The incubation period and outbreak size of the phage are key factors in considering whether they can be selected for use in biocontrol experiments (Mateus et al., 2014). It has been proved that a large burst size and short latent period are positively correlated with the effective inactivation of bacteria (Abedon et al., 2001). Lytic capacity of phage T102 The lytic ability of bacteriophage T102 was determined in a 96-well plate by measuring the optical density for S. Typhimurium ATCC 14028 at different multiplicity of infection (MOI) from 0.001 to 100 PFU/CFU (Fig. 2D). The growth of the host cell was apparently inhibited (OD 600 nm < 0.15) within 5 h at all MOIs. However, the antibacterial ability of MOI 0.001 PFU/CFU was best compared with others after incubating for 5 h. Moreover, the recovery of bacterial growth was observed after 5 h, which may be caused by a lower growth rate of phages or a possible emergence of bacterial resistance to phages. In fact, one obvious limitation of phage therapy is the inevitable evolution of phage resistance in bacteria. It is important to collect as much information about phages as possible to select the best candidate when predicting bacteriophage therapy, and maintain a balance between killing efficiency, sensitivity to cell defense, and potential antagonistic interactions between therapeutic phages (Kortright et al., 2019). Higher phage MOI may increase the likelihood of bacterial resistance to phage. In the present study, a lower MOI (0.001 PFU/CFU) of T102 produced complete inhibition of S. Typhimurium growth within the first 5 h, exhibiting the similar lytic capacity to phages UPWr_S1 -S5 (Kuzminska-Bajor et al., 2021) and PVP-SE2 (Sillankorva et al., 2010). Stability of phage T102 The thermal and pH stability was determined by using phage titer under different temperatures (Fig. 3A) and pHs (Fig. 3B). Phage T102 was kept stable from 30 • C to 60 • C, and the decrease in the phage titer has no significance with the initial titer at this temperature range. Phage titer gently decreased when the phage was treated at 70 • C or 80 • C for 30 min, respectively. However, the phage titer showed no changes compared to 30 min and 60 min at 70 • C or 80 • C. The high-temperature tolerance of the phage has been previously related to the formation of cross-links within phage capsid proteins, the improvement of hydrophobicity, and the strong interaction between the mutated gene G protein and other capsid proteins, which can lead to phage stabilization against thermal denaturation (Caldeira and Peabody, 2007;Kadowaki et al., 1987). Regarding pH stability, phage T102 was stable in a pH range of 3-12. The phage was not detected at pH 2 or pH 13. In the present study, phage T102 revealed significant stability at temperatures below 80 • C and pH 3.0-12.0, indicating that these conditions did not affect the activity of the phage. Phage T102 has better stability than the Achromobacter phage phiAxp-3 and E.coli phage vB_EcoP-EG1 (Gu et al., 2019). The phage T102 had excellent stability under various conditions showing potential value for further study. 
Features of the phage T102 genome The genome of phage T102 was a dsDNA-linear genome with a genome size of 41,941 bp with an average G + C content of 49.7%, which was almost the same as the genome of the Salmonella Guernseyvirinae subfamily phage strains. It encoded 54 open reading frames (ORFs) and no tRNAs. 29 ORFs were predicted to have specific functions, whereas the others were unknown, based on Salmonella-infecting phage genome information in public genome databases. On the basis of the genome annotation results, the functional ORFs were categorized into four groups: nucleic acid metabolism and DNA packaging proteins, structure proteins except for tail proteins, tail relative proteins, and cell lysis proteins (Fig. 4A). Based on the locations of the functional genes, the genome of phage T102 showed a modular genome structure and genes with associated functions were mainly located in the same gene cluster. In addition, the predictions by the VFDB database and CARD database indicated that the genome did not encode any bacterial virulence-related proteins or antimicrobial resistance factors. Phylogenetic and comparative genomic analysis of phage T102 BLASTn alignment of phage T102 genome indicated that the phage shared >80% similarity with other Guernseyvirinae subfamilies. Therefore, the terminase large subunit protein sequences of the Guernseyvirinae subfamily of BLASTp alignment and the International Committee on Taxonomy of Viruses (ICTV) were chosen to construct the phylogenetic tree. The neighbor-joining method of these proteins also revealed that the phage T102 was closely related to the Jerseyvirus genus of the Guernseyvirinae subfamily of the Caudoviricetes class (Fig. 4B). Based on the phylogenetic analysis of phage T102, the genomes of Salmonella phage PIZ SAE-01E2, SETP3, ST1, LPSE1, and SE2, as well as the genome of phage T102, were used for comparative genomic analysis (Fig. 5). The results of the comparative genomic analysis revealed that the genome of T102 phage showed a high degree of covariance with the genome of Salmonella phage, except for differences in ORF36. At the same time, the genome of phage T102 indicated homology with the genome of the selected Salmonella phages (PIZ SAE-01E2, SETP3, ST1, LPSE1, SE2), but with differences, reflecting the genetic diversity of these phages (Switt et al., 2013). Structural proteome analysis of phage T102 The genome of phage T102 encoded 54 open reading frames (ORFs), and 29 ORFs were predicted to have specific functions. 17 clear protein bands could be observed by SDS-PAGE analysis of the phage T102 (Fig. 6). Based on the predicted molecular weights of the functional proteins, 21 predicted proteins could be identified with putatively 11 corresponding bands, and no corresponding bands were found for ORFs 5, 19, 25, 30, 32, 38, 45 and 54 (Table S4). Phage T102 was also hydrolyzed by trypsin and the phage peptides were identified using UPLC-MS/MS. The results indicated that 271 peptides of phage T102 were identified, of which 237 peptides had amino acid sequences consistent with the functional protein sequences annotated in the genome, and the amino acid sequences of the identified 34 peptides were consistent with the hypothetical protein sequences annotated in the genome (Table S4). In total, 26 predicted proteins could be identified with corresponding peptides by UPLC-MS/MS. No corresponding peptides encoding ORFs 5, 45, and 54 were identified, due to the small molecular weights and insufficient mass of the 3 proteins. 
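As a quick, generic check of a reported genome statistic such as the 49.7% G + C content, a few lines of code suffice. The sketch below assumes the genome sequence is available as a plain string (for example, retrieved from the GenBank record ON996339); it is not part of the original analysis pipeline.

```python
def gc_content(sequence: str) -> float:
    """Fraction of G and C bases in a DNA sequence (case-insensitive)."""
    s = sequence.upper()
    gc = s.count("G") + s.count("C")
    return gc / len(s) if s else 0.0

# Toy example only; applied to the real 41,941 bp genome the result
# should be close to the reported 0.497 (49.7% G + C).
print(round(gc_content("ATGCGCGTATAGCCGC") * 100, 1))
```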
Analysis of the phage T102 virion protein was able to identify hypothetical associated proteins, which could provide a reference for future applications of phage proteins. For example, 2 peptides were identified by UPLC-MS/MS whose sequences were consistent with the protein sequence of the endolysin (ORF 6) in the genomic analysis (Table S4). Therefore, the sequence of the endolysin was identified by the structural proteome analysis, and the results of the genomic analysis were confirmed. Inhibition and reduction of biofilms of Salmonella by phage T102 The effects of phage T102 on biofilm formation and reduction of S. Typhimurium ATCC 14028 were shown in Fig. 7. To evaluate the ability of the phage T102 for biofilm inhibition, the biofilm was grown in LB broth with phage T102 treatment for 6-24 h at 37 • C. Biofilm mass was determined by the crystal violet staining method. After phage treatment with T102 for 6-24 h, the biofilm masses of Salmonella significantly reduced (p < 0.05). OD 600 nm could also be used as an indirect strategy to evaluate the whole biofilm mass. When co-cultured with phage T102 for 6 h, the optical absorptions of phage-treated biofilm were read at OD 600 nm and found to reduce by 0.98 ± 0.079 (Fig. 7A). Moreover, samples containing 24 h-formed biofilms were treated with phage T102 for another 2, 6, and 8 h to evaluate the phage ability for biofilm reduction (Fig. 7B). Phage T102 treatment for another 2 h to the formed biofilms resulted in the reduction of bacterial populations by 1.18 ± 0.08 log 10 CFU/well. In addition, the formed biofilm with phage T102 treatment for 6, 12, and 24 h also significantly reduced the biofilm mass (p < 0.05). Specifically, the formed biofilm reduced OD 600 nm of biofilm masses by 0.98 ± 0.09, after being treated with phage T102 for 6 h. These results indicated that phage T102 treatment was able to inhibit the formation of biofilms and could reduce the formed biofilms of sensitive S. Typhimurium 14028. The effects of phage T102 treatment on the inhibition and reduction of biofilm of multidrug-resistant S. Typhimurium 13337 were shown in Fig. 8. For the inhibition of biofilm formation (Fig. 8A), phage T102 treatment for 6 h at 37 • C, reduced the bacterial population by 0.76 ± 0.08 log 10 CFU/well. Phage T102 treatment for 6, 12, and 24 h significantly reduced the biofilm mass (p < 0.05). The OD 600 nm of the biofilm mass was reduced by 0.71 ± 0.54 with phage T102 treatment for 6 h. Results of 12 h and 24 h treatments also inhibited biofilm formation. For Table S2. the reduction of formed biofilm, phage T102 treatment for 2, 6 and 8 h significantly reduced the bacterial population (p < 0.05) (Fig. 8B). With phage T102 treatment for 2 h at 37 • C, the bacterial populations were reduced by 0.39 ± 0.09 log 10 CFU/well. Meanwhile, the biofilm mass was significantly reduced (p < 0.05). With phage T102 treatment for 2 h, the OD 600 nm of the biofilm mass was reduced by 1.46 ± 0.23. Phage T102 could control the biofilm of Salmonella strains, exhibiting a similar lytic capacity to phage SHWT1 (Tao et al., 2021) and φ135 (Milho et al., 2019). Phage T102 treatment could effectively eliminate the formed biofilm of multidrug-resistant S. Typhimurium 13337. Phage T102 exhibited strong lytic activity against biofilms of Salmonella strain, especially multidrug-resistant Salmonella strains. 
This may be due to the lytic ability of phage T102 since most phages (infected bacteria with lytic ability) are 'doomed' in less than 1 min after the injection of phage DNA (Abuladze et al., 2008). Endolysin encoded by phage T102, based on the genome and structural proteome analysis may destroy the EPS component. Furthermore, phage endolysin lyses some bacteria at the edge of the EPSs. The reduction of bacteria on the biofilm causes the reduction of EPS material, and thus the biofilm is completely removed in the end (Sokunrotanak et al., 2013). In future research, the phage-derived enzymes may be studied as a biological antibacterial agent to control Salmonella and its biofilm, which may reduce the dangers posed by biofilms. Effect of phage treatment on biofilm reduction on spiked lettuce The impact of phage treatment on bacterial biofilm formation on the lettuce leaf surface was presented in Fig. 9. With phage T102 treatment for 2 h, the bacterial populations in the biofilm that had developed on the lettuce leaf surface was significantly reduced (p < 0.01). There was no significant difference in the reduction of the bacterial populations between biofilms of sensitive and resistant Salmonella on lettuce. Spraying SalmoFresh™ onto lettuce reduced Salmonella by 0.76 log 10 CFU/g (Zhang et al., 2019). Populations of S. enterica Enteritidis strain S3, S. Javiana S203, S. Javiana S200 were reduced by > 3 log CFU/g and S. Newport S2 by 1 log 10 CFU/g on both lettuces (Wong et al., 2020). Salmonella in the interior of the lettuce leaves may be less susceptible (limited collision) to phage attack than those that lie on the leaf surface. Although the mechanism of phage-biofilm interactions is still not fully understood, it is believed that phage-derived enzymes degrade the major exopolysaccharide protective layer of the biofilm matrix (Milho et al., 2021), allowing phages to reach and kill bacterial cells in biofilms. Reduction of Salmonella biofilms on lettuce surfaces observed by field emission scanning electron microscopy (FE-SEM) FE-SEM analysis was conducted to observe the effects of phage treatment on multidrug-resistant Salmonella biofilm reduction on the lettuce surfaces (Fig. 10). Dense and compact aggregates of cells throughout the leaf surface were found at 37 • C after 24 h of bacterial inoculation. Bacterial cells of varying size and compactness aggregated throughout the leaf surface at 37 • C after 24 h of bacterial inoculation ( Fig. 10A and B, red arrow 1-4). They presented extensive interconnections and intercalations within the biofilm matrix. Moreover, bacterial cells were extensively accumulated around the stomatal well (Fig. 10A, Red arrow 1). Large microcolonies of healthy Salmonella were seen in the untreated (control) sample, in cluster form over the lettuce surface, and around the lettuce stomata ( Fig. 10A and B). On the contrary, images of phage-treated samples indicated the infection and destruction of biofilm cells ( Fig. 10C and D, red arrows 5-8). Mature biofilms are usually more resistant to physical and chemical stress because they have a strong three-dimensional structure consisting of multiple layers of bacterial cells and well-expressed extracellular material (Wang et al., 2013). In addition, phages can diffuse through pores and channels in biofilms to reach different layers (Sadekuzzaman et al., 2016). One problem with phage applications is that bacteria in biofilms may develop resistance to phages. 
Bacterial resistance to phages in biofilms may be related to lysogeny, loss of phage receptors (Donlan, 2009), or mutation of the bacterial receptors by which phages attach to the host bacteria (Labrie et al., 2010). However, phages can adapt new receptors to overcome bacterial resistance (Samson et al., 2013). In addition, phage cocktails have been recognized as an effective way to overcome bacterial resistance to phages (Hagens and Lossener, 2007;Guenther et al., 2009). Therefore, phage T102 may be used as a novel biocide to control Salmonella contamination, especially biofilms produced by the bacteria, and reduce economic losses from microbial contamination. Conclusion In summary, this study completely described the biological, genomic, and structural protein characteristics of phage T102. The biological characteristics of phage T102 indicated that it is a virulent phage of the Guernseyvirinae subfamily with specific recognition of different Salmonella serotypes. It also exhibited excellent characteristics such as a short latent period (30 min) and high pH (3-12) and thermal tolerances (30-80 • C), which will benefit its future application. The genome of T102 was sequenced and analyzed to predict the safety of this phage. Phage T102 belongs to the Jerseyvirus genus of the Guernseyvirinae subfamily of the Caudoviricetes class. Furthermore, the structural proteome of phage T102 was identified by UPLC-MS/MS, and the functional proteins of phage were further identified, including endolysin (ORF6), tail fiber (ORF 40) and tail spike (ORF41). Phage T102 had good efficacy in inhibiting and eliminating the biofilm produced by a multidrug-resistant Salmonella strain, especially on spiked lettuce as observed by FL-SEM. Overall, this study describes a newly isolated lytic phage T102 and reveals its potential as an antibacterial agent for Salmonella. Data availability The complete genome sequences of phages T102 have been deposited in the GenBank database under the accession number ON996339. Declaration of Competing Interest The authors have declared no conflict of interest.
2023-01-30T16:02:23.697Z
2023-01-01T00:00:00.000
{ "year": 2023, "sha1": "a61c01b871bb7416f73fc518f743b1f902dcb991", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.virusres.2023.199054", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "98852cd88f8e257823a99e50b0e786b71854ecda", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
34728889
pes2o/s2orc
v3-fos-license
Release Characteristics of Diltiazem Hydrochloride Wax-Matrix Granules – Thermal Sintering Effect: The aim of this study was to investigate the release characteristics of matrix (non-disintegrating) granules consisting of diltiazem hydrochloride (model drug) and glyceryl behenate (a wax matrix-forming polymer) for sustained-release application using a sintering technique. The diltiazem hydrochloride-wax matrix granules were prepared by a melt granulation technique, in which the drug powder was triturated with melted glyceryl behenate (drug:wax ratio, 3:1). The granules were subsequently sintered at 60 and 70 °C for 1, 1.5 and 3 h. The unsintered and sintered wax-matrix granules of diltiazem hydrochloride were evaluated for physicochemical parameters and by in vitro dissolution studies. The dissolution data were analyzed using different mathematical models, namely zero-order flux, first-order, Higuchi square-root-of-time, and the Korsmeyer-Peppas model. Fourier-transform infrared spectroscopy (FTIR) was carried out to investigate any chemical interactions between the drug and the added excipients before and after sintering. There was increased retardation of drug release from the diltiazem hydrochloride-wax matrix granules with sintering. The retardation depended on the temperature and duration of sintering. For instance, formulations sintered at 60 and 70 °C for a period of 1.5 h gave maximum release (m∞), time to attain maximum release (t∞) and dissolution rate (m∞/t∞) of 96.1%, 95.2%, 5 h, 9 h, 19.2% h^-1 and 10.6% h^-1, respectively. The drug release was by a Higuchi-controlled diffusion mechanism and followed a Fickian diffusion mechanism (n < 0.45). The sintering technique enhanced the extent of drug retardation from the systems studied. There was no chemical interaction between the model drug and the added excipients, as shown by the FTIR studies. KEYWORDS: Thermal sintering technique, sustained release, Fourier-transform infrared spectroscopy Currently, significant attention is being focused on the development of sustained-release dosage forms due to their numerous advantages over conventional dosage forms. Some of these advantages include maintenance of a steady plasma level of the drug over a prolonged time period, reduction in adverse side effects, and patient convenience and compliance (Aulton, 2002). The sintering technique is defined as the bonding of adjacent particle surfaces in a mass of powder, or in a compact, by the application of heat (Rakesh and Ashok, 2009). Thermal sintering involves the heating of a compact at a temperature below the melting point of the solid constituents in a controlled environment under atmospheric pressure. The changes in the hardness and disintegration time of tablets stored at elevated temperatures have been described as a result of a thermal sintering effect (Satyabrata et al., 2010). Recently, Flowerlet et al. (2010) developed an oral sustained-release formulation of metformin hydrochloride matrix tablets by sintering the polymer matrix with an organic vapour such as acetone. The thermal sintering process has been used for the fabrication of sustained-release matrix dosage forms for the stabilization and retardation of drug release from different systems (Cohen et al., 1984). Previously, Rowe et al. (1973) reported that the process of thermal sintering affects the pore structure and strength of plastic matrix tablets. Polymer films with different permeabilities have been explored to modify drug release from drug particles.
Some examples mentioned in the literature include: films with the drug as a solution in a polymer matrix, e.g. monolithic devices (Oppenheim, 1981; Douglas et al., 1987; Davis and Illum, 1988), polymer-coated reservoir devices (Lehmann et al., 1979), polymeric colloidal particles (microparticles or nanoparticles) in the form of either reservoir or matrix devices (Oppenheim, 1981; Douglas et al., 1987), and osmotically "controlled" devices (Zentner et al., 1985; Muhammad et al., 1991). These methods are, however, complicated and expensive, since they require the use of organic solvents as coating fluids. Moreover, these organic solvents are hazardous to the environment. Waxes have been used either as matrix formers or as coating polymers to sustain the release of drugs (Zhou et al., 1996; Zhang et al., 2001; Uhumwangho and Okor, 2006). An alternative simple approach, which was considered in the present study, is melt granulation, whereby the drug powder is triturated with a melted wax serving as a hydrophobic retard-release agent. The resulting granules consist of the drug particles dispersed in a continuous wax matrix. Diltiazem hydrochloride (DZH) is a non-dihydropyridine member of the group of drugs known as benzothiazepines, which are a class of calcium channel blockers. It is used in the treatment of hypertension, angina pectoris, and some types of arrhythmia (Buckley et al., 1990). Its chemical formula is [(2S,3S)-5-(2-dimethylaminoethyl)-2-(4-methoxyphenyl)-4-oxo-2,3-dihydro-1,5-benzothiazepin-3-yl] acetate, with a molecular weight of 414.16. Its bioavailability is only about 30% to 40%, owing to extensive first-pass metabolism (Hermann et al., 1983; Smith et al., 1983). The aim of this study was to prepare wax-matrix granules by a melt granulation technique using DZH as a model drug. These wax-matrix granules were then sintered thermally at different temperatures and for different durations. Consequently, the effects of sintering temperature and duration on the drug release profiles and physicochemical parameters were investigated. Materials: The active ingredient used in the study was diltiazem hydrochloride (Cipla Ltd, Goa, India). The matrix former used was glyceryl behenate (Dr Rheddy's Laboratory, India), a fine white solid powder with a melting point of 83 °C. Magnesium stearate (Qualikems Fine Chemical Pvt Ltd, India) was used as the lubricant. The other materials used were of analytical grade. Melt granulation technique: Glyceryl behenate (30 g) was melted in a stainless steel container in a water bath at a temperature higher than its melting point (i.e. 83 °C). A sample of DZH powder (90 g) was added to the melted wax and thoroughly mixed with a glass rod. It was then allowed to cool to room temperature (35 ± 2 °C). The mass was pressed through a sieve of mesh 10 (aperture size, 710 µm) to produce wax-matrix granules. Sintering of the matrix granules: The matrix granules were then subjected to thermal treatment by placing them on aluminum foil and sintering at different temperatures (Kondaiah 2002; Luk and Jane, 1996), i.e. 60 and 70 °C, for different durations (1, 1.5 and 3 h) in a hot air oven (Labhosp, Mumbai, India). Packing property of the matrix granules: The packing properties were determined by measuring the difference between the bulk density (BD) and the tapped density (TD) using a standard procedure. In the procedure, 20 g of the matrix granule sample was placed in a 250 mL clean, dry measuring cylinder and the volume, V0, occupied by the sample without tapping was determined.
An automated tap density tester (model C-TDA2, Campbell Electronics, Mumbai, India) was used for tapping the granules according to USP Chapter 616, Method I (Manish et al., 2001). After 100 taps, the occupied volume, V100, was noted. The bulk and tapped densities were calculated from these volumes (V0 and V100) using the formula: density = weight/volume occupied by the sample. From these data, the Hausner ratio and compressibility index were determined (US Pharmacopeia, 2006). Flow property of matrix granules: The flowability of the granules was determined by measuring the angle of repose formed when a sample of the granules (40 g) was allowed to fall freely from the stem of a funnel onto a horizontal bench surface. The radius (r) and the height (H) of the powder heap were determined, and the angle of repose (θ) was then calculated (Maheshwari et al., 2003). Hardness-Friability Index (HFI): This was calculated on the basis of the results of the friability test. In the procedure, 20 g of matrix granules were placed in the drum of an Erweka friabulator (Heusenstamm, Germany) rotating at 20 rev per min for 10 min. The matrix granules were then screened through a 60# sieve to remove the fines generated (Nasipuri and Omotosho, 1985; Eichie et al., 2005; Singh et al., 2007). The hardness-friability index was then calculated from F_A and F_B, the weights after and before the friability determination, respectively. Encapsulation of the matrix granules: Samples of matrix granules before and after sintering (drug content, 90 mg) were filled manually into plain hard gelatin capsules. The capsules were kept in airtight containers before their use in the in vitro dissolution studies. In vitro dissolution test: One capsule filled with the matrix granules was placed in a cylindrical basket (aperture size 425 µm; diameter 20 mm; height 25 mm) and immersed in 900 mL of leaching fluid (0.1 N hydrochloric acid maintained at 37 ± 2 °C). The fluid was stirred at 100 rpm (Model Disso 2000, Lab India). Samples of the leaching fluid (5 mL) were withdrawn at selected time intervals with a syringe fitted with a cotton wool plug and replaced with an equal volume of drug-free dissolution fluid. The samples were suitably diluted with blank dissolution fluid and analysed for diltiazem content at a λmax of 236 nm using an Elico SL 210 UV-Visible double-beam spectrophotometer (Elico, India). The samples were filtered with Whatman No. 3 filter paper before assay, and the amounts released were expressed as a percentage of the drug content in each dissolution medium. The dissolution test was carried out in quadruplicate and the mean results are reported. Individual results were reproducible to ±10% of the mean. Determination of rate order kinetics and mechanism: The dissolution data were analyzed on the basis of the zero-order, first-order, Higuchi and Korsmeyer-Peppas models (Higuchi, 1963; Korsmeyer et al., 1983; Peppas, 1985; Harland et al., 1988). The kinetic model equations are: zero order, m = k0·t; first order, log m1 = log m0 - 0.43·k1·t; Higuchi, m = kH·t^(1/2); Korsmeyer-Peppas, log m = log k2 + n·log t; where m is the percentage (%) amount of drug released in time t; m1 is the residual amount (%) of drug at time t; m0 is the initial amount of drug (100%) at the beginning of the first-order release; and k0, k1, kH and k2 are the release rate constants for the zero-order, first-order, Higuchi and Korsmeyer-Peppas models, respectively.
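The packing and flow parameters described above follow the standard compendial relationships (Hausner ratio = TD/BD; compressibility, or Carr's, index = (TD − BD)/TD × 100; angle of repose θ = arctan(H/r)). The Python sketch below simply restates these relationships for illustration; the input numbers are hypothetical and this is not the authors' calculation sheet.

```python
import math

def bulk_and_tapped_density(weight_g, v0_ml, v100_ml):
    """Densities (g/mL) from the untapped (V0) and tapped (V100) volumes."""
    return weight_g / v0_ml, weight_g / v100_ml

def hausner_ratio(bulk_density, tapped_density):
    return tapped_density / bulk_density

def carr_index(bulk_density, tapped_density):
    """Compressibility (Carr's) index, %."""
    return (tapped_density - bulk_density) / tapped_density * 100

def angle_of_repose(height_cm, radius_cm):
    """Angle of repose (degrees) from the heap height and radius."""
    return math.degrees(math.atan(height_cm / radius_cm))

# Hypothetical 20 g sample occupying 36 mL untapped and 30 mL after 100 taps
bd, td = bulk_and_tapped_density(20, 36, 30)
print(round(hausner_ratio(bd, td), 2), round(carr_index(bd, td), 1))  # 1.2 16.7
print(round(angle_of_repose(3.0, 5.5), 1))  # ~28.6; free flowing if below ~30 degrees
```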
The n is the diffusion release exponent that can be used to characterize the different release mechanisms. A value of n below 0.45 indicates a Fickian diffusion mechanism, and an n value between 0.45 and 0.89 indicates anomalous transport, often termed first-order release. If the n value reaches 0.89 or above, the release can be characterized by case II and super case II transport, which means the drug release rate does not change over time and the drug is released by a zero-order mechanism. The correlation coefficient (r) for each rate order was also calculated. Fourier-transform infrared (FTIR) spectroscopy: The FTIR spectra of the different samples were recorded on an infrared spectrometer (Nicolet Magna 4R 560, MN, USA) using potassium bromide discs prepared from powdered samples. Infrared spectra were recorded in the region of 4000 to 400 cm^-1. Statistical analysis: All data obtained were subjected to Student's t-test (p < 0.05) to test for significance of differences. Effect of sintering on physicochemical parameters of unsintered and sintered wax-matrix granules: The effects of sintering on the physicochemical parameters of unsintered and sintered matrix granules are presented in Table 1. It was observed that all the matrix granules were free flowing, with an angle of repose ≤ 29° and a Carr's index ≤ 19.3% (Gordon et al., 1990). There was a slight decrease in these values (i.e. angle of repose and Carr's index; see Table 1) as the temperature and duration of sintering increased, although the difference was not significant (p > 0.05). On the other hand, it was observed that all the sintered matrix granules had higher HFI values when compared with the unsintered matrix granules (see Table 1), and these differences were significant (p < 0.05). Moreover, with increasing temperature and duration of sintering, the HFI values increased correspondingly (see Table 1). The increase in hardness with increasing temperature and duration of sintering might be attributable to the fusion of the wax matrix particles or the formation of welded bonds among the matrix particles after cooling. Previously, some researchers reported that asperity melting and the formation of welded bonds result in a high tensile strength of tablets; this occurs with compression at high temperature (Pilpel and Esezobo, 1977; Kurup and Pilpel, 1979; Esezobo and Pilpel, 1986). Dissolution profiles of matrix granules: The dissolution profiles of the unsintered and sintered matrix granules at 60 and 70 °C for the different durations are presented in Fig. 1. It was observed that the unsintered matrix granules were able to retard the drug for 2 h. Generally, as the temperature and duration of sintering of the matrix granules increased, the time to attain maximum release (t∞) increased correspondingly. For instance, when the matrix granules were sintered at 60 °C for 1.5 and 3 h (i.e. formulations GB3 and GB4), the maximum release (m∞) and time to attain maximum release (t∞) were 96.1%, 94.2%, 5 h and 6 h, respectively, while the corresponding values at 70 °C for durations of 1.5 and 3 h (i.e. formulations GB6 and GB7) were 95.2%, 96.2%, 9 h and 12 h (see Table 2). Hence, sintering temperature and duration markedly affected the drug release properties of the wax-matrix granules. The dissolution rate (m∞/t∞) also decreased as the sintering temperature and duration of sintering increased (Table 2). This finding is in conformity with previous literature (Rao et al., 2001; Rao et al., 2003).
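The Korsmeyer-Peppas exponent n discussed above is typically estimated by linear regression of log m against log t, conventionally restricted to roughly the first 60% of release. The sketch below illustrates such a fit on hypothetical data; it is not the analysis actually performed in the study.

```python
import numpy as np

def peppas_exponent(times_h, release_pct):
    """Estimate the Korsmeyer-Peppas exponent n from log m = log k + n log t.

    Only the portion of the curve up to ~60% release is used, per convention.
    Returns (n, k, r) where r is the correlation coefficient of the fit.
    """
    t = np.asarray(times_h, dtype=float)
    m = np.asarray(release_pct, dtype=float)
    mask = (t > 0) & (m > 0) & (m <= 60)
    log_t, log_m = np.log10(t[mask]), np.log10(m[mask])
    n, log_k = np.polyfit(log_t, log_m, 1)
    r = np.corrcoef(log_t, log_m)[0, 1]
    return n, 10 ** log_k, r

# Hypothetical release profile (time in h, cumulative % released)
t = [0.5, 1, 2, 3, 4, 5]
m = [18, 26, 37, 45, 52, 58]
n, k, r = peppas_exponent(t, m)
mechanism = "Fickian" if n < 0.45 else ("anomalous" if n < 0.89 else "case II")
print(round(n, 2), round(k, 1), round(r, 3), mechanism)
```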
The retardation in drug release on sintering may be due to softening of the wax particles during sintering, allowing the wax to penetrate the empty spaces and form a continuous layer around the drug particles in the matrix granules (Singh et al., 2007). This resulted in decreased exposure of the drug particle surface to the dissolution medium, and hence drug retardation from the matrix granules. It may also be attributed to the increase in HFI (see Table 1), which might decrease porosity and hence reduce the influx of the dissolution medium to the drug particles in the matrix granules. Notably, formulation GB7, which was sintered at 70 °C for 3 h, was able to retard the drug for a period of 12 h. Drug release mechanism: A good knowledge of the drug release kinetics provides a proper understanding of the drug release mechanism. Four mathematical models were used for the analysis: zero-order kinetics, first-order kinetics, the Higuchi mechanism, and the Korsmeyer-Peppas model (Higuchi, 1963; Korsmeyer et al., 1983; Peppas, 1985; Harland et al., 1988). The values of the correlation coefficients (r) and the release rate constants are presented in Table 3. The release exponent n was below 0.45, which indicates that the release of diltiazem hydrochloride from these systems followed a Fickian diffusion mechanism (Korsmeyer et al., 1983). FTIR: Formulation GB7 was considered for the FTIR studies since it was able to retard the drug for a period of 12 h. This study was carried out in order to investigate whether there was any chemical interaction between the added excipients and DZH in the formulation (GB7) before and after sintering. The FTIR spectra of the pure drug, glyceryl behenate, and the unsintered and sintered matrix granules were recorded (see Fig. 2a, b, c and d, respectively). The IR spectrum of DZH showed characteristic peaks at 1743.04 cm^-1 (ester C=O) and 1679.0 cm^-1 (amide C=O). For glyceryl behenate alone (without drug or other excipients), the IR spectrum showed signals at 2956 cm^-1 (aliphatic C-H stretch) and 1739.02 cm^-1 (ester C=O stretch). These spectra were compared with the IR spectra of the unsintered and sintered wax-matrix granules (GB7). It was observed that the IR spectra showed both the principal peaks of DZH (1743 and 1649 cm^-1) and glyceryl behenate (1739 cm^-1 for the ester), suggesting that there was no chemical interaction between DZH and the added excipients (such as glyceryl behenate) in either the sintered or the unsintered matrix granules. Conclusion: The sintering technique enhanced the extent of drug retardation from the systems studied. Formulation GB7, sintered at 70 °C for 3 h, was able to sustain the drug release for a period of 12 h with a maximum release of 96.2%. The FTIR studies showed that the model drug was not affected by the temperature and time duration used for sintering.
2017-09-08T18:28:11.566Z
2011-08-02T00:00:00.000
{ "year": 2011, "sha1": "adc6a02251daf9df75db47415d08aa404ada153d", "oa_license": "CCBY", "oa_url": "https://www.ajol.info/index.php/jasem/article/download/68527/56605", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "91cb8bed9c3ef8433a20702b582fd2d62c31a2c9", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Materials Science" ] }
6236411
pes2o/s2orc
v3-fos-license
The Domination Game: Proving the 3/5 Conjecture on Isolate-Free Forests We analyze the domination game, where two players, Dominator and Staller, construct together a dominating set M in a given graph, by alternately selecting vertices into M. Each move must increase the size of the dominated set. The players have opposing goals: Dominator wishes M to be as small as possible, and Staller has the opposite goal. Kinnersley, West and Zamani conjectured that when both players play optimally on an isolate-free forest, there is a guaranteed upper bound for the size of the dominating set that depends only on the size n of the forest. This bound is 3n/5 when the first player is Dominator, and (3n+2)/5 when the first player is Staller. The conjecture was proved for specific families of forests by Kinnersley et al. and later extended by Bujtás. Here we prove it for all isolate-free forests, by supplying an algorithm for Dominator that guarantees the desired bound.

Introduction We analyze a two-party game on graphs called the domination game, in which two players with opposing goals construct together a dominating set for a given graph. The game was introduced by Bresar, Klavzar and Rall in [1]. One setting in which such a problem may be of interest is the following scenario: New city regulations state that a house is only fire-safe if it is a short distance from a trained firefighter. In order to make sure all houses are fire-safe, a list of citizens that should be trained and hired as firefighters must be made. Two people volunteer for the task of making the list: the city treasurer, who wishes to minimize the costs and therefore wants the number of firefighters to be as small as possible, and the head of the firefighters union, who benefits from adding new members and therefore wishes to maximize the number of firefighters. The mayor, a seasoned politician, decides to let both volunteers add names to the list in turns, each adding a single firefighter that would improve the list, i.e., make at least one house fire-safe that was not fire-safe before.

Dominator wins in a Dominator-start (respectively, Staller-start) game if the game ends within at most 3n/5 (resp., (3n + 2)/5) moves, and otherwise Staller wins.

Procedure 1.1. Given an isolate-free n-vertex forest G(V, E), the Dominator-start variant of the game can be described by the following algorithm. 3. If t ≤ T max , Dominator wins. Otherwise, Staller wins. Hereafter, the total number of moves in a specific execution of the game is denoted by T . A similar algorithm can be used to describe the Staller-start variant, except that then T max is set to (3n + 2)/5 , and the odd moves are performed by Staller.

Previous approaches. As mentioned earlier, Conjecture 1 was introduced by Kinnersley, West and Zamani in [4]. In the paper, the conjectured bound of 3n/5 moves is achieved for specific types of forests, and a weaker bound of 7n/11 moves is proved for arbitrary isolate-free n-vertex forests. In [2], Bujtás proves the conjecture for isolate-free forests in which no two leaves are at distance 4, and improves the bound for arbitrary isolate-free n-vertex forests from 7n/11 to 5n/8. The proofs in [2] use a method for coloring and evaluating vertices according to their state, and creating intermediate graphs, in order to choose moves and to prove the desired bound.
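As a concrete illustration of the game mechanics in Procedure 1.1, the following minimal sketch simulates a Dominator-start game on a forest given as an adjacency list. The two strategy functions are naive greedy placeholders of our own, not the strategy developed in this paper, and the representation and names are assumptions; the sketch only demonstrates the rules of the game and the 3n/5 threshold.

```python
import math

def closed_neighborhood(graph, v):
    return {v} | set(graph[v])

def dominated(graph, chosen):
    """Set of vertices dominated by the chosen set M."""
    dom = set()
    for v in chosen:
        dom |= closed_neighborhood(graph, v)
    return dom

def legal_moves(graph, chosen):
    """Moves that enlarge the dominated set, as required by the rules."""
    dom = dominated(graph, chosen)
    return [v for v in graph
            if v not in chosen and not closed_neighborhood(graph, v) <= dom]

def play_dominator_start(graph, dominator_strategy, staller_strategy):
    """Returns the number of moves T; Dominator wins iff T <= floor(3n/5)."""
    chosen, moves = set(), 0
    while len(dominated(graph, chosen)) < len(graph):
        strategy = dominator_strategy if moves % 2 == 0 else staller_strategy
        chosen.add(strategy(graph, chosen))
        moves += 1
    return moves

# Placeholder strategies: Dominator greedily maximizes newly dominated vertices,
# Staller greedily minimizes them (both are far weaker than optimal play).
def greedy_max(graph, chosen):
    dom = dominated(graph, chosen)
    return max(legal_moves(graph, chosen),
               key=lambda v: len(closed_neighborhood(graph, v) - dom))

def greedy_min(graph, chosen):
    dom = dominated(graph, chosen)
    return min(legal_moves(graph, chosen),
               key=lambda v: len(closed_neighborhood(graph, v) - dom))

# A small path forest P6 (n = 6), given as an adjacency list.
path6 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
T = play_dominator_start(path6, greedy_max, greedy_min)
T_max = math.floor(3 * len(path6) / 5)
print(f"T = {T}, T_max = {T_max}, Dominator wins: {T <= T_max}")
```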
Motivating example. We start with a simple example that illustrates some of the difficulties that any algorithm for Dominator must face. Consider a Dominator-start game which is played on the graph shown in Figure 1. The graph contains 23 vertices; therefore, Dominator wins if and only if the game ends within at most 13 moves. Even though the neighborhoods of the vertices v 1 and v 2 are similar, the reader can verify that Dominator can win by playing v 1 or u 1 as the first move, whereas if Dominator plays v 2 or u 2 , Staller can win the game by playing z in the following move. We believe this example can be extended to graphs of arbitrarily large size, in which choosing between moves that appear to be the same locally may determine the outcome of the game.

Our contributions. We provide an algorithm for Dominator that guarantees that the game ends within the number of moves required by Conjecture 1 on all isolate-free forests, which proves the conjecture. We rely on the general method used in [2] and extend it, by separating the value of each vertex from its color, as well as fine-tuning additional aspects of the intermediate graphs. We start with Section 2, where we lay the foundations for the analysis by formalizing various aspects of the game. We then introduce the algorithm in Section 3, and prove that it achieves the bound of Conjecture 1 in Section 4. We conclude the analysis of Dominator's strategy in Section 5, where we discuss a possible implementation of the algorithm and describe our tests. Finally, Section 6 contains some concluding remarks.

Notation and Preliminaries Before describing the algorithm, we introduce some definitions and properties used to analyze the game.

Graph notions and vertex labeling. The graphs on which the game is played are undirected and unrooted forests that have no isolated (singleton) vertices. We label all vertices in the initial graph G with distinct even indices, z 2i . The motivation for this is that we later introduce virtual vertices, and we want an easy way to tell apart real (non-virtual) vertices from virtual ones: a real vertex z 2i always has an even index, while a virtual vertex z 2j+1 has an odd index. When we refer to components of a graph, we always mean maximal connected components. For a component C, define the size of C to be the number of vertices in C and denote it by |C|. The degree of a vertex v in a graph G(V, E) is denoted by d(v, G). When G is clear from the context, it is denoted by d(v).

Vertex color, vertex value and legal moves. Recall that the players construct together a set M , until it becomes a dominating set. We use a variation of the grading system introduced in [2]. During the game, each vertex has one of three possible colors, and one of three possible values (and these may change between steps). Definition 2.1. Let u be a vertex in the graph. The color, or type, of u at the end of step t, denoted by c t (u), is defined by the following properties: -u is called white or c t (u) = W if u is not dominated, that is, u / ∈ Γ[M t ]. -u is called blue or c t (u) = B if u is dominated but has an undominated neighbor, that is, u ∈ Γ[M t ] but Γ[u] ⊄ Γ[M t ]. -u is called red or c t (u) = R if u is dominated and all its neighbors are dominated, that is, Γ[u] ⊆ Γ[M t ]. When t is clear from the context, we denote the color by c(u) instead of c t (u). We define W t , B t and R t to be the sets of vertices of type W , B and R (respectively) at the end of step t. Even though the first step of the game is step 1, we use (the end of) step 0 to denote the state of the graph before the first move. Observation 2.2. For all steps t ≥ 0, V = W t ∪ B t ∪ R t , and these sets are disjoint. and therefore v is a legal move. Observation 2.6.
If for some step t, c t (u) = W and c t (v) = R, then (u, v) / ∈ E (that is, white and red vertices cannot be neighbors). Definition 2.7. For any step t and for any v / ∈ R t , let c t,v (u) be the color of u assuming the (t + 1)st move was v, that is, c t,v (u) = c t+1 (u) if m t+1 = v. In addition to its color, each vertex also has a value. If p(u, t) = k, we say that at step t, u is worth k points, or has value k. For a set of vertices U ⊆ V , define p(U, t) = Σ u∈U p(u, t). When t is clear from the context, we may omit it, and denote the value of u by p(u). Definition 2.9. For any step t, a vertex u is called high, and its type is generically referred to as H, if p(u, t) = 3. Let H t denote the set of high vertices at the end of step t. Definition 2.10. For any step t and for any vertex u ∈ V , if c t (u) = B and p(u, t) = 3, we say that u is a B 3 vertex (at the end of step t). Similarly, if c t (u) = B and p(u, t) = 2, then u is called a B 2 vertex. Note that saying that a vertex is of type H is synonymous to saying that it is of type W or B 3 . In the graphical illustrations to appear hereafter, vertices are of type H except where specifically labeled otherwise. Definition 2.11. For any step t and for any v / ∈ R t and u ∈ V , let p(u, t, v) be the value of u at the end of step t + 1 assuming the (t + 1)st move was v, that is, p(u, t, v) = p(u, t + 1) if m t+1 = v. Observation 2.12. For any step t, for every v ∈ M t and for any value function p(·, ·), p(v, t) = 0. Let us remark that the value function defined later on for the algorithm will ensure that p t (V ) is monotonically decreasing in t.

Gain. The gain of a vertex v under a given value function is the number of points gained when the current player chooses it. Formally, given the value function p, the corresponding gain function is g(v, t) = p(V, t − 1) − p(V, t − 1, v), i.e., the decrease in the total value caused by playing v as move m t . Again, whenever t is clear from the context, we omit it. Claim 2.13. For any 1 ≤ t ≤ T and for any v / ∈ R t−1 , the following properties hold. Therefore g(v, t) ≥ 3 (for any gain function), establishing (b). It remains to prove (a) in case Then by the definition v has some white neighbor, u, at the end of step t − 1. Since Corollary 2.14. It is always possible to define the value function such that at least 3 points are gained in every legal move. Proof: Consider a move m t . If c t−1 (m t ) = W , then g(m t , t) ≥ 3 since c t (m t ) = R. Otherwise, c t−1 (m t ) = B and therefore m t has a neighbor u such that c t−1 (u) = W . The value of m t itself decreases by at least 2. Hence if c t (u) = R, then g(m t , t) ≥ 2+3 = 5 for any value function. Otherwise c t (u) = B, and we can choose p(·, ·) such that p(u, t) = 2, gaining 1, and then g(m t , t) ≥ 2 + 1 = 3. In fact, the algorithm will define the value function in such a way, namely, it will ensure that every move (including Staller moves) gains at least 3 points. We now formulate a useful condition on strategies. Denote the average gain per move (over the entire game) by ĝ = (1/T ) Σ t g(m t , t). The average gain condition: The average gain per move satisfies ĝ ≥ 5. Claim 2.15. In a Dominator-start game, the average gain condition is equivalent to Conjecture 1. Next, denote the average gain over steps 2, ..., T by g̃ = (1/(T − 1)) Σ t≥2 g(m t , t). The shifted average gain condition: Excluding the first move, the average gain satisfies g̃ ≥ 5. Claim 2.16. In a Staller-start game, the shifted average gain condition implies Conjecture 1.

Removing vertices and edges. Recall that red vertices are illegal moves and cannot be played, and are also worth 0 points. Therefore we have the following. Observation 2.17.
Red vertices can be removed from the graph along with their edges, without changing the outcome of the game. By definition, each blue vertex v has at least one white neighbor. Moreover, v is converted from blue to red exactly when its last white neighbor is converted to blue or to red, regardless of the states of its blue neighbors. Therefore we have the following. Observation 2.18. Edges between two blue vertices can be removed from the graph without changing the outcome of the game. However, it may sometimes be useful for our algorithm to keep edges that have a B 3 vertex as one of their endpoints. The decision on whether to remove these edges or not will be made by the algorithm. The algorithm maintains a graph called the underlying graph, which contains only vertices and edges that may affect the outcome of the game. This data structure also stores the decisions made by the algorithm about deleting edges between blue vertices, and contains only the edges that were not deleted. In particular, the algorithm ensures the following property, throughout the execution. Property 2.19. The underlying graph at the end of step t, denoted G t = (V t , E t ), satisfies the following conditions. 1. G 0 = G, and the vertices of V are labeled with the labeling z 2i defined in Section 2. 3. E t contains only edges that have at least one endpoint in H t (this guarantees that both endpoints are in V t by Observation 2.6), and contains all edges that have at least one endpoint in W t . The following observation is an immediate result of the fact that edges are not removed as long as one of their endpoints is white. That is, the neighborhood of a white vertex does not change as long as it is white (except maybe for some of its white neighbors turning blue). Proof: Since G is isolate-free, and the last move is either on or adjacent to some white vertex (by Claim 2.5), we conclude from Observation 2.20 that the underlying graph contains at least one additional vertex (that is not red) adjacent to the move. Therefore the total gain is at least 3 + 2 = 5 points. We want to define a single algorithm that will serve to prove the conjecture for both variants of the game. The following corollary explains how this can be done. Corollary 2.22. Given an algorithm A, which guarantees that the game ends within at most 3n 5 moves in the Dominator-start variant of the game given any initial isolate-free forest where all vertices are high (and not necessarily white), it is possible to construct an algorithm B which guarantees that the game ends within at most 3n+2 5 moves in the Staller-start variant of the game. Proof: The desired goal can be achieved by an algorithm B that sets the value function at the end of the first step as described in the proof of Claim 2.16, and then invokes A for all the following moves (so that move i is considered by A as move i − 1 for all i ≥ 1). This holds since the underlying graph G 1 contains only high vertices, so the corollary follows from Claim 2.16. Hereafter we focus on finding an algorithm which achieves the desired gain for the Dominatorstart variant of the game, and the conjecture will follow from Corollary 2.22. Structural notations. We use the following definitions. Definition 2.23 (White, blue, high subgraph). We say a subgraph of G t is white (respectively, blue, high) if all its vertices are white (respectively, blue, high). Specifically, G 0 is high. , v k is a leaf ), and d(v i ) = 2 for all 0 < i < k. We call v 1 the tail lead, and we say that v 0 has a tail. 
If d(v 0 ) ≥ 1, we say that (v 1 , ..., v k ) is a subtail. Note that by our graphical conventions, the vertex v has degree 3 or higher, and u has degree 1 or higher, but v 1 and v 2 have degree exactly 1. such that v 1 is B 2 , v 2 is white and v 3 is high. Specifically, we use the term " BW component" to describe a component of size 2 containing one blue vertex and one white vertex. 28. Let C be a component and let r 1 and r 2 be vertices in C (not necessarily distinct). If, when C is rooted at r 1 , the subtree T rooted at r 2 is not a subtail, then T contains a split vertex. Proof: Let T be such a subtree. Let r be a vertex on T such that d(r) ≥ 3 (guaranteed to exist since T is not a subtail). Let λ 1 , ..., λ be all the leaves of the subtree rooted at r, and for each i, let v i be the first vertex of degree at least 3 on the (unique) path from λ i to r, including the endpoints (see an example in Figure 4). Since d(r) ≥ 3, v i is guaranteed to exist for every i. Notice that not all v i are distinct. Let v be the v i farthest from r. Since d(v ) ≥ 3, there are at least two leaves, λ j and λ k , in its subtree. Since v is the first vertex on the path from λ j to r that has degree at least 3, we conclude that v j = v and therefore v has a tail towards λ j . Similarly, we conclude that v has another tail towards λ k . Therefore v has at least two tails, which means it is a split vertex. Corollary 2.29. Every tree containing a vertex of degree 3 or more has at least one split vertex. We do not describe a specific algorithm in this section, but rather show that such an algorithm exists. In Section 3.4 we present a concrete naive algorithm resulting from this outline, and in Section 5 we discuss better implementations. Section 3.5 contains a simplified version of this algorithm, that can be used on isolate-free forests in which no two leaves are at distance 4. The suggested algorithm outline consists of several parts, performed for each move. Suppose t moves (t < T ) were already played, and the algorithm needs to decide on the (t + 1)st move (if it is a Dominator move), or preprocess for step t + 2 (if t + 1 is a Staller move). 1. At the end of step t, the current underlying graph, denoted by G t , undergoes a simulation process consisting of two phases, each of which is described in detail later. -Phase 1: The graph is simplified by replacing subtrees of certain specific forms by virtual vertices (i.e., vertices that were not in G 0 ). The resulting (possibly smaller) graph is called the dense graph and is denoted byĜ t . -Phase 2: The resulting dense graphĜ t is separated into boxes, each of which is a connected subcomponent satisfying one of several properties. The process of separating the dense graph into boxes is called box decomposition, and each vertex of the dense graph is assigned into a single box. We define Invariant I which must be satisfied by the box decompositions used by the algorithm. A box decomposition satisfying this invariant is called a valid box decomposition. As becomes clear later, a dense graph may have more than one valid box decomposition, and we show in the analysis that it is possible to maintain the underlying graph such that the corresponding dense graph has at least one valid box decomposition. We say that the underlying graph G t and the corresponding dense graphĜ t are good ifĜ t has a valid box decomposition, and similarly we say that a component C of the dense or underlying graph is good if a graph containing only this component is good. 2. 
If move m t+1 is performed by Staller, then the new underlying graph G t+1 is generated from G t in a way that guarantees that at least 3 points are gained by Staller's move m t+1 , and that the corresponding dense graph has a valid box decomposition. In the analysis, we show that an underlying graph satisfying these requirements can be generated from any good underlying graph and for any Staller move. 3. Otherwise (move m t+1 is a Dominator move), move m t+1 is chosen (along with a corresponding underlying graph) greedily for Dominator from the vertices ofĜ t , such that the gain is maximal among all such moves which result in a good underlying graph G t+1 . If several potential moves achieve the (same) maximal gain, ties are broken by choosing a move maximizing the minimal cumulative gain in the next three moves, i.e., maximizing min m t+2 [g(m t+1 , t + 1) + g(m t+2 , t + 2) + g t+3 ] where g t+3 is the maximal gain that can be achieved by Dominator in its following move (with a good underlying graph), and we define g t = 0 for all t > T . If there are still several such maximizing moves, then the tie is broken arbitrarily. It remains to describe the two phases of the simulation process. Phase 1 of the simulation: Creating the dense graph The dense graph is the result of removing subtrees called triplet witnesses and replacing them with virtual leaves. The subtrees are constructed by the following process. Initially, set WT 2 = {v | v is a lead of a white tail of length 2} . The corresponding subtree on the dense graph. The triplet vertices in the set T T are the vertices that have numbers next to them. These numbers are their triplet depths. The vertex v is the only triplet head in (a), and its triplet witnesses are v 1 , v 2 and v 3 . The vertices in WT 2 in (a) are v 1 , v 2 and all other vertices that are adjacent to leaves. Note that we assume that v does not have another neighbor in T T 3 except for v 1 and v 2 . Next, the family T T = i≥1 T T i of triplet vertices, and the family PW = i≥1 PW i of potential triplet witnesses, are constructed using the following iterative rule. For every i ≥ 1, we construct in parallel the sets T T i of triplet vertices and PW i of potential triplet witnesses. For each vertex v ∈ T T we also define its triplet depth, td(v), and its triplet subtree. We define T T i and PW i iteratively as follows. After defining T T i and PW i : -Add to T T i+1 every (blue or white) vertex v that has at least three neighbors in PW i : -Add to PW i+1 all vertices from T T i+1 that are white and have degree exactly 4, and all vertices from WT 2 : Note that there is a maximal triplet depth td max = max v∈T T td(v) in the graph, and for all i ≥ td max , . This is true since the graph's diameter upper bounds td(v) for every v. If T T = ∅, then set td max = 0. See illustration in Figure 5. If v is not a triplet witness, then it is called a triplet head (note that v may still be a potential triplet witness that was not chosen as a witness). Observation 3.2. Let v ∈ V be a triplet vertex. All vertices in the triplet subtree rooted at v are white, except (possibly) for v itself, which is either white or blue. If v is not a triplet head, then v is white as well. Proof: Consider the set T T C = T T ∩ C, and let v be a vertex in T T C with maximal triplet depth (among the vertices of T T C ). 
By the way we define td(v) we know that v ∈ T T td(v) \ T T td(v)−1 , and since PW i ⊆ T T i ∪ WT 2 for all i, we conclude that v is not a triplet witness, and therefore it is a triplet head. Definition 3.4. A virtual vertex or virtual leaf is a white leaf with odd label z 2i+1 that exists only on the dense graph, and is adjacent to a vertex z 2i that is a triplet head on the underlying graph. A vertex that is not virtual is called real, and each real vertex has at most one virtual neighbor. The dense graph is created by replacing all triplet witnesses of each triplet head, along with their entire subtrees, with a single virtual vertex colored white (see Figure 5(b)), thus converting each triplet subtree into a subtail of length 2. This operation can be performed as follows: Procedure Densify: /* Note that G t does not contain virtual vertices. */ 2. For each z 2i ∈ T : (a) Disconnect the edges between z 2i and its triplet witnesses, and remove the components containing the triplet witnesses. (b) Create a new (virtual) white leaf z 2i+1 and add an edge between z 2i and z 2i+1 . 3. Return the resulting graph. The dense graphĜ t results from invoking the procedure Densify on G t . Phase 2 of the simulation: Box decomposition In the second phase of the simulation, the algorithm decomposes the dense graphĜ t into boxes, so that each vertex belongs to exactly one box. We start by defining the boxes and their possible types. Box types We now define a box, and the four possible box types. Definition 3.5. LetV t be the set of vertices inĜ t , and let Q ⊆V t be a connected subset of vertices in the dense graph. Q is a box inĜ t if it satisfies the following requirements. 1. Q is of (at least) one of four types: regular, dispensible, high leftover and corrupted, which are defined below. 2. Q contains at most two B 2 vertices. 3. If Q is not regular, then it has a blue vertex r called the box root, and r does not have a neighbor in Q that is a (white) leaf. For a vertex v ∈ Q, we define the internal neighbors of v to be its neighbors inside the box, and the internal degree of v to be the number of internal neighbors it has. From now on, whenever we consider the degree or the neighbors of a vertex in a specific box, we mean its internal degree and its internal neighbors, except where specifically noted otherwise. Definition 3.6. There are two types of dispensible boxes. i. u has internal degree 3, and it has two additional neighbors in Q, λ and u , such that λ is a high leaf, and u is the (B 2 ) lead of a tail of the form B 2 HH (note that this implies that u could be the root of a D 1 box). ii. u has internal degree 2, and it is the lead of a subtail of the form HHB 2 HH (in this case as well, the B 2 vertex on the tail could be the root of a D 1 box). A box is called dispensible, denoted by D, if it is dispensible of type 1 or 2. See Figure 6 for illustrations. Definition 3.7. A box Q inĜ t is called a high leftover box if all its vertices are high and it has a B root, and additionally, it does not contain triplet subtrees. There are several types of regular boxes, defined below. Definition 3.8. A box Q of size 3 or more is called a regular colored box if it satisfies exactly one of the following two properties, P 1 and P 2 , and additionally it satisfies Property P 0 defined below. , v, such that at least one of the following conditions is satisfied. (a) v is a leaf on a subtail of a vertex u, and u has a high subtail of length 3 or more and does not have white subtails of length 1 or 2. 
(b) v has a (high) subtail of length 3 or more, and no leaf neighbors. (c) v is a leaf and |Q| = 3 (i.e., Q is a dispensible box of type 1). P 2 . Q contains two B 2 vertices, v 1 and v 2 , such that the internal degree of v 1 is not greater than the internal degree of v 2 , and at least one of the following conditions holds. Property P 0 . Let v be a triplet vertex of depth 2 in a box Q of the dense graph. Then for every three white tails of length 2 of v whose tail leads are not all in PW (i.e., not all three tail leads are potential triplet witnesses in the underlying graph G t ), at least one vertex v = v in one of these tails is the parent of a box Q whose box root has internal degree at most 1 (i.e., Q is either a dispensible box of type 1, or a high leftover or corrupted box whose box root has at most one internal neighbor). Note that this relates to all white tails of length 2 of v, and not only the tails lead by the current triplet witnesses. Definition 3.9. A box Q is called a C 12 box if it contains exactly 12 vertices and is of one of the forms F 1 or F 2 (see Figure 8): Q is high, i.e., does not contain B 2 vertices, and it contains at least 3 vertices and satisfies If Q contains a split vertex and is not a C 12 box, it is called a regular complex box. If Q does not contain split vertices, it is called a regular path box. Definition 3.11. A box Q that is not dispensible, high leftover or regular is called a corrupted box. The decomposition In the second phase of the simulation, each maximal connected component C in the dense graphĜ t is decomposed into boxes according to the following definition. Definition 3.12. LetV t be the set of vertices inĜ t , and let Q = {Q 1 , Q 2 , ...} be a partition ofV t into boxes, i.e., a collection of subsets ofV t satisfying Definition 3.5, such that: Q is called a box decomposition ofĜ t if it satisfies the following properties. A 5 . The parent box of a high leftover box is not high. A 6 . Edges between boxes always connect a box root to its parent, and are called external edges. Component types are defined according to their root boxes. For example, a component whose root box is dispensible is called a dispensible component. One exception is that a component is called a corrupted component if it contains a corrupted box anywhere in it (regardless of the type of its root box). Finally, we define a semi-corrupted component. Definition 3.13. A component C of the dense graphĜ t is called semi-corrupted if every one of its box decompositions contains a corrupted box, but there exists some v ∈ C such that if m t+1 = v, then there exist an underlying graph G t+1 , a corresponding dense graphĜ t+1 and a box decomposition Q ofĜ t+1 that does not contain corrupted boxes, and the gain from playing v on G t is at least 8 points. We are now ready to introduce the invariant that must be maintained by the algorithm. Invariant I. Let t be any step in the game. If move m t is played by Dominator, then there exists a box decomposition Q of the dense grapĥ G t that does not contain corrupted boxes. If move m t is played by Staller, then there exists a box decomposition Q of the dense graphĜ t that contains at most one corrupted box, and if such a box exists then it is in a semi-corrupted component. 1. The underlying graph G t and the corresponding dense graphĜ t are good ifĜ t has a valid box decomposition, i.e., a box decomposition satisfying Invariant I. 2. 
A component C of the dense graphĜ t or the underlying graph G t is good if a graph containing only this component is good. See Figure 9 for an example component on the dense graph and a valid box decomposition. In Section 4 we show that there exists an algorithm following the described outline, such that for every t, if at the end of step t the dense graph is good (i.e., it satisfies Invariant I), and the average gain up to (and including) step t is at least 5 points, then the algorithm guarantees that the average gain at the end of some future step t > t is at least 5 points. This, in turn, guarantees also that the average gain at the end of the game is at least 5 points. Algorithmic details The outline described in the previous subsections gives rise to the following naive (and highly inefficient) implementation. 5. Return m as the selected move for step t. Section 5 contains a short discussion of more efficient implementations for the strategy outlined in Sections 3.1 through 3.3. Simplified algorithm for forests in which no two leaves are at distance 4 Conjecture 1 was proved in [2] for isolate-free forests in which no two leaves are at distance exactly 4, using a much simpler algorithm than the one described in this thesis. We note that our algorithm can also take a simpler form when used on this family of graphs. It may be instructive to consider this variant, in order to pinpoint the aspects of our algorithm that were needed in order to handle the possible existence of pairs of leaves at distance 4. Observe that if no two leaves are at distance 4 from each other, then G does not contain triplet subtrees. Therefore, there is no difference between the underlying graph G t and the dense graphĜ t (and since Property P 0 refers to triplet subtrees, this property also becomes irrelevant). Additionally, it is always possible for Dominator to make a move gaining at least 7 points on any regular colored or high box whose size is greater than 2 (this fact follows from Claims 4.20 and 4.21 which appear later in the analysis of Dominator moves), and any Staller move on such boxes gains at least 3 points (by Claim 4.17 in the analysis of Staller moves), and in both cases all resulting boxes can be regular boxes that are not C 12 boxes. As a result, there is no need to perform the box decomposition, and the following simpler algorithm suffices (compare this to the algorithm in Section 3.1): Suppose t moves (t < T ) were already played, and the algorithm needs to decide on the (t + 1)st move (if it is a Dominator move), or preprocess for step t + 2 (if t + 1 is a Staller move). 1. If move m t+1 is performed by Staller, then the new graph G t+1 is generated from G t in a way that guarantees that at least 3 points are gained by Staller's move m t+1 , and that all components of G t+1 are regular complex or regular path boxes. 2. Otherwise (move m t+1 is a Dominator move), move m t+1 is chosen greedily for Dominator from the vertices of G t , such that the gain is maximal among all such moves which result in a graph G t+1 all of whose components are regular complex or regular path boxes. If there are ties, they are broken arbitrarily. Note that when all components are of size 2, all moves (by both players) gain at least 5 points. The complex form of the algorithm for general isolate-free forests results from the fact that in some cases, Dominator must play on graphs that only contain split vertices with two or three tails, and all these tails are white tails of length 2. 
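To make the simplified selection rule above concrete, here is a sketch of the greedy step under a deliberately simplified value function (every white vertex is worth 3 points, every blue vertex is treated as a B 2 vertex worth 2 points, and red vertices are removed from the graph); the check that all resulting components are regular complex or regular path boxes is left as a stub, so this is a skeleton of the selection rule under our own assumptions rather than a full implementation.

```python
WHITE, BLUE = "W", "B"   # red vertices are simply removed from the graph (Observation 2.17)

def value(color):
    # Simplified value function: white = 3 points, blue = 2 (i.e. every blue
    # vertex is treated as B2; the real algorithm may keep some blue vertices as B3).
    return 3 if color == WHITE else 2

def apply_move(graph, colors, v):
    """Graph and colors after playing v: newly dominated vertices are recoloured
    and vertices that become red are dropped."""
    new_colors = dict(colors)
    new_colors[v] = None                              # the played vertex becomes red
    for u in graph[v]:
        if new_colors.get(u) == WHITE:
            new_colors[u] = BLUE                      # newly dominated neighbours turn blue
    for u in list(new_colors):                        # blue vertices with no white neighbour turn red
        if new_colors[u] == BLUE and not any(new_colors.get(w) == WHITE for w in graph[u]):
            new_colors[u] = None
    new_graph = {u: [w for w in nbrs if new_colors.get(w) is not None]
                 for u, nbrs in graph.items() if new_colors.get(u) is not None}
    return new_graph, {u: c for u, c in new_colors.items() if c is not None}

def gain(graph, colors, v):
    _, after = apply_move(graph, colors, v)
    return sum(value(c) for c in colors.values()) - sum(value(c) for c in after.values())

def components_are_valid(graph, colors):
    # Stub: the real test is that every component is a regular complex or
    # regular path box (Definitions 3.8-3.10); omitted here.
    return True

def dominator_move(graph, colors):
    # Every non-red vertex is a legal move (red vertices are not in the graph).
    candidates = [v for v in colors
                  if components_are_valid(*apply_move(graph, colors, v))]
    return max(candidates, key=lambda v: gain(graph, colors, v))

# Example: a path on 5 vertices, all initially white; the best first move is a
# vertex at distance 1 from an end, gaining 3 + 3 + 1 = 7 points here.
g = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
cols = {v: WHITE for v in g}
best = dominator_move(g, cols)
print(best, gain(g, cols, best))
```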
The need to handle such configurations prompted the creation of the dense graph (for handling triplet subtrees), as well as the addition of dispensible boxes, which contain subtrees that can be ignored by Dominator (i.e., that cannot reduce the gain of playing on their parent boxes; see Lemma 4.4 for details). High leftover and corrupted boxes were added in order to handle Staller moves on these "hidden" subtrees, while Property P 0 and C 12 boxes were added in order to make sure Dominator can always make moves that achieve the desired gain. The analysis section that follows covers all possible moves on all types of boxes, but the core of the algorithm remains this simplified version.

Analysis We separate the analysis into several parts, and show that Invariant I is always satisfied, and that an average gain of 5 points per move is achieved. In Section 4.1 we introduce sufficient conditions for guaranteeing Dominator's win. In Section 4.2 we show properties that simplify the case analysis, and Section 4.3 contains an analysis of two special subtrees that appear in many of the cases. We then analyze all possible moves and the resulting graphs. The possible outcomes of Staller moves are analyzed in Section 4.4, and those of Dominator moves in Section 4.5. Lastly, in Section 4.6 we combine all the results to conclude that the algorithm outline proves Conjecture 1.

A policy for ensuring high average gain We have seen that it suffices to guarantee an average gain of 5 points per move (Claim 2.15), and also that the last move on any component gains at least 5 points (Corollary 2.21). Therefore, it suffices to guarantee that each pair of consecutive Dominator and Staller moves gains at least 10 points in order to make sure that Dominator wins the game.

Definition 4.1. Define the excess gain of move m t at step t, denoted by ψ t , as follows. Additionally, define the cumulative excess gain at step t to be the sum of excess gains in steps 1 through t, and denote it by Ψ t . Observation 4.2. Each of the following conditions is sufficient in order for Dominator to win. 2. If Ψ T ≥ 0, then the game ended with an average gain of at least 5 points, which is a sufficient condition according to Claim 2.15. 3. If T is odd and Ψ T ≥ −2, then T − 1 is even and 5 · (T − 1) + 7 − 2 = 5 · T points are gained in T moves. Therefore an average gain of 5 points is achieved throughout the game, which is a sufficient condition by Claim 2.15. If T is even and Ψ Therefore this is a sufficient condition. The following guarantees are maintained throughout the execution, as will be shown in the analysis. 3. The algorithm never relies on past gains, but rather on future gains. Namely, it guarantees that if ψ t (v) is negative at some point, then there will be positive excess gain in future moves to make up for it. Therefore, if Ψ t > 0 for some t, we can use the cumulative excess gain to convert B 2 vertices to B 3 .

Preliminary properties simplifying the analysis In this section we prove properties which will allow us to calculate a lower bound for the gain of playing a (real) vertex from the dense graph directly on the box containing it in the dense graph. First, we describe the difference between playing on the dense graph and playing on the underlying graph. Then, we prove that it suffices to analyze moves on the dense graph by analyzing them directly on the box that contains them.
Since Dominator moves are chosen from vertices of the dense graph, this includes all Dominator moves and all Staller moves that are on the dense graph, i.e., all moves that are not under triplet subtrees. Staller moves that are not on the dense graph will be analyzed separately in Section 4.4. vertex which exists both in G 1 and in G 2 . If g points are gained when v is played directly on G 2 (i.e., without invoking Densify()) and the resulting graph (again, without invoking Densify()) is G 2 , then g points can be gained when v is played on G 1 , yielding G 1 , and the dense graph G 1 resulting from Densify( G 1 ) is the same as G 2 except that it may have (at most three) additional B 2 W components. See Figure 10 for an illustration of the graphs in the claim. Proof: Consider G 1 , G 2 and v as in the claim. Since G 2 = Densify(G 1 ), the only difference between the graphs G 1 and G 2 is the replacement of triplet subtrees from G 1 with white leaves in G 2 . Therefore, if G 1 does not contain triplet vertices, then G 1 = G 2 . Assume that G 1 contains at least one triplet vertex, and let u be a triplet head (guaranteed to exist by Claim 3.3). Let u 1 , u 2 and u 3 be the triplet witnesses of u in G 1 , and let λ be the white leaf adjacent to u in G 2 that is not in G 1 (the procedure Densify guarantees that exactly one such leaf exists, since it creates a single virtual leaf next to each triplet head). Denote by T u the set of vertices in the triplet subtree rooted at u in G 1 , excluding u. Observation 3.2 guarantees that all vertices in T u are white, and we know that λ is white. λ is not in G 1 , and all vertices of T u are not in G 2 , therefore by the claim's assumption, v cannot be any of these vertices. If v = u, then the above implies that all vertices of T u (in G 1 ) and λ (in G 2 ) remain white. Therefore u is a triplet head in G 1 with the same triplet witnesses (this results from the way we choose triplet witnesses when there are more than three potential witnesses), which implies that u has one virtual (white) leaf neighbor in G 1 = Densify( G 1 ). Note that u itself has the same color (and value) in G 1 and in G 2 . Figure 11: (a) An example triplet subtree rooted at a vertex u in G 1 , with triplet witnesses u 1 , u 2 and u 3 . All unlabeled vertices are white. (b) The components of G 1 resulting from the triplet subtree following the move v = u. If v = u, then λ becomes red when v is played on G 2 , and adds exactly 3 points to the gain (since it is white in G 2 ). In G 1 , each of the triplet witnesses u 1 , u 2 and u 3 becomes a blue vertex in a separate component, and may be set as a B 2 or B 3 vertex. For each 1 ≤ i ≤ 3, if u i ∈ WT 2 in G 1 , then u i is in a BW component in G 1 , and otherwise (i.e., if u i is a triplet vertex in G 1 ), u i is a blue triplet head of degree 3 in a component of G 1 . See Figure 11 for an example. In both cases, u i is a blue vertex in a BW component in G 1 , and can be converted to B 2 without violating Invariant I. Therefore exactly 3 points can be gained from the vertices of T u . The above shows that each virtual leaf in G 1 has a corresponding white leaf in G 2 , except possibly for three BW components, and that each white leaf in G 2 has a corresponding white leaf in G 1 . The claim follows from the fact (mentioned above) that all other vertices (except for the vertices in the three discussed BW components) are the same in both graphs. 
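For reference, the Densify step discussed above can be sketched as follows. Identifying triplet heads and choosing their witnesses requires the iterative TT/PW construction of Section 3 and is taken here as given input; the graph representation and helper names are our own assumptions. Virtual leaves receive the odd label z 2i+1 next to their (even-labelled) triplet head z 2i, as in Definition 3.4.

```python
def densify(graph, triplet_heads, witnesses):
    """Replace, for each triplet head z2i, its chosen triplet witnesses and their
    entire subtrees by a single virtual white leaf labelled z2i + 1.

    graph          -- adjacency dict of the underlying graph G_t (even labels only)
    triplet_heads  -- iterable of triplet-head labels (assumed to be given)
    witnesses      -- dict mapping each triplet head to its chosen witnesses
    """
    dense = {v: set(nbrs) for v, nbrs in graph.items()}

    def remove_subtree(root, parent):
        # Delete root and everything hanging below it (away from parent).
        stack = [(root, parent)]
        while stack:
            v, p = stack.pop()
            for u in dense.get(v, ()):
                if u != p:
                    stack.append((u, v))
            for u in dense.pop(v, set()):
                dense.get(u, set()).discard(v)

    for head in triplet_heads:
        for w in witnesses[head]:
            remove_subtree(w, head)
        virtual = head + 1                      # odd label marks a virtual vertex
        dense[head].add(virtual)
        dense[virtual] = {head}
    return dense

# Hypothetical input: head 10 with three witnesses (12, 14, 16), each leading a
# white tail of length 2, plus one further leaf neighbour 18.
g = {10: {12, 14, 16, 18}, 12: {10, 20}, 20: {12}, 14: {10, 22}, 22: {14},
     16: {10, 24}, 24: {16}, 18: {10}}
print(densify(g, [10], {10: [12, 14, 16]}))
```

On this toy input the three witness tails collapse into a single virtual leaf (label 11) adjacent to the head, turning the triplet subtree into a subtail of length 2, which is exactly the effect described for Procedure Densify above.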
We now prove that in some cases, it is possible to calculate a lower bound for the gain of a move m t by calculating its gain on another, simpler graph. Specifically, this simpler graph can be any graph that contains only the vertices of the box Q that contains m t , in some valid box decomposition Q of G t−1 . We use this important property later in the analysis of Dominator and Staller moves. Suppose g points can be gained by playing v in G t−1 and the resulting underlying graph G t is good. Then it is possible to play v inĜ t−1 so that it leads to a goodĜ t . More precisely, the following properties hold. If G t does not contain a semi-corrupted component, and one or more of the following conditions is satisfied: 3. The box root r of Q does not exist inĜ t (i.e., r becomes red. Note that this may occur even if r = v). 4. The box root r of Q exists in G t (i.e., r does not become red). Then at least g points are gained by playing v inĜ t−1 , and the resulting graphĜ t is good (namely, it has a valid box decomposition) and does not contain semi-corrupted components. Procedure InterPart: 1. For each box Q i ∈ Q whose vertices exist inĜ t (i.e., such that none of its vertices become red in move m t ), add Q i to P. Note that this includes all boxes of Q excluding Q, and possibly excluding its parent box P (if exists) as well as some high leftover boxes of size 1 from Q whose root (and only) vertex becomes red. 2. Add all boxes of Q 1 to P. 3. If Q is not a root box inĜ t−1 , and its box root r is inĜ t but not in G t , then add the box Q r of size 1 containing r to P. Note that Q r is either high leftover (if r is high) or corrupted (otherwise). 4. If Q is not a root box inĜ t−1 , and not all vertices of its parent box P are inĜ t , then add the sets Q i containing all maximal connected subsets of P to P. 5. For every component C 1 of size 2 inĜ t , add the box P 1 containing all vertices of C 1 to P (if it is not already in P). Note that P is not necessarily a valid box decomposition. Also observe that P is a partition of the vertices ofĜ t , and that all the sets added to P by the above five steps are boxes. After performing Procedure InterPart, we invoke Operation DisconnectExtBlue on the resulting graphĜ t and disconnect all external edges in P that connect two blue vertices. Operation DisconnectExtBlue: For every edge e = (u 1 , u 2 ) that is external in P, do: If u 1 and u 2 are both blue, remove e. Note that after performing Operation DisconnectExtBlue there are no parent boxes of size 1 in P (since box roots are blue). We now consider the partition P under the three different settings specified in the claim, and describe in each setting how P can be modified into a valid box decomposition ofĜ t , soĜ t is good, while achieving the desired properties (i.e., a gain of at least g points with no corrupted boxes in Case C 1 , a gain of at least g − 3 points with no corrupted boxes in Case C 2 , and a gain of at least g points with at most one semi-corrupted component in Case C 3 ). This will imply the claim. We rely on the following three claims. Proof: Let P 1 be a parent box of size 2 in P, and assume towards contradiction that P 1 is of the form W W . Recall that Observation 2.20 guarantees that all neighbors of each white vertex are still in the graph. Also recall that Definitions 3.5 and 3.12 guarantee that each box that is not a root box contains a blue vertex (the box root), and that a parent box is of size 3 or more (Property A 4 ). 
Since P 1 is in P and P was constructed using Procedure InterPart, we conclude that P 1 was added to P for one of the following three reasons. The first option is that P 1 was in Q. This is impossible, since then P 1 would be a parent box of size 2, which contradicts Property A 4 of Definition 3.12. The second option is that P 1 is in Q 1 . This is impossible when P 1 is white, since a box of the form W W cannot be a parent box and cannot have a parent, which means that it must be in a component of size 2 in G t . This contradicts the fact that the component containing P 1 in G t must contain a blue vertex by Observation 2.20. The third option is that P 1 ⊆ P , where P is the parent box of Q in Ĝ t−1 . This is also impossible when P 1 is white since, as in the previous case, P must contain a blue vertex by Observation 2.20. The claim follows. Claim 4.6. All boxes of size 2 in P are root boxes. Proof: Let P 1 be a box of size 2 in P. Denote by u 1 and u 2 the blue and white vertices of P 1 , respectively. If P 1 was a root box inĜ t−1 , then it is clearly also a root box inĜ t . Otherwise, we know that u 1 was not a box root: Assume towards contradiction that u 1 was a box root inĜ t−1 , and denote the box which contained u 1 and u 2 inĜ t−1 by Q 0 . Definition 3.5 guarantees that box roots do not have neighbors that are white leave, therefore u 2 was not a leaf in Q 0 , and had a neighbor, u 3 . Since u 2 is white inĜ t , we conclude from Observation 2.20 that u 3 is inĜ t (and Procedure InterPart guarantees that it is in P 1 ), in contradiction to the assumption that P 1 is of size 2. Therefore u 1 was not a box root in Q, which means that it does not have a parent in P. In all cases, we conclude that P 1 is a root box in P. Then G t is good, and P can be modified into a valid box decomposition ofĜ t that does not contain corrupted boxes, while gaining at least g 0 points. Proof: We start by handling the case that P does not satisfy Property A 5 , i.e., it contains a high box that is the parent of at least one high leftover box. Consider the following operation. Operation JoinHigh: While there are high boxes P 1 and Q 1 such that P 1 is the parent of Q 1 , do: Remove both boxes P 1 and Q 1 from P and replace them with the single box Q 2 that contains all vertices of Q 1 and P 1 . After performing Operation JoinHigh, P satisfies Property A 5 , and the gain does not change. It is possible that now P is a valid box decomposition ofĜ t . If this is not the case, it remains to modify P so that it satisfies Property A 4 . From Claims 4.5 and 4.6 we conclude that all parent boxes of size 2 in P are root boxes of the form BW , and we have seen earlier that after performing Operation DisconnectExtBlue P does not contain parent boxes of size 1. Let P 1 be a (root) parent box of size 2 in P, and let Q 1 be another box in P such that P 1 is the parent of Q 1 . Consider the following operation. See Figure 12 for illustrations. Operation FixBWParent: For every parent box P 1 = (u 1 , u 2 ) in P, such that u 1 is blue and u 2 is white, do: 1. Find a box Q 1 in P such that P 1 is the parent of Q 1 , and denote the box root of Q 1 by r 1 . 2. Remove both boxes P 1 and Q 1 from P and replace them with the single box Q 2 that contains all vertices of Q 1 and P 1 . 3. Convert u 1 and r 1 to B 2 (if they are high). 4. If Q 1 was a dispensible box of type 2, then: (a) Denote by r 2 the B 2 vertex of Q 1 that is not r 1 , and denote by u 3 and u 4 the two vertices in Q 1 that are on its high subtail. 
(b) Remove Q 2 from P, and replace it with the following two boxes: The box Q 3 , which contains all vertices of Q 2 except for r 2 , u 3 and u 4 , and the box Q 4 , which contains r 2 , u 3 and u 4 with box root r 2 . Observe that Operation FixBWParent may only increase the gain (if u 1 or r 1 is high). We make the following two observations. First, if Q 1 was a dispensible box of type 1 or a high leftover box, then Q 2 is a regular colored box, since it satisfies Property P 2 :(b) of Definition 3.8 (see Case (a) in Figure 12). Second, if Q 1 was a dispensible box of type 2, then the box Q 3 is a regular colored box (again by Property P 2 :(b)), and the box Q 4 is dispensible of type 1 (see an example in Case (b) of Figure 12). We conclude that after performing Operation FixBWParent, the resulting P satisfies Definition 3.12 and does not contain corrupted boxes, and therefore it is a valid box decomposition ofĜ t , and that at least g 0 points are gained. For Cases C 1 and C 2 , it remains to show that in each setting, the gain is as described in the theorem and P satisfies Properties A 1 , A 2 , A 3 and A 6 of Definition 3.12 before performing Operations JoinHigh and FixBWParent, and then the theorem will follow from Claims 4.6 and 4.7. In Case C 3 , we cannot use Claim 4.7 directly. Case (C 1 ): Let us first consider the setting of Case (C 1 ), in which G t does not contain a semicorrupted component, and one of the four conditions (1)-(4) is satisfied. We examine each of these conditions. Subcase (1): First, suppose Condition (1) holds, namely, Q is a root box inĜ t−1 . Since box roots are always blue, external edges always connect a box root to its parent, and root boxes do not have parents (see Definitions 3.5 and 3.12), we conclude that all external neighbors of vertices of Q (i.e., neighbors that are inĜ t−1 but not in Q) are blue box roots. This implies that if a vertex in Q does not have internal white neighbors (i.e., white neighbors that are also in Q), then it does not have any white neighbor. We conclude that Q does not contain vertices that are inĜ t but are not in G t (i.e., vertices that are red in G t but blue inĜ t ). Additionally, if an external neighbor u does not have internal white neighbors in its box in Q, then it must be in a high leftover box of size 1, which means (since box roots cannot be parents) that it does not have additional external neighbors except its parent. Therefore, the only case in which an external neighbor's color changes as a result of playing v is when a vertex in a high leftover box of size 1 becomes red, and in such a case 3 more points are gained and no additional boxes are modified. We make the following four observations regarding the properties of Definition 3.12. First, each connected component contains at most one regular box, i.e., Property A 1 is satisfied. This is because all boxes from Q that were not in the same component with v are in P, all boxes of Q\ {Q} that were in the same component with v were not regular, and Q 1 is a valid box decomposition and therefore satisfies this condition. Second, P does not contain corrupted boxes, and therefore Property A 2 is satisfied. This is because the only potentially corrupted box is Q r , generated in step 3 of Procedure InterPart, and from the previous paragraph we conclude that it does not exist when Q is a root box. Third, Property A 3 holds, since all box roots have at most one external neighbor, and if such a neighbor exists then it is not another box root. 
This is because all external edges connecting blue vertices were disconnected in Operation DisconnectExtBlue (and box roots are blue). Fourth, all external edges connect a box root to its parent, because all external edges of P are either external edges in Q 1 , or were external edges in Q. Therefore Property A 6 holds. We conclude from Claim 4.7 that P can be modified into a valid box decomposition ofĜ t while preserving a gain of g points or more, and thatĜ t does not contain semi-corrupted components, which is what we wanted to prove.

Subcase (2): Next, suppose Condition (2) of Case (C 1 ) holds, namely, Q is not a root box, and v is the box root of Q. Since only the box root v had a parent in Q, and v is red, we conclude that all boxes from Q 1 are root boxes in P. Therefore all components except, possibly, for components which contain vertices from the parent box P , satisfy Properties A 1 , A 2 , A 3 and A 6 , for the same reasons as in the previous subcase. It remains to handle the components that contain vertices from P . Consider the boxes P i in P which resulted from the parent box P in Q. Exactly one of the following cases occurs. 1. All vertices of P are inĜ t . Then the box P remains as it was, except that maybe the vertex p that was the parent of the box root r of Q was converted from white to blue. Whether P was a regular, dispensible or high leftover box, it is still of the same type that it was, since all these properties are still satisfied if a single vertex is converted from white to B 3 (see Section 3.3.1). Therefore in this case P satisfies Properties A 1 , A 2 , A 3 and A 6 . 2. Some vertex of P became red and is not inĜ t . Then at least 2 additional points were gained, and they can be used to convert all remaining vertices of P to high vertices (since each box contains at most two B 2 vertices). Claim 4.6 guarantees that all resulting boxes of size 2 are root boxes, and we conclude that all properties of Definition 3.12 are satisfied on these components as well. From Claim 4.7, we conclude that P can be modified into a valid box decomposition ofĜ t while preserving a gain of g points or more, and thatĜ t does not contain semi-corrupted components.

Subcase (3): Next, consider Condition (3) of Case (C 1 ), namely, the box root r of Q does not exist inĜ t (i.e., r becomes red), and assume that v ≠ r. Then all boxes of Q 1 are root boxes in P (since only the box root r could have a parent), and all components that do not contain vertices from Q 1 remain as they were inĜ t−1 . As in the previous subcase, we conclude that Properties A 1 , A 2 , A 3 and A 6 are satisfied by P, and therefore from Claim 4.7 we conclude thatĜ t is good and does not contain semi-corrupted components, and at least g points are gained.

Subcase (4): Finally, suppose Condition (4) of Case (C 1 ) holds, i.e., Q is not a root box and the box root r of Q exists in G t (that is, it does not become red). Denote by Q 1 the box containing r in Q 1 . Since box roots do not have internal neighbors that are white leaves, and Observation 2.20 guarantees that the neighborhood of a white vertex remains as it was, we conclude that Q 1 is not of size 2. Therefore Properties A 1 and A 6 are satisfied in P. Properties A 2 and A 3 are satisfied for the same reasons as in Subcase (1). We conclude from Claim 4.7 thatĜ t has a valid box decomposition that does not contain corrupted boxes, and that at least g points are gained.
In all the above subcases, the gain inĜ t is at least as high as the gain in G t and the resulting graph is good (with no semi-corrupted components). Hence Case (C 1 ) follows. Case (C 2 ): We now turn to Case (C 2 ) of the claim, in which Q is not a root box, and the box root r is inĜ t but not in G t . Consider the following operation. Operation ConvHigh: The difference between the gain in G t , and the gain inĜ t after performing Operation ConvHigh (if r was B 2 ), is at most 3 points. Since r was a box root and box roots cannot be parents, the box Q r containing r in P is a high leftover box of size 1 that is not the parent of another box, and its parent box P remains as it was inĜ t−1 . We conclude that the component C containing r satisfies all properties of Definition 3.12 except, possibly, for Property A 5 (i.e., P may also be a high box). All other components can be analyzed as in Case (C 1 ) above, and therefore we conclude from Claim 4.7 that P can be converted into a valid box decomposition ofĜ t while gaining at least g − 3 points, and thatĜ t does not contain semi-corrupted components. Case (C 3 ): Finally, we consider Case (C 3 ). If G t contains a semi-corrupted component C and one of the specified conditions holds, then at least one of the conditions of Case C 1 above is satisfied (except that now one of the resulting components may contain a corrupted box). All components except, possibly, for the component C * containing vertices from C , satisfy the conditions of Claim 4.7, and therefore P can be modified so that its restriction to these components is valid, while preserving the number of points gained on them. It remains to handle C * . Property A 4 of Definition 3.12 guarantees that the root box of Q 1 that is in C is a box Q C of size at least 3 in P, therefore Property A 4 is also satisfied by P. Since P contains a single corrupted box, we conclude that P satisfies Properties A 1 , A 2 , A 3 and A 6 on C * for the same reasons as in Case (C 1 ) above. Therefore, after performing Operation JoinHigh, P is a box decomposition ofĜ t and at least g points are gained. It remains to check whether C * is semi-corrupted. In order for C * to be semi-corrupted (and for P to be a valid box decomposition ofĜ t ), we need to check if it contains a move u gaining at least 8 points with a good resulting graphĜ t+1 . Since C is semi-corrupted, it contains a move u gaining at least 8 points when played in G t . Under each of the conditions of the claim, one of the conditions of Case C 1 is satisfied for step t + 1 (since the move u in question is either a box root, or on a root box), and we conclude that at least 8 points can be gained when playing onĜ t so thatĜ t+1 is good. Therefore C * is semi-corrupted, as desired. This concludes the proof of Lemma 4.4. Two special subtrees In this section we focus attention on two types of subtrees that occur frequently in subsequent analysis, and describe how the algorithm may cope with such subtrees, and what moves can be used in the analysis. We start by defining the subtrees. 1. u has (at least) one neighbor that is a B 3 leaf. 2. u has (at least) two high tails of length 2. 3. There is a valid box decomposition where none of the vertices in these tails are box roots. Note that this implies that u is white. If u has at most one additional white neighbor, and this neighbor (if exists) is not the lead of a white tail of length 1 or 2, we say that u is a strong fix vertex. 
The subtree containing u and the three specified tails is called the fix subtree rooted at u. See If u has at most one additional white neighbor, and this neighbor (if exists) is not the lead of a white tail of length 1 or 2, we say that u is a strong semi-triplet vertex. The subtree containing u and the three specified tails is called the semi-triplet subtree rooted at u. See Figure 14. where k is the number of points needed in order to convert the resulting graph to a good graph, and k ≤ 2. If Q contains a strong fix subtree, then Dominator gains at least 10 − k points. Proof: Let Q be a box inĜ as described. If Q contains a fix subtree, consider the move v which is a lead of a high tail of length 2 adjacent to u. Playing v converts at least 3 vertices (v and its adjacent leaf, and the B 3 leaf) to red, which gains at least 9 points, and no new B 2 vertices are created. Since Q contains at most two B 2 vertices, at most two points are needed in order to convert all resulting boxes to high, and therefore k ≤ 2. For a strong fix subtree, if 9 points are gained then no additional vertices became red (otherwise the gain would be greater than 9). Therefore, if u is converted to B 2 , one of the following cases occurs. Case (1): u is a strong fix vertex with a white neighbor that is the lead of a tail of length 2 that is not white. Then the resulting box is a path of the form BW B 2 HH, and it can be converted to a regular colored path box of the form B 2 W B 2 HH (if it is not already so). Case (2): Either u does not have additional neighbors, or its neighbors are not leads of tails of length 2. After converting all B 2 vertices except u in the box containing u to B 3 , and possibly disconnecting edges between u and its blue neighbors (so that u has at most one neighbor), u and the remaining subtail from the fix subtree can be separated into a dispensible box of type 1, with u as the box root. In both cases, at least 10 − k points are gained. Lemma 4.11. Let Q be a box in a valid box decomposition of the dense graphĜ. If Q contains a semi-triplet subtree before Dominator's move, then the following properties hold. 1. If the semi-triplet subtree has a B 3 leaf, then at least 11 − k points are gained in the following move, where k is the number of points needed in order to convert the resulting graph to a good graph, and k ≤ 2. 2. If the semi-triplet subtree is strong, then at least 8 − k points are gained in the next move. Proof: Let Q be a box inĜ as described that contains a semi-triplet subtree rooted at a vertex u. We analyze the different cases. Case (1): The semi-triplet subtree rooted at u contains a B 3 leaf, λ. If u is played, then at least 3 + 3 + 3 + 1 + 1 = 11 points are gained from the vertices that become red (u, λ and the neighbor of λ) and from converting B vertices in resulting components of size 2 to B 2 . We note that if less than two BW components are created as a result of playing u, then at least one additional vertex was converted to red and therefore at least 3 · 4 = 12 points are gained. The claim follows, since Q contains at most two B 2 vertices and therefore k ≤ 2. Case (2): The semi-triplet subtree rooted at u does not contain a B 3 leaf. Then it must contain a B 3 tail lead, u 1 . Let v be another tail lead. As a result of playing v, at least 3 + 3 + 1 = 7 points can be gained from the resulting red vertices (v and the adjacent leaf), and from disconnecting u 1 to a component of size 2 and converting it to B 2 . 
As before, k ≤ 2, and at least 7 − k points are gained. If the semi-triplet subtree is strong, then an additional point can be gained by converting u to B 2 : If u does not have an additional white neighbor that is not in the triplet subtree, or if u has such a neighbor and it is not the lead of a subtail of length 1 or 2, then u can be a box root of a dispensible box of type 1 (possibly after disconnecting edges between u and its blue neighbors). Otherwise, u has a white neighbor that is the lead of a subtail of length 2, and this subtail is not white (note that u cannot have a white leaf neighbor by the definition of strong semi-triplet). We conclude that u is in a box of the form BW B 2 HH, and therefore it can be converted to a regular colored path box by converting all B vertices to B 2 . Results of Staller moves In this section we analyze all possible Staller moves, i.e., the result of Staller playing any vertex m 2t of the underlying graph. Notice that m 2t may be in the dense graph, or in a triplet subtree in the underlying graph. Theorem 4.12 summarizes all the possible outcomes of Staller moves. Theorem 4.12. If Staller plays on a vertex v in G t−1 and G t−1 is good, then at least 3 points are gained and the resulting underlying graph, G t , is good. We separate the proof of the theorem into several claims, and note that Lemma 4.4 guarantees that it suffices to analyze each move inside the box containing it (and calculate the gain on the underlying graph accordingly). First, we extend the definition of box decomposition to the underlying graph in the following natural way: Definition 4.13. A decomposition Q of the set V t of vertices of the underlying graph G t is called a box decomposition, if the decompositionQ t which results from Q by replacing (without repetitions) each vertex of the underlying graph G t that is not on the dense graphĜ t with the virtual leaf that replaces them onĜ t (i.e., the vertex z 2i+1 adjacent to the nearest triplet head) is a box decomposition ofĜ t . Claim 4.14. If Staller plays on G t−1 a vertex v that is not on the dense graphĜ t−1 , then at least 3 points are gained and the resulting dense graphĜ t is good. Figure 15: All possible Staller moves on G t−1 (marked as v) on triplet witnesses as described in the proof of Claim 4.14. Each tail represents the subtree of a triplet witness, which is either a real tail (of the witness is in WT 2 ), or a triplet subtree. In all cases, (a) is the graph G t−1 before the move and (b) is the resulting graph G t . Proof: If v is not on the dense graphĜ t−1 , then it is in a triplet subtree rooted at a vertex u, and u is in some box Q onĜ t−1 . There are three cases to consider, illustrated in Figure 15. Case (1): v is not a leaf. Then at least 3 + 3 = 6 points can be gained from v and its triplet witnesses or its adjacent leaf, and all resulting components onĜ t are BW components (if v was in T T ), and the component containing u. Since Q contains at most two B 2 vertices, at least 6 − 2 = 4 points can be gained while converting the boxes resulting from Q (that are not BW components) to high, and therefore Invariant I is satisfied, so the resulting dense graphĜ t is good. Case (2): v is a leaf at distance 2 from u, and u is blue. Then at least 6 points are gained since v and its neighbor become red. Therefore, as before, at least 6 − 2 = 4 points can be gained while satisfying Invariant I. Case (3): v is a leaf and the nearest split vertex v 1 is white. 
Therefore 3 points are gained, and the resulting box Q contains a fix vertex (v 1 ). If Q is corrupted, then at least one of the following three subcases occurs. Subcase (3.1): v 1 was not a triplet head in Q. Then v 1 is a strong fix vertex in Q , and Lemma 4.10 guarantees that at least 10 − 2 = 8 points can be gained while converting Q to a high box, and therefore the component is semi-corrupted. Therefore Invariant I is satisfied by the resulting dense graphĜ t . Subcase (3.2): Q contained a single B 2 vertex. Then Lemma 4.10 guarantees that at least 9 − 1 = 8 points can be gained while converting Q to a high box, and as in the previous item, this implies that Invariant I is satisfied by the resulting dense graphĜ t . Subcase (3.3.3): If Q was a regular colored box with two B 2 vertices, then Q is not corrupted, and therefore this case can be ignored. Note that the above subcases cover all graphs in which Q is corrupted, for the following reasons: If Q was a high leftover or high regular box then Q is not corrupted. Additionally, if Q was a regular colored box satisfying Property P 2 (Subcase (3.3.3)), then Q also satisfies it. Finally, we know that Q was not corrupted because Invariant I guarantees that there are no corrupted boxes before Staller's move. 3. If Staller plays v 17 , then at least 3 points are gained and the resulting box is a path of the form HHB 2 HB 3 (see Case (2) in Figure 17). The containing component is semi-corrupted since Dominator can play on the middle B 2 vertex of this box (which is the box root) and gain at least 2 + 3 + 3 + 1 = 9 points. guarantee that the resulting graph is good. Case (R 2 ): Q is a high regular box. Then any move gains at least 3 points and all resulting boxes can be high. Case (R 3 ): C is a C 12 box. The possible moves are described in Figure 18. If Staller plays v 1 , then the remaining box is semi-corrupted since Dominator can play v 4 in the next move and gain at least 3 + 3 + 2 + 1 = 9 points from the red vertices and from converting v 3 to B 2 , and the remaining box is a dispensible box of type 2. See Case (a) in Figure 19. Figure 19: Possible semi-corrupted boxes resulting from Staller moves on C 12 boxes, and moves v gaining at least 9 points. If Staller plays v 2 , then at least 5 points are gained and, after separating the vertices v 8 , v 9 and v 10 to a dispensible box of type 1 rooted at v 8 , the resulting box is high and satisfies Property P 0 , and therefore it is regular. If Staller plays v 3 , then at least 6 points are gained and all resulting boxes are regular colored path boxes and regular boxes of size 2. If Staller plays one of the vertices v 4 , v 6 , v 9 or v 10 , then at least 6 − 2 = 4 points can be gained while converting the B 2 vertices to high, and the resulting boxes are regular. If Staller plays v 5 or v 7 , then at least 3 points are gained and the resulting box (after separating v 8 , v 9 and v 10 to a dispensible box of type 1 rooted at v 8 ) contains a strong fix vertex and a single B 2 vertex (see Case (b) in Figure 19), and therefore it is in a semi-corrupted component. If Staller plays v 8 , then at least 3 points are gained from converting v 9 to a B 2 vertex in a BW box, and v 3 becomes a strong semi-triplet vertex in a corrupted box. From Lemma 4.11 we conclude that the box is semi-corrupted, since it is possible to gain at least 8 points in the following move (since k = 0). Case (R 4 ): Q is a regular colored box. 
Case P 1 :(c) (i.e., Q is a dispensible box of type 1) was already analyzed in Claim 4.15, and therefore we ignore it. Therefore exactly one of the following cases occurs (see Figure 7). Note that in all the following cases, we analyze the subtree rooted at some split vertex, and show that all the resulting boxes are regular. If an unexpected vertex becomes red, then at least two additional points are gained, and therefore the box containing this vertex can be converted to a high box. We therefore ignore this possibility in the case analysis. We split the analysis into two subcases, as follows. Subcase (a): v is a B 2 vertex. Then at least one of the following cases occurs. 3. v is a B 2 leaf and Q satisfies Case P 2 :(a) or Case P 2 :(b). Then, similarly to Case P 1 :(a), at least 3 points are gained from v and its neighbor, and if the resulting box cannot be converted to a high box while gaining at least 3 points, then it satisfies one of the cases P 2 :(a) and P 2 :(b). 4. v is a non-leaf vertex and Q satisfies Case P 2 :(b). Then, similarly to Case P 1 :(b), at least 3 points are gained from v and from an adjacent tail lead, and the resulting boxes are a high box, and regular colored path boxes (and possibly additional BW components). Subcase (b): v is a high vertex. Then at least 3 points are gained from v, and therefore no additional B 2 vertices need to be created. At least one of the following cases occurs. 2. Q contains a single B 2 vertex, u, satisfying Case P 1 :(b). Then exactly one of the following cases occurs. Notice that u is blue and therefore does not have neighbors that are blue leaves, which implies that it does not have leaf neighbors. (a) v is on a subtail of u, and is at distance 1 or 2 from u. Then at least 4 points can be gained from v and its neighbors, which means that at least 3 points can be gained while converting the box containing u to a high box. Dominator moves We have seen that if Staller plays on a good graph G t , then the resulting graph is also good. Our goal in this section is to prove the following theorem. Dominator chooses all moves greedily according to the guidelines in Section 3, then the resulting graphĜ t is good, and at least one of the following properties holds. Notice that we do not make requirements about the last move (step T ) because of Corollary 2.21, and therefore in all the following claims we only consider t such that t < T . The definition of semi-corrupted components guarantees that if a semi-corrupted component is created, then in the following Dominator move ψ ≥ 1, and therefore we focus on the case that there is a box decomposition that does not contain corrupted boxes. Let Q be a box decomposition of G t−1 that does not contain corrupted boxes. Case (P 2 ): Q satisfies Case P 2 :(a) or Case P 2 :(b). This splits further into the following two subcases. Subcase (1): There is a B 2 leaf on a subtail of length 2 of some vertex v. Since the leaf is a B vertex, its neighbor must be white. Exactly one of the following cases results from playing v. 1. v is high. Then at least 2 + 3 + 3 − 1 = 7 points can be gained while converting the remaining B 2 vertex in the box to high. 2. v is not high. Then at least 2 + 3 + 2 = 7 points are gained and the resulting boxes do not contain B 2 vertices. Subcase (2): Otherwise, there must be two B 2 leaves that are neighbors of the same vertex, v. Playing v gains at least 2 + 3 + 2 = 7 points, and the resulting boxes are high. Claim 4.21. 
If Q contains a high regular box Q of size 3 or more (including a high leftover root box), and Q is either a path or contains a split vertex s satisfying one of the following requirements: 1. s has four tails or more. 2. s has a tail that is not of length 2. 3. s has a tail of length 2 containing a B 3 vertex. Then ψ t ≥ 0 and the resulting graphĜ t is good. Proof: First, observe that if Q contains a high path of length 3 or more, then playing on the neighbor of a leaf on this path can gain at least 3 + 3 + 1 = 7 points, and if a box remains, it is a path with a B 2 leaf and no other B 2 vertices, and therefore it is a regular box. Otherwise, let Q be a high regular complex box, and s a split vertex in Q, as described. As before, we assume all high vertices are white, since if an unexpected vertex becomes red, at least two additional points are gained and the box containing the red vertex can be converted to a high box. We separate the analysis into cases according to the different conditions of the claim. See Figure 21 for illustrations. Case (c): s has a tail of length 3 or more. If Dominator plays the vertex v that is the neighbor of the leaf on the shortest tail, then exactly one of the following two subcases occurs. Subcase (1): The shortest tail is of length 2. Then playing v gains 3 + 3 + 1 = 7 points from the red vertices and from s, and the resulting box satisfies Case P 1 :(b) of Definition 3.8. Subcase (2): The shortest tail is of length 3 or more. Then playing v gains at least 3 + 3 + 1 = 7 points from v and its neighbors, and the resulting box satisfies Case P 1 :(a) of Definition 3.8. Case (d): All tails are of length exactly 2, and s has a tail of length 2 containing a B 3 vertex. Then at least one of the following two subcases occurs. Subcase (1): There is a B 3 leaf on a tail of s. Then playing s gains at least 3 + 3 + 3 + 1 = 10 points from the vertices that become red and the tail leads, and the resulting boxes are high boxes and boxes of size 2. Subcase (2): s has a B 3 tail lead, u. Then playing a vertex v that is another tail lead adjacent to s can gain at least 3 + 3 + 1 = 7 points from the red vertices and from u, and the resulting boxes are high boxes and boxes of size 2. Proof: First observe that if there is a dispensible box of type 1 that is a root box, then ψ t ≥ 0 by Claim 4.20 (in fact, ψ t ≥ 1). Therefore the dispensible component is of type 2. Recall that all possible moves appear in Figure 16. If Dominator plays v 7 or v 15 (according to the type of D 2 box), then two of the resulting components are dispensible components of type 1 in some valid box decomposition Q 1 ofĜ t . Therefore, one of the following cases occurs. Claim 4.23. If Q contains a high regular box of size 3 or more and ψ t < 0, then ψ t = −1, and additionally, ψ t+1 + ψ t+2 ≥ 1 and all resulting graphs are good. Let Q be a high regular box of size 3 or more as described. Let s be a split vertex in Q that has at most one neighbor that is not a tail lead (from Claim 2.28 we know that such a vertex exists), and such that all vertices in the tails of s are white, and assume that ψ t < 0. See illustrations in Figure 22. First, observe that if there is such a split vertex s with exactly two tails, then playing one of the tail leads gains at least 3 + 3 + 1 = 7 points and the resulting box can be separated into a dispensible box of type 1 rooted at s, and a high box. Therefore, s is a triplet vertex of depth 2. 
Additionally, note that Property P 0 guarantees that there is a vertex u on a tail of s that is the parent of a dispensible box of type 1, since if all three tail leads were potential triplet witnesses then this would imply that s has a (virtual) white leaf, contradicting our assumption that all tails of s are of length 2 (and since a high box cannot be a parent of a high leftover box, and Q does not contain corrupted boxes). Let v be a tail lead adjacent to s on another tail. Playing v could gain 3 + 3 + 1 = 7 points from converting s to a B 2 vertex that is the box root of a dispensible box of type 2. Since the other resulting box would be high, and ψ t < 0, we conclude that doing so would violate Property P 0 . Therefore, we conclude that for each such split vertex s there exists a split vertex s that would become a triplet vertex in this case. See Cases (2) and (3) in Figure 22 (s 1 corresponds to s and s 2 corresponds to s ). Let λ be a leaf on Q, and let s 1 to be the split vertex farthest from λ. Let s 2 be the split vertex that is closest to s 1 . We analyze the results of playing m t = s 1 , and separate them into cases according to the structure of the graph. Observe that at least 6 points are gained by this move, therefore ψ t ≥ −1. Case (a): There is a dispensible box of type 1 that is adjacent to a leaf of a tail of s 1 . Then playing s 1 gains at least 6 points, and the resulting boxes are BW boxes, a high box containing the semi-triplet vertex s 2 , and a path P Figure 22. For all 1 ≤ i ≤ 5, if Staller plays v i , then at least 5 points are gained and in the following Dominator move there is a valid box decomposition containing a high box with a semi-triplet vertex, therefore in this case ψ t+1 + ψ t+2 ≥ 2 by Lemma 4.11. If Staller plays elsewhere, then either a semi-corrupted component is created inĜ t+1 (in which case at least 8 points are gained in step t + 2), or Dominator can play v 3 and gain at least 2 + 3 + 2 + 1 = 8 points in step t + 2. We conclude that in this case ψ t+1 + ψ t+2 ≥ 1. Case (b): After Dominator plays s 1 , the resulting graphĜ t+1 contains a dispensible component of type 1 and a high box with a semi-triplet subtree rooted at s 2 . This splits further into the following two subcases. Subcase (1): The resulting semi-triplet subtree has a B 3 leaf (Case (2) in Figure 22). Lemma 4.11 guarantees that in this case, if Staller does not play on the semi-triplet subtree then at least 11 points are gained. If Staller does play on the semi-triplet subtree, then Dominator can play on the D 1 component and gain at least 8 points. If Staller creates a semi-corrupted component, then at least 8 points are gained in step t + 2 as well. In all cases, ψ t+1 + ψ t+2 ≥ 1. Subcase (2): The semi-triplet subtree rooted at s 2 has a B 3 tail lead (Case (3) in Figure 22). We conclude that the internal degree of s 2 inĜ t−1 is exactly 4, for the following reasons: First, assume towards contradiction that the internal degree of s 2 is 3. Then s 2 is a split vertex with two tails of length 2, in contradiction to the assumption that ψ t < 0. Next, assume towards contradiction that the internal degree of s 2 is 5 or more. Then s 2 has at least two additional neighbors, besides the tail lead v 1 adjacent to s 1 and the two other tail leads. Since s 1 was chosen to be the split vertex farthest from λ, and all tails of all split vertices are of length exactly 2, at least one of the other neighbors of s 2 , v 0 , must be one of the following: 1. A lead of a white tail of length 2. 2. 
A vertex of internal degree 3 that has a white leaf and a neighbor s 0 that is a triplet vertex of depth 2. In both cases, Dominator could play m t = s 2 and gain at least 3 + 4 · 1 = 7 points from s 2 , the tail leads and v 1 and v 0 , and the resulting boxes inĜ t would be BW boxes and C 12 boxes (since every triplet subtree has a vertex that is the parent of a D 1 box by Property P 0 , as none of the split vertices have leaf neighbors). This contradicts the assumption that ψ t < 0, and we conclude that the internal degree of s 2 is less than 5. Since the internal degree of s 2 is exactly 4, and it does not have another white tail of length 2 or a white tail of length 1, we conclude that s 2 is a strong semi-triplet vertex. From Lemma 4.11 we conclude that if Staller does not play m t+1 on the semi-triplet subtree, then ψ t+2 ≥ 1. If Staller does play on the semi-triplet subtree, then as before, Dominator can play on the D 1 component and gain at least 8 points, so either way ψ t+2 ≥ 1. If Staller plays elsewhere and creates a semi-corrupted component, then ψ t+2 ≥ 1 as well. This concludes the proof, since all resulting graphs are good. Claim 4.24. If all root boxes in Q are of size 2, then ψ t ≥ −2 and the resulting graphĜ t is good, and at least one of the following properties holds. Proof: Recall that boxes of size 2 cannot be parent boxes, and therefore all components of the dense graph are of size 2 (i.e., components of the forms B 2 W , B 3 W and W W ). Dominator can play on any real vertex and gain at least 2 + 3 = 5 points, and the resulting dense graphĜ t contains only components of size 2. If Staller plays on a real vertex of the dense graph, then at least 5 points are gained and therefore ψ t+1 ≥ 2. Otherwise, since the box containing Staller's move contains at most one B 2 vertex, the proof of Claim 4.14 guarantees that one of the following cases occurs. Case (a): At least 6 − 1 = 5 points are gained in Staller's move, and the resulting box is high. Case (b): At least 3 points are gained in Staller's move (i.e., ψ t+1 ≥ 0), and the resulting box is semi-corrupted and contains a strong fix vertex. Lemma 4.10 guarantees that in this case, at least 10 − 1 = 9 points are gained in the following Dominator move (i.e., ψ t+2 ≥ 2), and the resulting graphĜ t+2 is good. Analysis conclusion We conclude by showing that if Dominator plays according to the algorithm, then the game ends with an average gain of at least 5 points per move, and therefore Dominator wins. Theorem 4.25. If Dominator plays greedily according to the guidelines in Section 3, then the average gain in a Dominator-start game is at least 5 points. Proof: We first note that ifĜ t * is good for some t * and Ψ t * ≥ 0, then there exists t > t * such that at least one of the following properties holds. We observe that for even t * , these properties are guaranteed by Theorem 4.18 and Corollary 2.21, and for odd t * , they are guaranteed by Theorem 4.12 (and the definition of semi-corrupted components). 1. t is odd and at least 5t + 2 points are gained in steps 1 through t, andĜ t is good. 2. t is even and at least 5t points are gained in steps 1 through t, andĜ t is good. 3. At least 5t points are gained in steps 1 through t andĜ t is empty, i.e., the game is over. We note that for t * = 0,Ĝ t * is high and therefore good, and Ψ 0 = 0, and therefore there exists some t > 0 satisfying one of the above cases. 
The theorem follows by induction, since Ψ t ≥ 0 in Cases 1 and 2, and the game ends when the graph is empty, and therefore t = T must satisfy Case 3. This concludes the analysis.

Implementing the algorithm

The greedy algorithm described in Section 3 often achieves stronger results than what is required in order to prove Conjecture 1. Specifically, it would suffice if Dominator's move was chosen such that Ψ is non-negative when possible while preserving the invariant, and when no such move is possible, to choose a move which guarantees that the excess gain at the end of the next Staller move or the next Dominator move is non-negative (if the current Dominator move is not the last move of the game). The analysis shown in the previous section guarantees that Dominator always has such a move. We have implemented a variant of the algorithm in order to verify the correctness of the algorithm and the analysis, and ran it successfully on all trees up to size 20 (using the tree generation algorithm described in [5]), as well as on some specifically constructed intermediate underlying graphs (containing components which consist of several boxes in all valid box decompositions). In each test, all possible games resulting from the tested initial graph were checked, i.e., Dominator's moves were chosen according to the algorithm, and all possible legal moves were tested for each Staller move. For efficiency reasons, the implementation differs from the algorithm described in Section 3 in the following ways. 1. Not all possible underlying graphs and value functions are tested, but rather a small subset which is closely related to the previous underlying graph and value function. Because we used the implementation to verify parts of the analysis as well, we did not make additional improvements, and also verified that the excess gain for Staller moves is never negative, and that if the excess gain of a move played by Dominator is negative, then the sum of excess gains over at most three moves starting from this move is not negative (see Theorem 4.18 for details). The efficiency of the algorithm can be further improved using additional modifications, such as choosing the first move achieving non-negative excess gain (as described above), and choosing moves in a deterministic manner imitating the proofs in the analysis.

Conclusions

The algorithm described for Dominator achieves the desired bound of 3n/5 on all isolate-free forests, which proves Conjecture 1. The variant of the conjecture that relates to general isolate-free graphs remains open; however, an upper bound of 7n/10 is proved in [4], and an improved bound of 2n/3 is shown in [3]. In [3], Bujtás further improves these results (to bounds below 3n/5) for graphs with minimum degree 3 or more. We note that the algorithm introduced here does not perform optimally (i.e., does not achieve the game domination number) on all graphs, and it may be interesting to optimize the solutions and find strategies that achieve the game domination number. Constructing a strategy for Staller may also be of interest, whether it is an optimal strategy or a strategy that performs optimally against a specific Dominator strategy.
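As a rough illustration of the kind of exhaustive check described above (and not the authors' implementation), the short Python sketch below computes the game domination number of a small graph by plain minimax over all legal moves of both players and tests it against the 3n/5 bound; the function names and the toy example on a five-vertex path are our own assumptions.

from functools import lru_cache

def game_domination_number(n, edges):
    # Game domination number by exhaustive minimax.
    # A legal move must dominate at least one new vertex; Dominator
    # (who moves first) minimises, and Staller maximises, the number
    # of moves played until every vertex is dominated.
    closed = [1 << v for v in range(n)]          # closed neighbourhoods as bitmasks
    for u, v in edges:
        closed[u] |= 1 << v
        closed[v] |= 1 << u
    full = (1 << n) - 1

    @lru_cache(maxsize=None)
    def value(dominated, dominator_turn):
        if dominated == full:
            return 0
        results = [1 + value(dominated | closed[v], not dominator_turn)
                   for v in range(n) if closed[v] & ~dominated]
        return min(results) if dominator_turn else max(results)

    return value(0, True)

# Toy check of the 3n/5 bound on a path with five vertices.
n = 5
path_edges = [(i, i + 1) for i in range(n - 1)]
gamma_g = game_domination_number(n, path_edges)
assert gamma_g <= 3 * n / 5
print("game domination number of P5:", gamma_g)   # prints 3

A verification closer to the one described in this section would instead fix Dominator's replies according to the greedy rules of Section 3 and branch only over Staller's moves, checking the excess-gain conditions after each step.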
2016-03-03T17:10:11.000Z
2016-03-03T00:00:00.000
{ "year": 2016, "sha1": "e5fbd87875189efa95c08a10658655e16deb668d", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "e5fbd87875189efa95c08a10658655e16deb668d", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics", "Computer Science" ] }
55311319
pes2o/s2orc
v3-fos-license
Simple Management of Radial Artery Perforation during Transradial Percutaneous Coronary Intervention

Copyright © 2016 The Korean Association of Internal Medicine. This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0/) which permits unrestricted noncommercial use, distribution, and reproduction in any medium, provided the original work is properly cited. Received: 2015. 6. 22; Revised: 2015. 7. 16; Accepted: 2015. 8. 27

INTRODUCTION The radial artery has become the most common access site for coronary angiography and percutaneous coronary intervention (PCI) since transradial intervention results in fewer local vascular complications than transfemoral intervention [1]. This procedure rarely results in complications, improves patient comfort, and reduces the duration of hospitalization [2]. Despite the fact that complications are rare, radial artery perforation can result in compartment syndrome and acute hand ischemia [3]. The aim of this study was to share our experience in managing iatrogenic radial artery perforation. CASE REPORT A 69-year-old male was hospitalized for coronary intervention after evaluation by computed tomography (CT) revealed critical stenosis of the distal left circumflex artery (LCX). The patient underwent coronary angiography via the left radial route with a 6 French (Fr) sheath (Terumo Corp., Tokyo, Japan) inserted using standard techniques. The left radial angiography showed a minor degree of radial artery spasm (Fig. 1A). After angioplasty and implantation of a 2.75 × 18-mm stent (Resolute Integrity, Medtronic, Dublin, Ireland) (Fig. 2B), the guiding catheter was removed and radial angiography was performed via the sheath's side port. The procedure showed that the perforation was sealed and that there was no contrast agent extravasation (Fig. 3). The patient was discharged after 48 hours without any local vascular complications, with a patent radial pulse, and no local hematoma.
DISCUSSION The transradial approach is more popular due to decreased vascular complications and increased patient comfort [4]. The benefits of the transradial approach include a lower incidence of complications, earlier ambulation, same-day or next-day discharge, and a reduced cost of long-term hospitalization [2]. Despite its advantages, the transradial approach can result in significant complications, including local hematoma, radial artery obstruction, radial artery perforation, and hand ischemia. Radial artery perforation has been reported in about 1% of patients undergoing a transradial procedure. In the past, perforations have been treated by manual compression of the forearm or inflation of a balloon catheter across the perforated segment [5]. Once this complication occurs, the physician must switch to a contralateral radial or femoral approach to complete the procedure. This ultimately leads to an increase in both total procedure time and patient hospital stay. However, in this study, successful PCI was performed using the radial artery after perforation by downsizing the catheter and rewiring the perforated segment with a 0.014- or 0.021-inch PCI guidewire [6]. Using a smaller guiding catheter over the affected segment and a 0.035-inch guidewire for the rest enabled the procedure to continue without switching to another site. Since the guiding catheter itself worked as a hemostatic device, the perforation was sealed without further intervention. After successful completion of PCI, a radial angiogram was performed to check the hemostatic status of the perforated segment. This case shows that simple installation of a smaller guiding catheter can seal perforations in the radial artery and prevent the physician from having to move to an alternate site. In summary, radial artery perforation is one of the major complications of transradial PCI. By installing a small guiding catheter, the radial artery perforation was managed, and PCI was performed successfully using the same route. This case verifies that simple installation of a smaller guiding catheter can manage radial artery perforation during PCI.

Figure 1. Baseline radial angiogram showing a minor degree spasm (arrow) (A). Perforation of the radial artery and extravasation of contrast agent into the surrounding tissue (arrowheads) (B). N, nitroglycerin.
2018-12-11T12:47:39.689Z
2016-02-01T00:00:00.000
{ "year": 2016, "sha1": "cae714bd7fc3a7978e89d9962ba1c2313f20b644", "oa_license": "CCBYNC", "oa_url": "http://ekjm.org/upload/kjm-90-2-136.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "cae714bd7fc3a7978e89d9962ba1c2313f20b644", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
225797618
pes2o/s2orc
v3-fos-license
Prevalence of Benign Vocal Fold Lesions in Ear, Nose, and Throat Outpatient Unit of Dr. Soetomo General Hospital, Surabaya, Indonesia

Article history: Received 12 May 2020; Received in revised form 06 June 2020; Accepted 08 June 2020

Introduction Benign vocal fold lesions are abnormal masses in laryngeal tissue that are overgrown and uncoordinated. 1 Benign vocal fold lesions reduce the efficiency of sound production, which leads to a hoarse and breathy voice, so the patient needs more effort to talk. Thus, the patient complains of fatigue and discomfort in the neck and throat. Patients with benign vocal fold lesions do not feel pain. The uneven surface of the vocal cords causes mucus to often get lodged, causing a lump in the throat and coughing reactions. 2 Benign vocal fold lesions can be divided into two, namely neoplastic and non-neoplastic. Non-neoplastic vocal fold lesions include vocal cord nodules, vocal cord polyps and vocal cord cysts. 3 An extroverted personality that talks a lot and occupations with high voice requirements become risk factors for benign vocal fold lesions. Other risk factors are smoking, stomach acid reflux, allergies and infections. 4 Benign vocal fold lesions are common cases, but their prevalence is difficult to determine. In 2011, Cohen et al. conducted a retrospective study of the prevalence and causes of dysphonia in the United States. The study compared differences in the etiology of dysphonia determined by primary care physicians and ENT specialists. The research data were taken from dysphonia codes according to the International Classification of Disease (ICD) code 9 recorded in the United States in the period of January 2004 to December 2008. The total population obtained in that period was fifty-five million patients. The prevalence of patients with a dysphonia diagnosis in the total population was 1%, and in approximately 11% of patients the dysphonia was caused by benign vocal fold lesions. Benign vocal fold lesions consist of vocal cord polyps, vocal cord nodules, vocal cord abscesses, vocal cord cellulitis, vocal cord granulomas, and vocal cord leukoplakia. 4,5 There has been no report regarding the incidence and prevalence of benign vocal fold lesions in Dr. Soetomo General Academic Hospital, Surabaya, Indonesia. This study aimed to determine the incidence and prevalence of benign vocal fold lesions in the Ear, Nose, and Throat (ENT) Outpatient Unit of Dr. Soetomo General Hospital, Surabaya, Indonesia. Methods A retrospective descriptive study was conducted at the ENT Outpatient Unit of Dr. Soetomo General Academic Hospital, Surabaya, Indonesia. Data were collected from 20 patients in the ENT Outpatient Unit from June 2015 to June 2016. The accessible population was patients with dysphonia complaints who visited the ENT Clinic. The study sample was dysphonia patients in whom direct rigid and flexible laryngoscopy examination showed benign vocal fold lesions, including vocal cord nodules, vocal cord cysts, or vocal cord polyps. The inclusion criteria included patients with symptoms of dysphonia, patients with a diagnosis of benign vocal fold lesions (nodules, cysts, or vocal cord polyps), and complete patient data. The patient's data included the patient's identity, history, examination results, and diagnosis after direct rigid or flexible laryngoscopy procedures. The exclusion criteria were a history of malignancy of stage III or higher and an incomplete medical record.
The collected data were taken from the medical records and arranged in a table based on age, sex, occupation, type of benign vocal fold lesion, location of the lesion and the therapy given. The type of occupation in this study was divided into four groups according to the level of voice used during work, based on Koufman and Isaacson's classification. Level I refers to elite vocal performers, namely singers and radio broadcasters. Level II is a professional voice user such as a lecturer. Level III is non-vocal professionals, including teachers, traders, and students. Level IV is non-vocal non-professionals, such as housewives and farmers [6]. Measurement data were analyzed and presented in the form of frequency distributions. Results The medical record data yielded 475 patients with dysphonia from June 2015 to June 2016. Of all 475 patients, there were 49 patients with advanced malignancy and 78 with incomplete medical records. There were 23 patients diagnosed with benign vocal cord lesions and the rest had other cases. There were 20 patients with complete medical records and benign vocal cord lesions. There were 10 male patients (50%) and 10 female patients (50%). The patients' ages ranged from 5 to 73 years, with a mean of 33.55 years. The age group with the highest number of patients ranged from 20 to 59 years old (12 patients; 60%). The distribution of occupational groups according to Koufman and Isaacson is shown in Table 1. Types of Benign Vocal Fold Lesions The distribution of types of benign vocal fold lesions is shown in Table 1. In this study, the most common type of vocal fold lesion was vocal cord nodules (13 cases; 65%), followed by vocal cord cysts (4 cases; 20.00%). The least common type of lesion was vocal cord polyps (3 cases; 15.00%). The incidence of vocal cord nodules in this study was the highest when compared with other types of benign vocal fold lesions (2.73%), followed by vocal cord cysts (0.84%) and vocal cord polyps (0.63%). The prevalence of vocal cord nodules in this study was 4.42%, while the prevalence of vocal cord polyps and vocal cord cysts was 1.26% and 2.31%, respectively. (*Group I = elite vocal performer, II = professional voice user, III = non-vocal professional, IV = non-vocal non-professional.) Location of Vocal Fold Lesions Benign vocal fold lesions were grouped according to the location of the lesion, namely 1/3 anterior vocal cord, 1/3 medial vocal cord, unilateral, and bilateral. Most vocal cord polyps were located in the 1/3 anterior unilateral vocal cord (2 patients; 66.7%), while the fewest were in the 1/3 medial unilateral vocal cord (1 patient; 33.3%). Most vocal cord cysts were in the 1/3 anterior unilateral vocal cord (3 patients; 75%), and the fewest were in the 1/3 medial unilateral vocal cord (1 patient; 25%). Most vocal cord nodules were in the 1/3 anterior bilateral vocal cord (11 patients; 91.7%), and the fewest were located in the 1/3 anti-lateral unilateral vocal cord (1 patient). The distribution of the groups of benign vocal fold lesions is shown in Table 2. Therapy and Results Patients with benign vocal fold lesions were grouped according to the type of therapy given. The largest therapy group was non-operative (13 patients; 65%), and the smallest was the non-control group (2 patients; 10%), as shown in Table 6. Nine of the non-operative patients (69.2%) did not return for follow-up, and 3 of the operative patients (60%) felt improvement (Table 3).
Discussion Direct rigid and flexible laryngoscopy can detect abnormalities in larynx such as inflammation, lesions and narrowing of airway. Direct laryngoscope can also be used to perform biopsy of laryngeal tissue. 7 Benign vocal fold lesions are less common than malignant disorders. These abnormalities are divided into two, namely non-neoplastic and neoplastic tumors. Nonneoplastic vocal fold lesions occur due to infection, trauma, and degeneration. Some examples of nonneoplastic lesions are vocal cord nodules, vocal cord polyps, and vocal cord cysts. Vocal cord nodules are always bilateral and almost symmetrical. On stroboscopic examination, vocal cord nodules will appear in a decreased mucosal wave. The nodules will shrink or disappear with sound therapy. Vocal cord polyps can occur unilateral or bilateral with clear or reddish exophytic lesions in hemorrhagic polyps. The size of vocal cord polyps does not change with sound therapy. Vocal cord cysts can be unilateral or bilateral. It is located near ligaments or in the subepithelial cavity. The subepithelial cavity is the area just below the vocal cord epithelium. 3,4,8 In this study, the ratio of male and female patients with benign vocal fold lesions was 1: 1, thus indicates that both genders have similar risk of vocal cord disorders. Benign vocal fold lesions are mostly found in female because female more often use sounds excessively. 8 In the pre-menstrual period, female experience premenstrual vocal syndrome (PMVS), which is a change in vocal cords' stability due to hormonal fluctuations. PMVS is characterized by being unable to reach high notes and losing voice power. During pre-menstrual period, there is also dryness of larynx due to unbalanced estrogen and progesterone levels. Dry larynx triggers the patient to clear throat that often leads to vocal cord nodules. 9 Other studies also found similar findings, as benign vocal fold lesions were mostly found in this age range. 10,11 This result indicated that vocal fold lesions occur in patients in the age group of workers. Patients within working age who frequently use sounds will have a greater risk for suffering from benign vocal fold disorders. 11,12 The Voice Handicap Index (VHI) score is a questionnaire containing 30 questions about the quality of life of dysphonia patients with good validity and reliability. The total VHI score is between 0 (without disability) to 120 (maximum disability). The VHI provides information about patients' perceptions of the level of sound defects in daily life. Previous retrospective studies have shown that level of sound requirements associated with lifestyle and work influences VHI scores. The highest number of working age patients in this study showed that working age patients highly need to speak. This also affected the VHI score. A high VHI score encourages working age patients to come for treatment. 13 Type of occupation has a close relationship with benign vocal cord lesions. The type of occupation that requires loud noises with high frequency is a risk factor for benign vocal folds. Koufman and Isaacson classified voice into four groups. In this study, most patients belonged to group III. The type of occupation included in group III often involve vocal abuse because this group works using sound as the main need without professional training. 6 There are several types of occupation in group III that often require moderate sound quality but high sound load. 
14 The intended sound load is long working hours, noisy environment, and inadequate work facilities. The excessive number of students in each classroom also influences the sound burden on the type of work as a teacher. 15 The most common type of benign vocal fold lesions in this study was vocal cord nodules. This result is consistent with other studies. 11 Most patients with vocal cord nodules have jobs with voice as their primary need. This causes patients with vocal cord nodules to attend treatment more quickly than other benign vocal fold lesions. 4 The results of previous studies were similar to the current study which showed that 47% of patients had vocal cord abnormalities in 1/3 anterior, 11% in 1/3 medial and 22% in 1/3 posterior vocal cords. 10,11 The 1/3 anterior of vocal cords are part of the membrane. When vocal cords vibrate, the membrane experiences friction and collision between the greatest vocal cords. Longterm and strong vibrations causing vascular congestion accompanied by swelling in vocal cord membrane. This reason why benign vocal fold abnormality is commonly found in the 1/3 anterior part of vocal cord. 16 In this study, one patient experienced unilateral vocal cord nodules, as caused by several possibilities. The patient's vocal cord nodules were detected so early that the possibility of reactive lesions in the contralateral side vocal cord has not yet formed. Facilities for vocal cord examination at the study site were direct rigid and flexible laryngoscopy. Direct rigid and flexible need patient cooperation and operator skills. Unilateral vocal fold lesions can be detected by tele-laryngoscopy (stroboscopy) as much as 79.8%, while direct rigid laryngoscopy with general anesthesia can only diagnose of 60.7%. 17 The non-operative therapy is an attempt to optimize the condition of larynx. The condition of larynx will be optimal with sound therapy that eliminates habits that can injure vocal cords such as screaming or whispering, using sounds in moderation and optimal hydration. Other health problems related to vocal cord irritation are also treated, such as gastric acid reflux and controlling allergies. Benign vocal fold lesions provide a very good response to non-operative therapy, hence it becomes primary choice. 3,6,16 The operative therapy is more aimed at benign vocal cord polyps and vocal cord cysts. Benign vocal fold lesions that are exophytic, such as vocal cord polyps, and lesions that cause severe mucosal stiffness, such as ligamentous cysts, give poor results in non-operative therapy. Operative therapy is the first choice if the patient has dysphagia associated with aspiration, has a risk of airway obstruction, and a suspicion of malignancy. In patients who have undergone non-operative therapy but do not obtain results in accordance with patient's expectations, operative therapy can be used as an option. Operative therapy is also chosen if the need for sound for daily life is very important for the patient. 3, 6 Doloi and Khanna (2011) conducted a study of therapy given to patients with benign vocal fold disorders. Nonoperative therapy in the study included antibiotics, antiinflammatory, vapor inhalation, and voice rest. Meanwhile, operative therapy included excision with direct rectal laryngoscopy, excision with endoscopy, and external excision. 18 The percentage of patients treated non-operatively in our study was quite high and showed good results. 
These results differ from a study conducted by Singhal et al. (2009), who found that only 6% of patients treated non-operatively had good results. 1 The results of Doloi and Khanna's study were likely affected by early detection of patients' vocal fold abnormalities. 18 The limitation of this study lies in the method of diagnosing benign vocal fold lesions, which still does not use stroboscopy. It is expected that the hospital will provide this tool for better accuracy. Conclusion The number of patients with symptoms of dysphonia was large, yet patients diagnosed with benign vocal cord lesions were very few. Reporting the number of cases of benign vocal cord lesions with dysphonia can be used as data for further studies (specifically, the data on the number of patients with benign vocal cord abnormalities).
2020-06-18T09:03:18.792Z
2020-06-12T00:00:00.000
{ "year": 2020, "sha1": "c0cc6cfaf1d78c885a68a6eee586443f361b09a0", "oa_license": "CCBYSA", "oa_url": "https://e-journal.unair.ac.id/BHSJ/article/download/19103/10840", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "df4b61785b06755e013493e3f77e3853fcd1921c", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
253786467
pes2o/s2orc
v3-fos-license
The Power of Compensation System (CS) on Employee Satisfaction (ES): The Mediating Role of Employee Motivation (EM)

The compensation system, employee motivation, and employee satisfaction have received much attention from academics for many years. Existing research, however, does not yet detail the mediation effect of employee motivation on the relationship between the entire compensation system and employee satisfaction. The study explores the influence of the compensation structure on employee satisfaction with employee motivation as a mediator. This research embraced a quantitative design, a positivist paradigm, a deductive approach, and an explanatory design. Cross-sectional data from 100 employees were drawn with a random sampling technique using a self-administered survey questionnaire. First, in the sequence of analysis, descriptive statistics were conducted. After that, a reliability test was used to test internal consistency. Finally, a correlation test and direct, indirect, and total effect analyses were used to test the hypotheses at the 0.05 level. The findings show that the compensation system has a favorable impact on employee satisfaction, with employee motivation acting as a partial mediator. Concurrently, this study raises awareness of the need to revise the compensation strategy so that employee morale and engagement will increase and turnover will decrease. The study outcomes will assist policymakers in improving the situation of the existing workforce in insurance companies and other financial companies in Bangladesh. Introduction Employee motivation and satisfaction in insurance industries have received much scholarly attention, which is not unexpected given the current situation. One of the more accepted ideas to emerge is that insurance businesses will be better able to accomplish their objectives if their people are motivated and happy. However, employee motivation and satisfaction levels are disappointing in the Jiban Bima Corporation (JBC) of Bangladesh. The tendency to keep the compensation policy unchanged over a long period was discouraging and was reflected in the insurance industry's low job motivation and satisfaction. As a result, the employees leave the organization, as reflected in the annual reports of the Insurance Development and Regulatory Authority (IDRA) for 2010-2011 to 2016-2017 and 2017-2018 (available online: http://www.idra.org.bd/sites/default/files/files/idra.portal.gov.bd/). Employee satisfaction is based on the inner drive of employees (Muguongo et al. 2015). Moreover, equity theory states that comparing an employee's input-output ratio to that of other employees will reveal whether they are satisfied or not, which is influenced by motivation, where effort is considered the input and pay is considered the output (Sudiardhita et al. 2018).
The lack of prior research on the mediation effect of employee motivation means that this study adds to the body of knowledge on the path from compensation to satisfaction. Insurance and other financial industries would be inspired to restructure their compensation policies to support their employees' motivation and expectations if they were aware of the importance of these indicators. Our work contributes to the ongoing debate in support of human resource management as an effective instrument for reducing dissatisfaction with work via motivation. In this study, the empirical literature is reviewed in detail to develop the study hypotheses in the literature review section. After that, the methods are discussed, including population, sample size, measurement procedure, scale, and data analysis techniques. Then, the results section presents the analysis step by step to test the study hypotheses. Next, the discussion section explains how the findings contribute to organizations and theory. Finally, the conclusion section shows to what extent the study objectives were achieved, along with the study limitations and future guidelines for professionals, academicians, and researchers. The Literature Review This section discusses the theoretical background for measuring the connection between the compensation system and employee motivation for the study. Compensation System All returns employees receive due to their job are referred to as employee compensation (Dessler 2015; Cabanas et al. 2020). The literature on self-determination theory (SDT) claims that compensation systems serve as significant external triggers (Ryan and Deci 2017; Tepliuk et al. 2021). There is no doubt that compensation is an important component of the management control system (Hong 2017; Terepyshchyi and Khomenko 2019). Additionally, compensation is the total amount given to employees for services provided in connection with particular employment (Eliopoulos 2019; Bangun 2012; Sutrisno 2020; Gorgenyi-Hegyes et al. 2021). The compensation system pushes employees to concentrate on both individual and group goals (Chakrabarty 2021).
Employees can be compensated in two ways, according to their input or output. Employees' input-based compensation focuses on their ability and potentiality, and output-based compensation focuses on their productivity. In the neoclassical principal-agent theory, it is noteworthy that the alignment effect is achieved either by performance-based or equity-based payments (Obermann and Velte 2018;Aranibar et al. 2022). Although most studies favor compensation based on output, several have highlighted the challenges in measuring productivity (Türk 2008;Stashevsky and Weisberg 2006;McClune 2005;Holbeche 2005). Elements of the Compensation System The compensation system consists of payments such as rental housing, transportation, relative benefits, overtime, risk pay allowances, etc. Rewards comprise performance rewards, employment rewards, year-end bonuses for perfect attendance, and proposal bonuses. Compensation packages include four aspects, namely: salary, allowance, gratuity, and pension (Salisu et al. 2015). Salary, benefits, and employee bonuses such as paid vacations, insurance, parental leave, free tour opportunities, provident fund, and others are provided as compensation (Pepra-Mensah et al. 2017). In other words, salary and perks are incorporated into the basic remuneration package. Cascio (2006) stated that the broad goal of designing a pay system is to give a monetary value (a standard rate) to each position in the organization and a mechanism for upgrading the standard rate (e.g., based on merit and inflation). However, individual and group incentive schemes, if well-designed, can be a powerful motivator. Holidays, life insurance, personal accident insurance, workplace vehicle schemes, mobile phone packages, and shop vouchers are frequently included in benefits packages (Bateman and Snell 1996;Beech et al. 2006). Thus, compensation is classified into monetary and non-monetary advantages (Baqi and Indradewa 2021). Employee Motivation The process of encouraging individuals to engage in activities in an effort to successfully and efficiently accomplish the intended goals or targets is known as motivation. Work motivation is a need to take action toward a certain objective that might occur in a person consciously or unintentionally (Riyanto et al. 2021). Employee motivation is the process through which an organization encourages workers through incentives such as salary, bonuses, and rewards to meet organizational objectives (Pudjiastuti and Sijabat 2022). Employee motivation is now acknowledged as one of the key factors in an organization's success in such a competitive market (Khuong and Hoang 2015;Muñoz-Pascual and Galende 2017;Yang et al. 2020). Maslow's needs hierarchy, Herzberg's two-factor, Vroom's expectation, Adams' equity, and Skinner's reinforcement theories are the five main theories that have contributed to our knowledge of motivation, according to Safiullah (2015). While motivation is a broad concept with numerous definitions, it may be defined in the workplace as "a collection of energetic factors that arise both within and beyond an individual's self, to trigger work-related behaviour and to govern its shape, direction, intensity, and duration (Pinder 1998)." Employee motivation is essential to the operation and performance of businesses (Greenberg 2011); therefore, managers inspire their staff with the expectation that they would perform in a specific desirable manner (Watson 2006). 
Employee Satisfaction Employee satisfaction is described in various ways, and there is unlikely to be a unified definition for the phrase. Employees are essential resources in any firm since they are the ones who carry out goals (Rodzoś 2019;Egerová and Rotenbornová 2021). Employees will exhibit enjoyable, positive attitudes when they are happy with their jobs. As a result, intense job satisfaction will boost an organization's productivity and overall performance (Rynkevich 2020;Petrova et al. 2020). A company's main source of power is its workforce (Ali et al. 2021). According to Hashim and Mahmood (2011), job satisfaction is an emotional response to a person's employment condition. Consequently, work satisfaction refers to how content employees are with their jobs (Furnham et al. 2009;Alwali and Alwali 2022). "Job satisfaction" is another name for "employee satisfaction" (Wang 2005). According to Locke (1976), employee satisfaction is linked to people's wants, desires, or values rather than necessities. Employees will be content if they are adequately compensated, work in a pleasant atmosphere, and have prospects for advancement that align with their values. However, employees' capacity to do their formally assigned activities is crucially fueled by their level of job satisfaction or enthusiasm for their work (De Clercq et al. 2019;Rayton and Yalabik 2014;Jiang et al. 2009;Sun and Pan 2008). One of the primary reasons for a company to reach a respectable level of performance at work is collective satisfaction (Oteshova et al. 2021). Compensation, Employee Motivation and Employee Satisfaction Compensation is a parameter to measure an employee's motivation and job satisfaction. A person's mindset regarding their task in order to feel satisfied with their output is known as motivation (Herzberg 1966). One of the most complicated topics, job satisfaction, includes a wide range of emotions and circumstances. Job satisfaction at work is influenced by compensation, motivation, an efficient chain of command, and general working circumstances (Uddin et al. 2016;Bilge et al. 2021). Employee work happiness is affected by wages, benefits, and motivation since they are frequently cited as two of the top three elements affecting employee job satisfaction (Society for Human Resource Management 2012). Several aspects influence employee motivation, most notably pay for work and opportunities for self-development, interpersonal interactions, particularly successful communication (Stachowska and Czaplicka-Kozłowska 2017;Miri and Macke 2022). The motivated employee directly impacts employee happiness in the workplace (Klopotan et al. 2018). Additionally, extrinsic and intrinsic motivation produce positive job satisfaction, organizational commitment, and employee activity, according to previous studies (Çınar et al. 2011;Silic et al. 2020;Peñalba-Aguirrezabalaga et al. 2021). Furthermore, compensation and benefits appear to have a favorable link with employee job satisfaction from this perspective (Leonova et al. 2021). According to empirical research, a properly designed compensation and incentive system can also increase satisfaction and recruit and retain outstanding people, which results in a competitive advantage (Elrehail et al. 2019). The one result that may be most useful to decision-makers and educators relates to pay packages influencing motivation to one's work and job satisfaction (Ashraf 2020). Employees are happier when they are driven by receiving expected pay from the firm. 
A negative correlation between compensation and job satisfaction has, however, also been reported: comparison income, which is used to compute relative compensation, is strongly inversely related to job satisfaction (Clark and Oswald 1996; Song and Whang 2020). Despite the debate, the compensation provided by employers, such as salary or benefits, and other work facilities, such as motivation for employees' well-being, might be considered in the discussion of job satisfaction (Darma and Supriyanto 2017). Compensation is one of the most important variables influencing employee motivation (Kubo and Saka 2002; Chinyio et al. 2018). Moreover, investigations of the relationship between motivation and satisfaction are rare in the relevant field of study. However, perceptions of the motivating features of such recognition programs are the third lens through which firms assess satisfaction with their HR initiatives (Kotlyar and Karakowsky 2014). Under the motivation-hygiene theory, paying cash payments (i.e., salary, bonus, and other cash payments), ensuring employee satisfaction, and fostering a positive corporate culture are hygiene factors for reducing workplace dissatisfaction (Herzberg 1968; Chen and Hassan 2022). It is critical to understand whether job motivation mediates the link between pay and work satisfaction (Ahmat et al. 2019). These studies are solid evidence of a link between the compensation system and employee job satisfaction through the mediating role of employee motivation. Therefore, this area of study needs to be measured empirically. As a result, the following four hypotheses were developed for the current investigation.

Materials and Methods

Employees of the Jiban Bima Corporation (JBC) who work inside the respective organizations served as the population for this study. A self-administered questionnaire survey was directed to 103 JBC employees in different departments. The sample size (n) of 103 was calculated using the finite-population formula, where the population size (N) was 1104, the population proportion (p) was 92%, the confidence level was 95% (z-score ±1.96), and the margin of error (e) was 5%; a worked computation is sketched after this section. The random sampling technique was utilized to acquire data, and 100 employees returned the questionnaire after completion. Since starting their positions, all respondents had received one official performance review and one salary rise, and they were all entitled to bonuses and other incentives. Data were collected from 1 August to 30 September 2021. The questionnaire had two sections. The first part contained basic information on the respondents, while the second part contained the measuring factors on a five-point Likert scale from 5 (strongly satisfied) to 1 (strongly dissatisfied). The compensation system was measured by nine items, whereas employee motivation and employee satisfaction were measured by only one item each. The questionnaire was supplied to the employees of the JBC in different regions of the country, such as Dhaka, Rajshahi, Khulna, Chittagong, Rangpur, Sylhet, and Mymensingh. Descriptive statistics, the alpha value, a correlation test, and direct, indirect, and total effect analyses were used to examine and interpret the collected data. The processed results, in tabular form produced with SPSS version 26, were interpreted to specify the findings. In addition, we also employed the PROCESS macro from Hayes (2018) to test the four hypotheses.
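As a cross-check on the reported sample size, the finite-population calculation can be reproduced directly from the figures stated above. The following is a minimal R sketch, not the authors' code; the variable names are ours:

```r
# Finite-population sample size (Cochran's formula with correction).
z <- 1.96   # 95% confidence level
p <- 0.92   # assumed population proportion
e <- 0.05   # margin of error
N <- 1104   # population size (JBC employees)

n0 <- z^2 * p * (1 - p) / e^2   # infinite-population sample size (~113.1)
n  <- n0 / (1 + (n0 - 1) / N)   # finite-population correction
ceiling(n)                      # 103, matching the reported sample size
```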
Likewise, 5000 bootstrap samples were used to estimate the indirect effect, producing results with greater statistical power than the Sobel (1982) test (Edeh et al. 2022; Kreiseder and Mosenhauer 2022; Ha and Lee 2022; Zhao et al. 2010). Due to the quantitative research design, the SPSS software was chosen because of its competence, variety, and flexibility in analyzing the vast amounts of data obtained (Adefulu and Adebowale 2019).

Results

The findings of the study first revealed the participants' demographic profiles. Then, the descriptive analysis, for identifying the trends of the responses, and the reliability analysis, for the consistency of the measurement scales, were reported. Next, a correlation test was performed to measure the association between the compensation system, employee motivation, and employee satisfaction. Finally, the direct, indirect, and total effect analysis was conducted. In particular, the total effect analysis was performed because the model combines direct and indirect effects.

Demographic Analysis

The demographic profile of respondents is shown in Table 1. The proportion of male employees (86 percent) was close to six times higher than that of female employees (14 percent), meaning that male employees dominated JBC's workforce. By age, the largest percentage (40%) corresponded to the 40-50 years range, and the smallest to those 50 years and above; the other two age groups together accounted for slightly more than two-fifths of the total. This suggests that most of the respondents were experienced persons considered to have high work productivity with the company. Regarding marital status, more than three-fifths of the participants were married, whereas less than two-fifths were unmarried. The most common monthly income level was BDT 20,000-40,000, reported by almost half of the respondents. Very few respondents had a monthly income of BDT 60,000 and above, while exactly two-fifths had a monthly income below BDT 20,000. The diversity of compensation structure groups thus helped the study generate valid predictors of employee satisfaction. Based on organizational position, 45 respondents (nearly half) were officers and one-fifth were managers, with other job positions also covered. Based on work experience, more than one-third of respondents fell between 10-15 years, the highest portion; respondents with less than 5 years of experience made up the second highest portion, and the smallest portion, around one-sixth of respondents, had the longest work experience in the organization. Overall, most of the respondents had long work experience.
Table 2 shows the descriptive statistics, including the mean, standard deviation (SD), skewness, and kurtosis, which describe the sample and assess the normal distribution of the collected data. The mean of the responses is 3.32 (SD = 1.30) for the compensation system, 3.42 (SD = 1.30) for employee motivation, and 3.31 (SD = 1.35) for employee satisfaction. This means respondents' opinions were satisfactory regarding employee motivation but somewhat neutral regarding compensation and satisfaction. Moreover, the SD values for the compensation system, employee motivation, and employee satisfaction indicate that responses are close to the mean. The calculated skewness and kurtosis values for the three variables were within the threshold ranges. A perfectly normal distribution has skewness and kurtosis values equal to zero (Field 2009; Malhotra et al. 2007), but for psychometric purposes it is proposed that data be deemed normal when both the skewness and kurtosis scores are between −2 and +2 (Khan 2015; Hair et al. 2010; George and Mallery 2010). Hence, according to the absolute values of skewness and kurtosis, the responses follow a positively skewed and light-tailed distribution for all variables. Grounded on this analysis, the distribution of the collected data indicates normality in the sample. Table 3 reports Cronbach's alpha for the three variables. Although values over 0.6 are also acceptable, 0.7 is the commonly recognized threshold for Cronbach's alpha (Taber 2018). The calculated value is 0.992, which means that the scale achieved high reliability on this internal consistency measure. Therefore, all the constructs are eligible for testing the four hypotheses of this study.

Correlations Test

The Pearson correlation coefficient values are displayed in Table 4. The independent variable and the dependent variable have a positive correlation (r) of 0.980, which is statistically significant (0.000 < 0.01). Moreover, a positive and significant (0.000 < 0.01) correlation (r) of 0.979 was found between the independent variable and the mediator variable. Likewise, the correlation (r) between the mediator and the dependent variable is 0.973, which is statistically significant (0.000 < 0.01). A value of plus one (+1) denotes a perfectly positive relationship, in which an increase in one variable is matched by an increase in the other along an exact linear equation (Ratner 2009). Consequently, there is a high degree of positive statistical association between the compensation system and employee satisfaction, the compensation system and employee motivation, and employee motivation and employee satisfaction, as the values of r are close to plus one (+1).

Direct Effect, Indirect Effect and Total Effect Analysis

A bootstrapping approach with 5000 iterations was conducted to identify the path relationships. The results of the direct effect analysis reveal the impact of the compensation system on employee motivation, of employee motivation on employee satisfaction, and of the compensation system on employee satisfaction (see Table 5). The unstandardized beta shows a favorable effect of the compensation system on employee motivation (β = 0.98, t-value = 47.16, p-value < 0.01). Furthermore, employee motivation positively escalates employee satisfaction (β = 0.33, t-value = 3.47, p-value < 0.01). Finally, the compensation system significantly impacts employee satisfaction (β = 0.69, t-value = 7.16, p-value < 0.01). Therefore, H1, H2, and H3 are accepted.
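Although the study ran this analysis in SPSS with the PROCESS macro, the underlying percentile-bootstrap test of the indirect effect can be illustrated in a few lines of R. This is a minimal sketch, not the authors' code; the column names CS, EM, and ES (compensation system, employee motivation, employee satisfaction) and the simulated data are ours, with coefficients roughly echoing Table 5:

```r
set.seed(1)
# Simulated stand-in for the 100 survey responses.
n  <- 100
CS <- rnorm(n)
EM <- 0.98 * CS + rnorm(n, sd = 0.3)
ES <- 0.69 * CS + 0.33 * EM + rnorm(n, sd = 0.3)
dat <- data.frame(CS, EM, ES)

# Percentile bootstrap of the indirect effect a*b (5000 resamples, as in the study).
boot_ab <- replicate(5000, {
  d <- dat[sample(nrow(dat), replace = TRUE), ]
  a <- coef(lm(EM ~ CS, data = d))["CS"]       # path a: compensation -> motivation
  b <- coef(lm(ES ~ EM + CS, data = d))["EM"]  # path b: motivation -> satisfaction
  a * b
})
quantile(boot_ab, c(0.025, 0.975))  # mediation is supported if the CI excludes zero
```

The partial-mediation conclusion reported below corresponds to this bootstrap interval excluding zero while the direct path from CS to ES remains significant.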
The determination coefficient (R²) measures how well an exogenous variable can account for the endogenous variable (Ghozali 2016; Pimentel and Pereira 2022). According to the model summary, exhibited in Table 6, the R-square (R²) is 0.958 for employee motivation and 0.965 for employee satisfaction. This means the compensation system (independent variable) explains 95.8% of the variation in employee motivation (mediator variable). Similarly, the compensation system (independent variable) and employee motivation (mediator variable) together account for 96.5% of the changes in employee satisfaction (dependent variable). A substantial effect size is often deemed to exist when the R-squared value is 0.7 or more (Moore et al. 2013), identifying a high positive contribution to increasing employee motivation and satisfaction. Hence, the values of R-square are highly acceptable in this model. In this study, we investigated how employee motivation mediated the relationship between the compensation system and employee satisfaction. The results showed that employee motivation had a substantial positive mediating influence on the compensation system-employee satisfaction connection, since there is no zero value within the bootstrapped lower and upper confidence limits (i.e., 0.06 and 0.61) (see Table 7). The indirect analysis enables hypothesis tests that identify the mediator variable influencing the outcome (Kaufmann and Schering 2014). In this study, the direct path between the compensation system and employee satisfaction was also found to be significant; it is therefore concluded that the overall compensation system has a statistically significant influence on employee satisfaction through the partial mediation effect of employee motivation. Previous research examined only the direct and indirect effects between the compensation system and employee satisfaction; the total effect, comprising both, was another aspect of interest in this study. We found the total effect of the compensation system, including motivation, on employee satisfaction to be statistically significant (see Table 8). The compensation system changed the employee satisfaction level by 96% (see Table 9), with only 4% of the differences attributable to other variables not considered in this model. Interestingly, the total effect of the compensation system on employee satisfaction remains significant, similar to the direct effect, owing to the partial indirect effect of employee motivation.

Discussion

This study examined the effect of JBC's compensation system on satisfaction with the mediating effect of employee motivation, and the findings indicate a strong link between the compensation system and staff satisfaction through the partial mediation impact of employee motivation. The findings are supported by the study of Uppal (2005), who discovered that employee compensation (fringe benefits) was positively connected to employee job satisfaction. In the link between compensation and employee satisfaction, however, Candradewi and Dewi (2019) discussed employee performance rather than the role of motivation as mediator. Furthermore, Odunlade (2012) identified a connection between compensation and job satisfaction. Pudjiastuti and Sijabat (2022) found that compensation and motivation were each positively related to job satisfaction but did not measure the mediation effect of motivation in the relationship between compensation and satisfaction, which is demonstrated in our findings. According to Sousa-Poza and Sousa-Poza (2000), compensation influences how happy employees are at their jobs. However, there is still disagreement over how compensation could affect employee job satisfaction (Tian et al. 2020).
On the other hand, some studies find that the working environment directly impacts employee satisfaction, whereas compensation does not (Rojikinnor et al. 2022; Dietz et al. 2022). Setyorini et al. (2018) confirmed a comparable finding, showing that compensation had a favorable and substantial influence on employee work satisfaction, without considering motivation as a direct or indirect factor. Furthermore, another study found all the direct relationships between compensation and satisfaction, compensation and motivation, and motivation and satisfaction, but did not consider the mediation effect of employee motivation in the connection between compensation and satisfaction, as this study does (Sudiardhita et al. 2018). Additionally, it was discovered that remuneration had a favorable and considerable impact on work satisfaction, whereas benefits had no such impact (Mabaso and Dlamini 2017; Kowalski et al. 2022). Overall, this study mostly agrees with other findings in which factors such as salaries, incentives, benefits packages, leave-related benefits, health benefits, retirement benefits, dismissal benefits, and staff welfare programs had a substantial impact on employees' satisfaction (Nane 2019; Dinter et al. 2022), except that it adds motivation as a mediating factor. Despite the prior evidence, modern organizations today view compensation together with motivation as a pivotal element for achieving better returns in the form of improved employee satisfaction, competitiveness, or other financial measures.

Conclusions

Human resources are increasingly viewed as a corporation's most significant resource for achieving competitive advantage in the business sector. Recruiting and retaining the proper employees is one of the most challenging tasks for any company. This study identified a partial indirect effect of motivation on the relationship between overall compensation and employee satisfaction. The findings also imply that JBC's remuneration structure directly impacts employee satisfaction via the partial mediation of motivation. Employees who feel valued will have high motivation and job satisfaction, which can boost morale and cause employees to be inspired and happy. Otherwise, a group of dissatisfied employees will not be able to work appropriately for the company owing to insufficient compensation and reluctance. It was discovered that compensation, motivation, and employee satisfaction enabled JBC to use its human resources more productively and efficiently. In addition, policy should be revised to upgrade the compensation system so that employees are more encouraged and committed to the organization. High employee satisfaction helps the firm meet its aims and targets on time. Furthermore, compensation system practice and employees' inner desires vary considerably by age, gender, educational capacity, and experience, which needs to be considered. Additionally, this study's outcome will help all of the Bangladeshi insurance and financial industries to understand the significant factors or elements of compensation, along with motivation, that lead to employee satisfaction. Furthermore, this investigation indicates benefits such as commitment, loyalty, and lower employee turnover over the long run. The insurance industry might also realize the relevance of employee satisfaction and motivation in terms of the payment scheme it offers.
Employees will provide their best effort to optimize the organization's long-term performance and reputation if they are motivated and content with their salary. A primary limitation of this study is that direct and indirect compensation categories were considered together; in future studies, it would be wise to take these two categories as individual variables to measure employee turnover intention, employee performance, and commitment. Likewise, future research should also be comparative, adding more human resource functions. The impact of motivation and satisfaction as mediating or moderating constructs on different industrial factors could also be investigated within an explanatory research framework in other financial and non-financial organizations.
Autism comorbidities show elevated female-to-male odds ratios and are associated with the age of first autism diagnosis

Abstract

Objective: To investigate the association between comorbidity rates in autism and sex, birth year and the age at which autism was first diagnosed, and to compare the relative impact of each.

Method: Using the Danish National Patient Registry, cumulative incidences up to the age of 16 for 11 comorbid conditions (psychosis, affective disorders, anxiety disorders, conduct disorder, eating disorders, obsessive-compulsive disorder, attention-deficit hyperactivity disorder, epilepsy, tic disorders, sleep disorders or intellectual disability) were calculated for individuals with autism (N = 16,126) and non-autism individuals (N = 654,977). Individuals were further stratified based on the age at the first autism diagnosis, and comorbid diagnoses up to the age of 16 were compared.

Results: Most comorbidities were significantly associated with birth year and sex. Female/male odds ratios for 8 of 11 comorbid conditions were up to 67% higher than the corresponding odds ratios in the non-autism population, including conditions that are generally more common in males than in females as well as conditions that are more common in females. All comorbidity rates were significantly associated with the age at the first autism diagnosis, which was a stronger predictor than sex and birth year for 8 conditions.

Conclusions: Comorbidity rates for females exceed what would be expected based on the sex ratios among non-autistic individuals, indicating that the association between autism and comorbidity is stronger in females. Comorbidity rates are also highly dependent on the age at the first autism diagnosis, which may contribute to autism heterogeneity in research and clinical practice.

INTRODUCTION

Autism is a neurodevelopmental condition where symptoms manifest in childhood. Historically, autism has been considered a rare condition, but the prevalence has increased markedly in recent decades. 1,2 The condition is diagnosed more frequently in males than in females, with recent studies indicating a sex ratio around 3:1. 3 The term autism covers a highly heterogeneous group of individuals 4 and recent findings suggest that individuals diagnosed with autism deviate less from the general population than in previous years, 5-7 which could contribute to increasing heterogeneity within the autism population. Autism is reported to be associated with an increased risk of other conditions including affective conditions, anxiety, attention-deficit hyperactivity disorder (ADHD) and psychotic conditions. 8,9 This likely contributes to the heterogeneous clinical presentation, the difficulties that an individual with autism is experiencing, as well as the responses to various interventions. 10,11 High heterogeneity has also been found among studies estimating the rates of comorbid conditions in the autism population. For example, a meta-analysis of comorbidity rates found a 95% prediction interval for the comorbidity rate of anxiety of 2%-48%. 8 While differences in how a comorbid diagnosis is ascertained may explain some of this variability, 9 it is likely that different groups within the current autism population are associated with different risks of a given comorbid condition. Gaining a better understanding of the variability in autism comorbidity might thus lead to a better understanding of autism heterogeneity in general.
Several factors may contribute to the variability in autism comorbidity rates (CR). First, CRs may vary over time, that is, differ between birth year cohorts, as the prevalence of several psychiatric conditions has changed over time. 12 Furthermore, it is likely that the temporal shift in how autism is diagnosed has resulted in changes in the composition of the autism population, which might also affect CRs. Second, there may be sex-based differences in CRs, since the prevalence of many psychiatric conditions is known to differ between males and females. 13 The effects of time and sex on autism comorbidity have previously been investigated, but significant unexplained variability remains 8,14,15 and additional sources of variation in comorbidity should thus be investigated. A factor that may further explain some of the observed heterogeneity in CRs is variation in the age at which autism is first diagnosed. Autism is generally viewed as a condition with an early childhood onset, but there is a significant number of individuals who are not diagnosed until a later age. 16 As the age at the first autism diagnosis likely correlates with other aspects of variation in autism, it is also possible that differences in age of diagnosis can account for some variability in comorbidity rates. For example, different degrees of deviation or distinct biological subtypes of autism may each be associated with specific comorbidities, 17 as well as with developmental patterns affecting how early an autism diagnosis would be given.

Aims of the study

Here, we performed a registry-based study to investigate how autism comorbidity rates vary according to birth year, sex and the age at which autism was first diagnosed, and compared the explanatory power of each. Additionally, we examined whether sex ratios in comorbidity rates deviate from the sex ratios of prevalence rates in the non-autism population.

Significant outcomes

• In addition to sex differences and temporal changes, comorbidity rates were strongly associated with the age at which autism was first diagnosed. Age of diagnosis may thus provide information on comorbidity risk in a clinical setting.
• Comorbidity rates in females exceeded what would be expected based on sex differences in prevalence in the non-autism population, which suggests that comorbidity in females increases the likelihood of receiving an autism diagnosis more than is the case for males.
• Autism and comorbid conditions were often diagnosed closely after each other, suggesting that some individuals with autism may not be diagnosed until they develop a comorbid condition.

Limitations

• Some milder cases of depression and anxiety may have been treated by a family physician and thus may not be represented in the data used here.
• The present results are based on data from a single country, and replication in other datasets is required to assess the generalizability of the findings.
• The study primarily focused on psychiatric conditions, and further research is needed to investigate whether similar patterns exist for other medical comorbidities.

Data

We analysed data from the Danish National Patient Registry (DNPR), which tracks all diagnoses given to in- and out-patients in the Danish hospital system. All diagnoses are linked to specific individuals, which allows analysis of comorbidity patterns by combining diagnoses across distinct hospital contacts and facilitates the analysis of DNPR data based on personal data, such as sex and age.
The DNPR contains diagnostic data from 1994 to 2018 based on the 10th version of the International Classification of Diseases (ICD-10). Since our aim was to investigate the associations of birth year, sex and age of diagnosis with comorbidity rate, we had to limit our analyses to conditions with a relatively high prevalence in the autism population. The set of conditions that we investigated (Table 1) was preselected based on overrepresentation among individuals with autism in previous studies (e.g., 7-9,18). We restricted the dataset to individuals born from January 1, 1993 to December 31, 2002. Although diagnoses were only available from 1994, we included individuals born in 1993, as we did not expect that any of the diagnoses of interest in this study would be given in the first year of life. We restricted our analyses to diagnoses given before the age of 16 to allow comparison of the rates across birth year cohorts. Individuals born after 2002 were thus not included. The age of 16 was chosen as the cut-off, as the incidence of several of the comorbid diagnoses of interest, for example eating disorders, was expected to increase significantly with the onset of adolescence. 19 The data was extracted on August 24, 2020. Among the selected birth year cohorts (1993-2002), we identified 16,126 individuals who had received an autism diagnosis (F84.0, F84.1, F84.5, F84.8 or F84.9) before their 16th birthday. Total birth-cohort sizes for the calculation of prevalence rates were retrieved from Statistics Denmark, which is a governmental institution tasked with keeping statistics on the Danish population. The total number of individuals in the included birth year cohorts was 671,103. Aggregated counts of individuals with and without autism diagnosed with each condition before age 16 are listed in Tables S1 and S2. Our analysis protocol was approved by the University of Copenhagen, Faculty of Social Sciences.

Data quality

Diagnoses in the DNPR are given by specialist medical doctors within the Danish hospital sector, whereas diagnoses given by, for example, family physicians are not included. The data in the DNPR thus reflect that a qualified professional has deemed the patient to fulfil the diagnostic criteria at a given time. Several previous studies have reviewed detailed records and compared these to DNPR diagnoses. These studies have generally found good validity of diagnoses in the DNPR, for example for autism, 20 obsessive-compulsive disorder (OCD), 21 ADHD, 22 schizophrenia 23 and depression, 24 although some concerns have been raised about diagnoses given in psychiatric emergency departments 24 (see Supporting Material for details).

Epidemiological calculations

Comorbidity rates were calculated as the cumulative incidence, from birth to the 16th birthday, among individuals who received an autism diagnosis in that same time span, regardless of which diagnosis was given first. Cumulative incidence rates were calculated separately for males and females and two-year birth cohorts (from 1993-1994 to 2001-2002). The individuals diagnosed with autism before 16 years of age were further stratified based on the age at which they first received an autism diagnosis, that is, during early childhood (0-5 years), mid-childhood (6-10 years) or late childhood (11-15 years), and CRs were calculated for each group. Crucially, the CRs were calculated over the same 0-15 year age window for all three groups (Figure 3a).
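As an illustration of this stratified cumulative-incidence calculation, consider the following R sketch. It is not the authors' code: the toy data frame and the column names (age_autism_dx, comorbid_age) are ours, standing in for person-level registry records.

```r
set.seed(1)
# One row per autistic individual; NA comorbid_age means never diagnosed.
n  <- 1000
df <- data.frame(
  sex           = sample(c("M", "F"), n, replace = TRUE),
  age_autism_dx = runif(n, 0, 16),                              # age at first autism dx
  comorbid_age  = ifelse(runif(n) < 0.3, runif(n, 0, 16), NA)   # age at comorbid dx
)
# Stratify by age at first autism diagnosis: 0-5, 6-10, 11-15 years.
df$dx_group <- cut(df$age_autism_dx, c(0, 6, 11, 16), right = FALSE,
                   labels = c("0-5", "6-10", "11-15"))
# CR per stratum and sex: share with a comorbid diagnosis before the 16th
# birthday, counted over the same 0-15 year window for all three groups.
tapply(df$comorbid_age, list(df$sex, df$dx_group),
       function(a) mean(!is.na(a) & a < 16))
```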
For each case of a comorbidity, the date of the contact where autism was first diagnosed was compared to the date of the contact where the comorbid condition was first diagnosed. This information was used to calculate the fraction of comorbidity cases where autism and the comorbid condition were diagnosed less than 6 months apart.

Statistical analyses

Each comorbid condition was analysed separately. Binomial regression was used to statistically test whether CRs differed significantly according to the factors sex, birth year or age of the first autism diagnosis. Thus, a generalized linear model was fitted using the 'glm' function in R version 3.6.2. Analysis of deviance was performed through likelihood ratio tests using the 'Anova' function of the 'car' package version 3.0-6. 25 The association of each factor with CR was assessed by testing the main effect of each, while controlling for the two other factors. The relative impact of each factor on the CRs was quantified by the likelihood ratio. A high likelihood ratio indicates that the model fits the data better when the factor in question is included than when it is not. Differences in female/male ratios for each condition were assessed by testing the interaction effect between autism and sex. P-values were corrected for multiple testing using the Benjamini-Hochberg method. 26
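To make this model-comparison procedure concrete, here is a minimal R sketch. It is our illustration, not the authors' code: the data are simulated stand-ins for the registry records, and the effect sizes are arbitrary.

```r
library(car)  # Anova() for likelihood-ratio tests

set.seed(1)
# Simulated stand-in: one row per autistic individual.
n <- 5000
autism_df <- data.frame(
  sex          = factor(sample(c("M", "F"), n, replace = TRUE, prob = c(0.75, 0.25))),
  birth_cohort = factor(sample(c("93-94", "95-96", "97-98", "99-00", "01-02"), n, TRUE)),
  age_group    = factor(sample(c("0-5", "6-10", "11-15"), n, TRUE))
)
# Illustrative comorbidity probabilities depending on the three factors.
lp <- -2 + 0.4 * (autism_df$sex == "F") + 0.8 * (autism_df$age_group == "11-15")
autism_df$comorbid <- rbinom(n, 1, plogis(lp))

fit <- glm(comorbid ~ sex + birth_cohort + age_group,
           family = binomial, data = autism_df)
# LR chi-square per factor = improvement in fit from including that factor
# while controlling for the other two (the likelihood ratios reported in Table 1).
Anova(fit, type = "II", test.statistic = "LR")

# Across the 11 conditions, the resulting p-values would then be corrected with
# p.adjust(p_per_condition, method = "BH").
```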
Birth cohorts and sex

We first investigated the CR within five consecutive two-year birth cohorts (Figure 1) and calculated whether they differed significantly between the sexes and between birth year cohorts (Table 1). CRs differed significantly between the sexes for most conditions, the exceptions including intellectual disability (ID) and epilepsy. Similarly, CRs differed significantly between birth cohorts for 8 comorbid conditions.

[Table 1: Counts and percentages of comorbidity for males and females (first two columns) and comparison of comorbidity rate by sex and birth year (middle two columns) and by sex, birth year and age of autism diagnosis (last three columns). The codes in parentheses indicate the ICD-10 codes that were used when identifying comorbid conditions. Asterisks indicate that all sub-diagnoses of the listed code were included. Likelihood ratios (LR) indicate how much the model fit improved by including a given factor in the model, where a high LR reflects a large improvement in model fit when including the given factor.]

Female/male odds ratios

We calculated the female/male odds ratios (OR) for each condition for the autism cohorts and the corresponding non-autism cohorts for each birth year (Figure 2). A binomial regression model was fitted to the data for each comorbid condition to assess whether the female/male ORs for the autism and non-autism cohorts differed significantly. The female/male OR was higher in the autism group than the non-autism group for all conditions, and the difference was significant for 8 of 11 conditions (Table 2).

[Table 2: Differences in female/male odds ratios between the autism and non-autism groups. Positive sex ratio differences indicate that the female/male odds ratio (OR) is higher in the autism population than in the non-autism population. The numbers in brackets indicate the 95% confidence intervals for the log OR differences, while the right column shows p-values for the null hypothesis that Δlog(OR) = 0, as well as p-values corrected for multiple testing.]

Age of diagnosis

In addition to sex and birth year, the autism cohorts were separated based on the age at which the first autism diagnosis was given, and CRs for the 0-15 years age period were calculated for each group (Figure 3). Differences in CR for each of the three factors were statistically tested (Table 1). The association between CR and age of autism diagnosis was significant for all comorbid conditions, and its magnitude varied between conditions. For 8 out of 11 conditions, age of autism diagnosis had a larger impact on model goodness-of-fit (quantified by the likelihood ratio) than either sex or birth year.

ID and epilepsy were diagnosed most frequently among the group diagnosed with autism in early childhood (0-5 years), whereas this group had lower rates of affective disorders and psychotic disorders. The group that was diagnosed with autism in mid-childhood (6-10 years) had the highest frequencies of ADHD and tic disorder. The age group that was diagnosed with autism in late childhood (11-15 years) was diagnosed with affective disorders, OCD, eating disorders and anxiety more frequently than those diagnosed with autism in early or mid-childhood. For each case of a comorbidity, we investigated whether the comorbid condition was first diagnosed within 6 months (before or after) of the first autism diagnosis. This was true for 56% of the cases of comorbidity (Figure S1).

[Figure 3: Comorbidity by age of diagnosis. (A) Diagram of how the autism sample was stratified by age of first autism diagnosis; CRs were calculated over the age window of 0-15 years for all three groups. (B) Cumulative incidence from birth to 16th birthday per 100 individuals with autism, showing how CRs differ based on the age of first autism diagnosis for males and females, respectively. For simplicity, the data is aggregated across birth year cohorts; Figure S2 shows the data separated by birth year.]

DISCUSSION

We found birth year and sex to be significantly associated with differences in CRs for most of the investigated conditions. This is not surprising, as the general prevalence of many psychiatric conditions is known to differ between sexes and to have changed over time. 12,13 However, both differences in birth year and sex contribute to the observed heterogeneity in autism comorbidity and are thus relevant to account for when mapping comorbid conditions associated with autism. The difference in CRs between birth year cohorts may reflect a change in the composition of the autism population in recent years as a result of increased autism prevalence and potential broadening of the autism diagnosis. 5,6 For example, we found a decrease in the CR of ID, as previously reported by Idring and coworkers, 7 indicating that autism without intellectual impairment constitutes an increasing proportion of the autism population. The observed sex-based differences are largely consistent with previously reported differences in comorbidity patterns between males and females. 14,15 We generally found the female/male OR to have the same direction as in the non-autism population (Figure 2), but the female/male OR was significantly higher among individuals with autism for 8 out of 11 conditions than in the non-autism population, regardless of whether the condition was most common in males or females. Autism is thus associated with a disproportionately increased risk of comorbid diagnoses for females compared to males, further indicating that comorbidity studies should examine and account for sex-based differences.
The fact that a sex ratio disparity was observed across comorbid conditions indicates that the effect is caused by a common factor related to autism rather than distinct factors for each of the comorbid conditions. Such a factor may be either biological or diagnostic in nature. A similar pattern of females having a disproportionately higher risk for a range of comorbid conditions has previously been observed in individuals with ADHD, 27 suggesting that a sex-skewness in comorbidity could be a general feature of developmental conditions. Furthermore, in all investigated conditions we found the age at which autism was first diagnosed to be a significant predictor for risk of comorbidity before the 16th birthday (Figure 3), and for 8 out of 11 conditions the age of autism diagnosis was a stronger predictor than sex and birth year. Comorbid conditions were tracked over the same 16-year time span for all individuals regardless of age of first autism diagnosis (Figure 3a), and these findings are thus not simply caused by general differences in the age at which a condition occurs or is commonly diagnosable. For several conditions, there were striking differences in comorbidity rate depending on the age at the first autism diagnosis. For example, among those with a late autism diagnosis, 26% of females and 13% of males were diagnosed with an affective disorder at some point during childhood (0-15 years). This was true for less than 3% of those with an early autism diagnosis, which was considerably closer to the non-autism cohort (1%). In contrast, ID was diagnosed in around 40% of those with an early autism diagnosis, and only in around 10% of those with a late autism diagnosis. Although more research should address this issue for each condition individually, age of diagnosis appears to be a useful proxy for comorbidity risk and potentially other aspects of autism heterogeneity in research as well as in clinical practice.

Biological heterogeneity

The association between CR and age of autism diagnosis may stem from biological heterogeneity in the nature of autism, such as subgroups each associated with different comorbid conditions and differences in the onset of autism symptoms and sex ratios. Previous research has shown that epilepsy in autism is often found among individuals who also have ID 17,28 and that the co-occurrence of autism and epilepsy is associated with higher levels of hyperactivity 29 and symptom severity. 30 There is also evidence that the proportion of females among individuals with autism and ID is higher than for autism as a whole, 3,31 consistent with our finding that the female/male OR for ID is higher among individuals with autism than in the general population. This could indicate the presence of an autism subgroup with a relatively large proportion of females that is associated with ID and epilepsy, 28 for example syndromic conditions associated with rare deleterious genetic variants. 17 If this subgroup were also associated with an early onset of noticeable autism symptoms, it could explain our finding that these conditions are both frequently found in those diagnosed with autism before the age of 6 years and relatively rarely in those diagnosed later in childhood. The association between the age of autism diagnosis and CR for anxiety and affective disorders may be mediated by differences in intellectual ability.
Individuals with autism who have a higher IQ may be better at using coping strategies such as scripted conversation or socially appropriate body language, which make their autism characteristics less visible and make them appear more socially competent, resulting in a later diagnosis. 32,33 Furthermore, depression among individuals with autism is more often diagnosed in those with a higher IQ, 34 possibly due to their having better insight into their own social difficulties. 35 Conversely, depression and anxiety in individuals with autism and a low IQ may often be missed because of difficulties in verbalizing their distress in a way that is recognized and diagnosed. 36 Biological differences between those diagnosed with autism at different ages could also reflect a gradient of symptom presentations. For instance, individuals with large deviations in language development and/or abnormalities may be more likely to be diagnosed early, while those with smaller deviations could go undiagnosed longer. This is consistent with our finding that those diagnosed with autism in early childhood had higher rates of ID and epilepsy, both of which have been associated with more pronounced symptoms when they co-occur with the autism condition. 17 However, it is not clear why these individuals with a presumably high degree of symptoms would be less likely to later develop anxiety, psychosis or affective disorders than individuals who are diagnosed with autism later in childhood, as indicated by our results. The finding that autism is associated with a disproportionately high increase in risk of comorbidity for females may be explained by biological differences in how autism affects males and females. Such biological sex differences have previously been proposed to explain the male preponderance in autism. 37-39 One hypothesis posits that females have a higher threshold for autism-associated etiological factors, that is, that females have a lower risk of autism compared to males with the same levels of autism-causing factors such as genetic variants. 40 Since autism-associated genetic variants are often also associated with an increased risk of other psychiatric conditions, 41,42 this could explain why the females that do develop autism have a higher risk of comorbidity, as they would generally have a higher load of the predisposing genetic variants.
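This liability-threshold account can be illustrated with a toy simulation. The R sketch below is ours, not the authors': the thresholds, effect sizes and the logistic link between liability and comorbidity are arbitrary choices made purely to show the mechanism.

```r
set.seed(1)
n <- 1e6
sex       <- sample(c("M", "F"), n, replace = TRUE)
liability <- rnorm(n)                       # shared etiological load
# Higher autism threshold for females; values chosen to give roughly a 3:1
# male-to-female autism ratio.
thr    <- ifelse(sex == "M", 2.3, 2.7)
autism <- liability > thr
# The same load also raises comorbidity risk.
comorbid <- rbinom(n, 1, plogis(-3 + 1.2 * liability)) == 1

# Female/male odds ratio for comorbidity within a subgroup.
or <- function(keep) {
  tb <- table(sex[keep], comorbid[keep])
  (tb["F", "TRUE"] / tb["F", "FALSE"]) / (tb["M", "TRUE"] / tb["M", "FALSE"])
}
c(autism = or(autism), non_autism = or(!autism))
```

In this simulation, diagnosed females sit further above their (higher) threshold on average, so the female/male comorbidity OR comes out clearly above 1 in the autism group while staying near 1 in the non-autism group, mirroring the pattern reported above.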
Non-biological heterogeneity

As this study examined trends in diagnostic records, the observed results may not necessarily be caused by biological effects but could reflect non-biological patterns concerning the diagnostic process. In support of this interpretation, comorbidity most often occurred in the group diagnosed with autism in the same age range in which the given comorbid diagnosis would generally be given in the general population (Figure S3), 19,43-45 and 56% of comorbid diagnoses were given within six months of the same hospital contact as the first autism diagnosis. Symptom overlap between different diagnostic categories may explain the tendency for autism to be diagnosed simultaneously with a comorbidity. For example, there are symptom similarities between autism and ADHD, 11 which frequently co-occur in those diagnosed in mid-childhood, and diagnostic instruments have been found to have only modest specificity in a clinical setting, with ADHD responsible for the largest disagreement between standardized diagnostic scores and clinical judgement. 46 Similarly, several studies have found that scores of autistic traits are higher for individuals with depression or social anxiety than controls, 47,48 whereas the elevation of autism trait scores is smaller for individuals with depression in remission than for those not in remission. 49 Some individuals who develop depression or anxiety in late childhood may thus mistakenly also be diagnosed with autism because certain symptoms resemble autism symptoms, such as a lack of emotional facial expression or social withdrawal. In contrast, when autism has been diagnosed in early childhood, diagnostic overshadowing might cause symptoms of, for example, depression or anxiety that develop later to be attributed to autism instead of being diagnosed as comorbidity, 50 which may contribute to the relatively low frequency of some comorbid conditions in this group. A direct causal effect of autism diagnoses on the development of comorbid conditions may also contribute to the association between CRs and the age of autism diagnosis. For example, undiagnosed autism may lead to problems that could have been managed more effectively if an autism diagnosis had been given earlier. 51 This could potentially result in vulnerability to depression and anxiety and explain why individuals diagnosed with autism in late childhood are diagnosed with depression and anxiety more frequently than those who received an autism diagnosis earlier. The tendency for autism and comorbidities to be diagnosed closely after each other could be explained by the appearance of a comorbidity increasing the likelihood that a child be referred for psychiatric assessment, thus leading to the discovery of a previously unrecognized autism condition. This mechanism was also hypothesized by Joshi and coworkers, 52 who found that youths who were diagnosed with autism after being referred to a paediatric psychiatric centre were often given several additional diagnoses, such as ADHD, depression and anxiety. Furthermore, Aggarwal and Angus 51 found that when adolescents and young adults were diagnosed with autism, they had often initially been referred for assessment due to symptoms of mood disorders, anxiety or psychosis, which could support this hypothesis. This suggests that there might be a selection bias in the diagnosed autism population, with a larger presence of comorbid conditions than would be the case if all true autism cases had been identified. Such a selection bias may also contribute to the observed sex ratio disparity. A diagnostic sex bias has been suggested, requiring females to exhibit more symptoms to be recognized as having autism, 53,54 which would likely cause the comorbidity enrichment to be stronger among females than males, as females with autism without comorbidities would be even less likely to be diagnosed than their male counterparts.

Implications

The pronounced differences in childhood comorbidity rates between the groups diagnosed with autism at different ages could indicate biological differences between these groups. It is possible that the age of diagnosis also correlates with other aspects of autism symptomatology, and an increased understanding of the interaction between age of diagnosis and symptom profile would be beneficial for research as well as for clinical practice.
It is also relevant to investigate whether the difference in comorbidity rates of, for example affective disorders could partly reflect underdiagnosis within certain groups of the autism population, for example in those with low IQ and/or who may have difficulty verbalizing their symptoms. The association between comorbidity and age of first autism diagnosis could also be driven by a tendency for autism and comorbid conditions to be diagnosed closely after each other, because the likelihood of being diagnosed with autism might increase by the presence of other conditions. This may be partly explained by autism cases that are not noticed, until a comorbid condition develops, possibly accelerated by problems that could have been managed if an autism diagnosis had been given earlier. These cases may benefit from improved identification of autism before serious problems occur. Alternatively, the results may be explained by 'false positive' autism diagnoses where conditions such as mood disorders, psychosis or eating disorders result in symptoms that are mistaken for autism or temporarily amplify existing sub-clinical autism-like traits. In such cases, it might be beneficial to defer autism diagnoses until after other conditions have been managed. More generally, it might be worthwhile to consider a stronger focus on differential diagnosis, where the level of certainty required for an autism diagnosis is heightened in the presence of other psychiatric conditions with symptoms that overlap with autism. The specificity of autism diagnoses may be improved by further research into how the validity of autism diagnoses is affected by the presence of other conditions. The explanations mentioned above are not mutually exclusive, and it is possible that each contributes partly to the patterns observed.
Association of Higher Omega-6/Omega-3 Fatty Acids in the Diet with Higher Prevalence of Metabolic Syndrome in North India

Introduction

The epidemic of obesity and hypertension over the last two decades in the middle- and high-income countries is associated with a marked rise in the incidence of metabolic syndrome (MS), CAD, type 2 diabetes, myocardial infarction and stroke, and the total burden of cardiovascular disease (CVD) [1-4]. The metabolic syndrome is associated with a constellation of metabolic disturbances encompassing many of the risk factors for CVD [4]. MS appears to be a major cause of mortality and morbidity due to CVD and diabetes [5-9]. In the 1920s, Kylin, a Swedish physician, described MS as the clustering of hypertension, hyperglycemia, and gout [10]. However, the concept of the MS has existed for at least 80 years [4]. In 1947, Vague [11] drew attention to upper body adiposity (android or male-type obesity) as the obesity phenotype that was commonly associated with the metabolic abnormalities characteristic of type 2 diabetes and CVD. There is an urgent need for strategies to prevent the emerging global epidemic, as this syndrome appears to be a master of disguise, since it can present in various ways according to the various components that constitute the syndrome [3-9]. Reaven [12] described the MS as syndrome X, De Fronzo et al. [13] as the insulin resistance syndrome, and Kaplan [14] as the deadly quartet.
The MS represents a constellation of metabolic abnormalities including glucose intolerance (type 2 diabetes, impaired glucose tolerance, or impaired fasting glycaemia), insulin resistance, central obesity, dyslipidemia, and hypertension, which are well known risk factors for CAD [2,3]. Epidemiological studies suggest that primary risk factors such as physical inactivity and unbalanced nutrition, with consumption of excess calories, simple refined carbohydrates with a high glycemic index and load, high saturated fat (SF), trans fatty acids (TFA), a high omega-6/omega-3 ratio and lower monounsaturated fatty acids in the diet, are pro-inflammatory [8-15] and contribute to the escalating rates of obesity, MS and mortality due to CVD [1-16]. In the 5th century BCE, Confucius, the Chinese philosopher, taught his students: "Cereals, the basic, fruits the subsidiary, meat the beneficial and vegetable the supplementary". Therefore, the concept of eating a diet high in animal foods, with a preference for meat, possibly wild animals and birds rich in omega-3 fatty acids, and whole grain cereals, has been shaped over hundreds of years among the Chinese. While the Greek physician Hippocrates (600 BCE) advocated food as medicine, the adverse effects of Tamasic foods, characteristic of the Western diet, were proposed by the ancient Indian physicians Charaka and Sushruta in 600 BCE. Charaka, a Brahmin supposed to have lived at Taxila University in the north of India, proposed that "Heart attack is born by the intake of fatty meals, overeating, excess of sleep, lack of exercise and anxiety" (Charaka Sutra, 600 BC). Sushruta, a surgeon from the Vishwamitra family of Varanasi in East India who was in a position to perform surgery, described atherosclerosis as madroga: "Excess intake of fatty foods and lack of exercise causes obesity and narrowing of the channels taking blood to the heart. It is useful to use guggul, triphala and silajit in the treatment". The total fat and saturated fat intake as a percentage of total calories has continuously decreased in Western diets in the last 40 years, whereas omega-6 fatty acid intake has increased and omega-3 fatty acid intake decreased, resulting in a large increase in the omega-6/omega-3 ratio from 1:1 during evolution to 20:1 today in the Western world [7-9]. The ratio of omega-6 to omega-3 fatty acids has markedly increased to 45:1 among South Asians, due to a marked increase in the intake of sunflower, corn and soya bean oils [8,9]. These dietary changes in the composition of fatty acids are associated with a significant increase in the prevalence of obesity, hypertension and metabolic syndrome [17]. In the present study, we examine the association of the omega-6/omega-3 ratio of the diet with the risk of MS and its components, including hypertension, CAD and dyslipidemia.

Selection of subjects

We randomly selected 20 streets from the urban area of the city of Moradabad. From each street, blocks or clusters were randomly selected, and from each block, 40-100 adults aged 25 years and above were randomly selected based on the voters' list. When the random number fell on a subject who was <25 years old or not available, it was assigned to the next person on the list. We contacted 2422 urban subjects aged 25 years and above, of whom 220 (9.08%) refused to participate; the remaining 2002 (1016 men and 986 women) volunteered to be included in the study.
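This two-stage selection can be mimicked in a few lines of R. The sketch below is purely illustrative, with simulated voter-list data; the names and numbers are ours, not from the study.

```r
set.seed(1)
# Simulated voters' list: person id, street, and age.
voters <- data.frame(
  id     = 1:50000,
  street = sample(paste0("street_", 1:200), 50000, replace = TRUE),
  age    = sample(18:90, 50000, replace = TRUE)
)

streets <- sample(unique(voters$street), 20)  # stage 1: 20 random streets
contacted <- do.call(rbind, lapply(streets, function(s) {
  adults <- subset(voters, street == s & age >= 25)       # eligible: 25+ years
  adults[sample(nrow(adults), min(100, nrow(adults))), ]  # stage 2: up to 100 adults
}))
nrow(contacted)  # pool of contacted subjects
```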
Detailed interviews were performed with the help of a pretested and validated questionnaire, prepared according to the guidelines of the WHO and the Indian Council of Medical Research. Dietary assessment was made by a 7-day food intake record using questionnaires. Evaluation was by a physician- and dietitian-administered questionnaire, a physical examination with sphygmomanometer, and blood tests. The diagnosis of metabolic syndrome was based on WHO criteria (presence of 3 or more risk factors: hypertension, central obesity, type 2 diabetes or glucose intolerance), and subjects were graded according to the omega-6/omega-3 ratio of fatty acids in the diet.

Criteria

Body mass index was calculated, and obesity was defined as a body mass index of >30 kg/m² and above, and overweight as a body mass index of >25 kg/m² and up to 29.9. Figures according to the Indian consensus group criteria for overweight (>23 kg/m²) were also calculated. Central obesity was considered present when the waist-hip ratio was >0.90 in males and >0.85 in females, as suggested in previous studies [1,2]. Diabetes mellitus was diagnosed in the presence of fasting blood glucose >7.7 mmol/l (140 mg/dl) and postprandial glucose, 2 h after 75 g of oral glucose, >11.2 mmol/l (>200 mg/dl). Glucose intolerance was diagnosed in the presence of fasting glucose between 110 and 140 mg/dl and postprandial glucose between 180 and 200 mg/dl. It is difficult to assess tobacco intake, because it is consumed in various forms; cigarettes, beedies, Indian pipes, raw tobacco and chewing tobacco are all commonly consumed, and people use tobacco in more than one form. We therefore categorized users of any form of tobacco as smokers, as was done in previous studies. Individuals who admitted to ingesting alcohol more than once a week were categorized as alcohol consuming. Blood pressure was measured in the right arm (systolic and diastolic, phase V of Korotkoff) after 5 min of rest in the sitting position, according to WHO guidelines, with a single mercury manometer and by the same physician in all subjects. Hypertension was diagnosed when systolic blood pressure was 140 mmHg or more and diastolic blood pressure 90 mmHg or more. Nutrient intakes were calculated with the help of tables of the nutrient composition of Indian foods [18]. Individual clinical criteria for CAD included known CAD, an affirmative response to the Rose questionnaire, and electrocardiographic changes (Q wave changes, codes 1-1 and 1-2; ST segment depression or elevation, codes 4-1, 4-2 and 9-2; and T wave inversions, codes 5-1 and 5-2). Presence of all of these three criteria was taken as confirmation of the diagnosis of CAD. Prevalence rates of these electrocardiographic findings with and without clinical criteria for CAD are also given. A blood sample after an overnight fast was obtained from all subjects. Each participant was asked to drink 75 g of anhydrous glucose in 200 ml of water, and a second blood sample was collected after 2 h for analysis of glucose.

Statistical Analysis

The prevalence rates are given in percent and numerical variables as mean ± 1 standard deviation. The significance of the association of various risk factors was determined by multivariate logistic regression analysis. Odds ratios and 95% confidence intervals were calculated by multivariate analysis after adjustment for age and sex, using the overall prevalence of metabolic syndrome as the dependent variable. Subjects were classified based on the omega-6/omega-3 ratio in the diet, and the association of the various components of metabolic syndrome was demonstrated by the Mantel-Haenszel chi-square test and Kendall's tau test.
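As an illustration of the age- and sex-adjusted odds-ratio analysis described above, a minimal R sketch follows. It is not the study's code: the data are simulated, and the variable names, group labels and effect sizes are ours.

```r
set.seed(1)
# Simulated stand-in for the 2002 subjects; w6w3_group = diet group by
# omega-6/omega-3 ratio (<5.0, 5.0-10.0, >10.0), MS = metabolic syndrome (0/1).
n <- 2002
d <- data.frame(
  age        = sample(25:80, n, replace = TRUE),
  sex        = factor(sample(c("M", "F"), n, replace = TRUE)),
  w6w3_group = factor(sample(c("<5", "5-10", ">10"), n, replace = TRUE),
                      levels = c("<5", "5-10", ">10"))
)
# Illustrative rising MS risk across diet groups.
d$MS <- rbinom(n, 1, plogis(-2 + 0.5 * (as.integer(d$w6w3_group) - 1)))

fit <- glm(MS ~ w6w3_group + age + sex, family = binomial, data = d)
# Odds ratios with 95% Wald confidence intervals, adjusted for age and sex:
exp(cbind(OR = coef(fit), confint.default(fit)))
```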
Results We studied 2002 subjects 25years and above, from North India. The age and sex distribution of the sample were comparable with the age and sex ratio in the population of Uttar Pradesh. Table 1 shows the fatty acid intakes in the urban population studied compared to a rural population of the same district [19]. The Consumption of total, saturated fat, polyunsaturated fat and linoleic acid were slightly higher among men compared to women ( Table 1). The consumption of alpha-linolenic acid which is a short chain w-3 fatty acid was also higher in men compared to women. The ratio of w-6/w-3 fatty acid showed no significant difference among sexes. Table 2 shows the prevalence of coronary risk factors and metabolic syndrome in our subjects. The prevalence of coronary artery disease, diabetes mellitus, low HDL and tobacco intake were significantly higher among men compared to women. The overall prevalence of metabolic syndrome was 19.3% without any sex difference. Table 3 shows the prevalence of coronary risk factors, CAD, MS and its components in relation to w-6/w-3 ratio in the diet. This table shows that there was an overall increase in the prevalence of CAD, type 2 diabetes, hypertension, hypertriglyceridemia (>150mg/dl), low HDL, central obesity and metabolic syndrome among subjects consuming high w-6/w-3 ratio diet and the trend was significant for both men and women which is better depicted in the Figure 1. An increasing ratio of w-6/w-3 ratio in the diet was also associated with a rising trend in mean levels of body mass index, waist-hip ratio, blood pressures, Triglycerides, HDL cholesterol and fasting blood glucose and the trends were significant (Table 4). Table 5 shows that there was a significant positive rank correlation between the level of w-6/w-3 fat ratio in the diet and components of metabolic syndrome and coronary risk factors; mean age, body weight, body mass index, waist-hip ratio, systolic and diastolic blood pressures, total cholesterol, triglycerides, and fasting blood glucose. Multivariate logistic regression analysis showed that regardless of age, in relation to w-6/w-3 ratio in the diet, hypertriglyceridemia, HDL cholesterol, hypertension, central obesity, physical activity, fasting blood glucose were significantly associated with metabolic syndrome among both sexes. Hypercholesterolemia was not associated with MS among men. including 19.8% among men and 18.7% among women. MS has been observed in many ethnic groups and it is estimated that it is prevalent in approximately one fourth of the adult population of the world [7][8][9][10][11][12][13][14][15][16][17][18][19][20][21]. In a multicenter case control study from India, involving 5088 subjects with known type 2 diabetes, the overall prevalence of MS was 77.2%, and the rates were significantly higher among women compared to men, respectively (87.7 vs. 69.3%, P<0.0001) [20]. Important components of MS in this study were, hypertension, followed by hypertriglyceridemia in men and central obesity followed by hypertension in women. In another study from India, among 1806 urban subjects, aged 25-64 years, the prevalence of type 2 diabetes mellitus was 6.0%, hypertension 24.0% and CAD 9.0% [22]. Among subjects with diabetes, the prevalence of central obesity was 95.3%, CAD 23.5% and hypertension 51.6%, which are lower compared to prevalence of these components in Mumbai, West India. These differences in risk factors may be explained by the differences in diet and lifestyle factors in West and North India [23]. 
Table 6: Age-adjusted odds ratios and confidence intervals for the association of risk factors with metabolic syndrome in relation to the omega-6/omega-3 fat ratio, by logistic regression analysis. We observed a higher prevalence of CAD, hypertension, diabetes mellitus, hypertriglyceridemia, central obesity and MS, and a high prevalence of low HDL, among subjects consuming a high w-6/w-3 ratio (5.0-10.0 and >10.0) diet compared to subjects taking a low w-6/w-3 ratio (<5.0) diet, among both sexes, and the trends were significant (Table 3). The levels of mean systolic and diastolic blood pressures, serum triglycerides, and fasting blood glucose were significantly greater among subjects receiving high w-6/w-3 ratio diets, compared to those receiving low w-6/w-3 ratio diets, in both men and women (Table 4). We also observed a significant positive rank correlation between the w-6/w-3 ratio in the diet and body mass index, waist/hip ratio, blood pressures and fasting blood glucose (Table 5). There are no population-based studies from India regarding the association of CAD, MS and diabetes with the w-6/w-3 ratio in the diet; hence we cannot compare our results with other studies. Deficiencies of EPA and DHA have been observed in subjects of South Asian origin living in the UK despite adequate intake of ALA (1.0 to 1.6 g/day) in the diet; this may be due to decreased delta-6 and delta-5 desaturase enzyme activity, responsible for the conversion of ALA to EPA and DHA, or to their increased consumption in the tissues [9]. There is evidence that green leafy vegetables, whole grains, walnuts, flax seeds and canola or mustard oil are rich in alpha-linolenic acid (an w-3 fatty acid), and the Mediterranean diet or Indo-Mediterranean diet, which is rich in these foods, may be protective against cardiovascular disease [28][29][30][31][32][33][34][35]. In the Lyon Diet Heart Study [33], 605 patients after myocardial infarction were randomly assigned to a Mediterranean-style diet or a control diet resembling the National Cholesterol Education Program step 1 diet. The Mediterranean diet supplied more than 0.6% of energy from alpha-linolenic acid and <10% from saturated fatty acids, out of 30% of energy from fats. After a mean follow-up of 27 months, the risk of new AMI and episodes of unstable angina was reduced by 70% in the Mediterranean diet group compared to the control group. The Indo-Mediterranean Diet Heart Study was a randomized, single-blind trial conducted among 1000 patients at high risk of recurrent cardiac events [32]. Half of the patients (n=499) were administered a diet rich in whole grains, fruits, vegetables, walnuts and mustard or soya bean oil as a source of alpha-linolenic acid (ALA, w-3), and 501 patients were advised to consume a prudent diet. After 2 years of follow-up, the intervention group had received two-fold greater ALA compared with the control group (1.8 vs. 0.8 g/day), resulting in a marked decline in the w-6/w-3 ratio of the intervention diet relative to the control diet (mean ± SD 9.1 ± 12 vs. 21 ± 10, p<0.001). Total cardiac events were significantly fewer in the intervention group than in the controls (39 vs. 76 events, p<0.001). Sudden cardiac deaths were also decreased (6 vs. 16, p<0.015), as were nonfatal infarctions (21 vs. 43, p<0.001). These findings indicate that dietary changes may alter the w-6/w-3 ratio, which may be associated with a large reduction in CAD risk.
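As a purely illustrative recalculation from the event counts quoted above (39/499 intervention vs. 76/501 control), the implied risk ratio and a normal-approximation confidence interval can be obtained as follows; this is not an analysis reported by the trial itself.

```python
# Illustrative risk-ratio recalculation from the quoted Indo-Mediterranean trial event counts.
import math

events_int, n_int = 39, 499      # total cardiac events / subjects, intervention diet
events_ctl, n_ctl = 76, 501      # total cardiac events / subjects, control (prudent) diet

risk_int = events_int / n_int
risk_ctl = events_ctl / n_ctl
rr = risk_int / risk_ctl

# Standard error of log(RR) and a 95% normal-approximation confidence interval
se_log_rr = math.sqrt(1 / events_int - 1 / n_int + 1 / events_ctl - 1 / n_ctl)
ci_low = math.exp(math.log(rr) - 1.96 * se_log_rr)
ci_high = math.exp(math.log(rr) + 1.96 * se_log_rr)

print(f"risk ratio = {rr:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
# -> about 0.52 (0.36-0.74), i.e. roughly a halving of total cardiac events
```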
Further benefit may be observed, if soya bean oil is avoided by using more traditional mustered oil in the Indian diets, as observed in the rural Indian diets (Table 1). In a more recent study [36], Esposito et al randomized 180 patients with MS to a Mediterranean style diet or a step 1 diet with fat intake <30.0%. After 2 years, intervention group showed greater weight loss, had lower C-reactive proteins and other proinflammatory cytokines levels , less insulin resistance, lower total cholesterol and triglycerides and higher HDL-cholesterol levels and had a 50% decrease in the prevalence of MS. This study confirms that Mediterranean diet provide beneficial effects on inflammation, dyslipidemia as well as decrease insulin resistance and improve endothelial function in patients with MS. Recent studies indicate that w-3 fatty acids appears to be important in the pathogenesis of acute coronary syndrome and its complications; arrhythmias, heart failure and cardiac events [37][38][39]. Further evidence indicate that total fat, saturated fat and trans fat can enhance inflammation and visceral obesity resulting in to MS and treatment with w-3 fatty acids may be beneficial [40][41][42]. In a recent study, a total of 117 volunteers completed the 12-week trial. Participants in the 1-, 3-, and 6-portions/d groups reported consuming on average 1.1, 3.2, and 5.6 portions of fruit and vegetables, respectively, and serum concentrations of lutein and β-cryptoxanthin increased across the groups in a dosedependent manner [43]. For each 1-portion increase in reported fruit and vegetable consumption, there was a 6.2% improvement in forearm blood flow responses to intra-arterial administration of the endothelium-dependent vasodilator acetylcholine (P=0.03). There was no association between increased fruit and vegetable consumption and vasodilator responses to sodium nitroprusside, an endothelium-independent vasodilator. In an earlier randomized trial in patients with high risk of CVD, two third had MS, supplementation with fruits, vegetables, whole grains and nuts was protective against risk factors; dyslipidemia, hyperglycemia and oxidative stress which are components of MS [44]. Omega-3 fatty acids can regulate leptin gene expression and the concentrations of anandamides in the brain, which in turn binds to endogenous cannabinoid receptors and regulate food intake and satiety as well as weight gain [9]. Deficiency of w-3 fat can increase appetite, resulting into obesity and MS. CVD, diabetes mellitus, cancer, autoimmune diseases, rheumatoid arthritis, asthma and depression are associated with increased production of thromboxane A2, leucotrienes, interleukins-1 and 6, tumor necrosis factor-alpha and C-reactive proteins [31][32][33][34][35]. Increased dietary intake of w-6 fatty acids without consideration for w-3 fat is known to enhance all these risk factors as well as atherogenicity of cholesterol and oxidized LDL cholesterol which have adverse pro-inflammatory effects and may result into thrombosis and acute coronary syndrome (ACS), cancer, diabetes mellitus and metabolic syndrome. Omega-3 fatty acids are known to reverse all these biochemical adverse effects hence a low w-6/w-3 ration of 1:1 has been suggested in the Columbus concept [17]. Although, most workers working on dietary patterns do not mention the nutrient content of their prudent diet, but one single difference is the w-3 fatty acid, apart from other micronutrients, which is rich in fruits, leafy vegetables, nuts and whole grains. 
It would be very interesting to know the role of refined starches and sugar, large meals, decreased intake of fruits, vegetables, whole grains and nuts on inflammation and endothelial function and nitric oxide levels as risk predictors of MS and its components [44][45][46][47][48][49][50][51]. Fruits, vegetables, nuts, whole grains, animal foods rich in w-3 fatty acids are slowly absorbed and may prevent the increase in free fatty acids, and inflammation, which is a characteristic of MS. There is evidence that omega-6 and omega-3 fatty acids elicit divergent effects on body fat gain through mechanisms of adipogenesis, browning of adipose tissue, lipid homeostasis, brain-gut-adipose tissue axis, and most importantly systemic inflammation [53][54][55][56]. Prospective studies clearly show an increase in the risk of obesity as the level of omega-6 fatty acids and the omega-6/omega-3 ratio increase in red blood cell (RBC) membrane phospholipids, whereas high omega-3 RBC membrane phospholipids decrease the risk of obesity. Recent studies in humans show that in addition to absolute amounts of omega-6 and omega-3 fatty acid intake, the omega-6/omega-3 ratio plays an important role in increasing the development of obesity via both AA eicosanoids metabolites and hyperactivity of the cannabinoid system, which can be reversed with increased intake of eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA). A balanced omega-6/omega-3 ratio in the diet, in conjunction with reduction in refined carbohydrates and saturated fat are important for health and in the prevention and management of obesity, metabolic syndrome and other chronic diseases [57][58][59].
2019-03-17T13:10:41.250Z
2017-12-20T00:00:00.000
{ "year": 2017, "sha1": "ec1d3d22f9fb76da3329093007ecfdc437cc670c", "oa_license": "CCBYNC", "oa_url": "https://medcraveonline.com/MOJPH/MOJPH-06-00193.pdf", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "28d76caeca944e1ae58d99d223c606fba1f89216", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
235207297
pes2o/s2orc
v3-fos-license
On the Suitability of VLC Enabled Fronthaul for Future Mobile Network The quest to meet diverse services and applications that are emanating from the increasing inter-connected devices including machine-to-human communication has brought about consideration of massive deployment of small cells for fifth generation (5G) and Beyond network. With densification of small cells, mm-wave and THz frequencies play a significant role in delivering mobile services. Consequently, supporting radio frequency (RF) communication with visible light communications (VLC) links exhibits potential to ameliorate the challenges of RF communication, especially bandwidth limitations and interference. Capitalising on the strength of VLC, we propose a basic end-to-end system to experimentally evaluate the suitability of VLC as fronthaul using real-time mobile traffic. Successful downlink (DL) transmission was achieved via VLC. To investigate the performance of the VLC link, the transmission signal-to-noise ratio (SNR) and throughput of the VLC and RF links are measured and compared. The light emitting diodes (LEDs) and photo-diodes (PDs) have been shown to have significant effects on the performance of the VLC link. However, the results establish a fundamental premise for the suitability of VLC as fronthaul for small cell networks. Index Terms-RF over VLC, mobile fronthaul, visible light communications This work is funded by FCT/MEC through national funds and when applicable co-funded by FEDER-PT2020 partnership agreement under the project UID/EEA/50008/2019. Fernando P. Guiomar acknowledges a fellowship from "la Caixa" Foundation (ID 100010434) under the fellowship code LCF/BQ/PR20/11770015. I. INTRODUCTION The concept of facilitating communication through visible light has been conceived and practiced from time immemorial, with the use of a light source (light flashing from a semaphore or fireworks) to transmit messages in a sequential manner to receivers who in turn decode the message based on predefined formats agreed upon by both parties. Advancing from this fundamental concept, optical wireless communication (OWC) is evolving and consistently attracting significant research interest in diverse areas of application including mobile networks [1]- [3]. As an emerging
technology with dynamic research interests, OWC is proffering a vast number of solutions to complex communication challenges. With particular reference to evolving mobile network, high capacity, high data rate and reduced link interference for short-range communication are among obvious considerations to meeting the requirements of beyond 5G (B5G) mobile network to support innovative services and applications like virtual reality and augmented reality (AR) that consume high bandwidths. As dense small cells are envisaged in heterogeneous architecture like fifth generation (5G) and beyond network to achieve high capacity, mm-wave and THz wireless communication are receiving tremendous attention to facilitate support for diverse services including machine-type communication (MTC) and the internet of things (IoT). Considering the enormous bandwidth potential of OWC, desired highspeed data transmission at considerable minimal latency could be accomplished. Visible light communication (VLC) is a class of OWC that is operating in the visible band (390-750 nm) and offers huge bandwidth that could support 5G and beyond services [4]- [7]. Unlike radio frequency (RF) channels, VLC links are characterised by their immunity to electromagnetic interference, immense unlicensed spectrum, health safety compared to infrared (IR) communication, seamless integration into existing infrastructure, costeffectiveness and energy efficiency because of the use of light emitting diodes (LEDs) [8], [9]. Based on the aforementioned promising benefits offered by VLC, its adoption as mobile fronthaul for small cells for outdoor and indoor applications is increasingly being investigated [10], [11]. Apparently, VLC enabled fronthaul will help to avert the inherent limitations of RF in densely populated mobile cells while also serving as lighting facilities. Consequently, there are significant volume of research efforts geared towards diverse applications of VLC. An end-to-end connectivity solution proposed for dense urban scenarios is introduced in [12] by integrating 5G, PON and VLC technologies which, according to the authors, supports ubiquitous communication. Also, a hybrid VLC-OFDMA network model comprising a dynamic number of VLC hotspots is proposed in [13] to solve hotspot's multiuser access challenges. Furthermore, for a hybrid LiFi/WiFi network, [14] reports, using simulations, that an OFDMA based LiFi system shows better performance compared to TDMA systems. Similarly, hybrid power line communication (PLC)/VLC/RF fronthaul was considered in [15] to enhance the sum rate capacity of the hybrid system. In [16], a mobile fronthaul (MFH) consisting of low-cost lightemitting diode-based VLC links and a spectral efficient fiber is reported to achieve cell coordination in a less dense network environment. Considering available research works reported in the literature, using a real-time mobile signal for experimental demonstration of VLC based MFH is rarely mentioned. With this in mind, we developed an emulation of end-to-end LTE system based on software-defined radio (SDR) to generate real-time mobile traffic over a combined RF and VLC link. Further, we experimentally evaluate the performance of VLC link for transmission of the real-time mobile signal. This work provides a fundamental step towards investigating VLC based fronthaul for 5G and beyond mobile networks. 
To the best of our knowledge, this works presents for the first time, an experimental demonstration involving real-time mobile traffic for evaluation of VLC link. The rest of the paper is organized as follows. In Section II, we present a brief background information on VLC system while the experimental setup is described in Section III. Thereafter, we present and discuss our preliminary results in Section IV. Finally, conclusions and the future research direction are presented in Section V. II. A BASIC VLC SYSTEM Conceptually, a VLC system uses lighting facility for dual purposes of providing illumination and indoor communication simultaneously. A simplified structure of VLC system is represented in Figure 1, comprising a transmitter and a receiver and operating on intensity modulation/direct detection (IM/DD) scheme. A basic VLC transmitter is composed of an array of LEDs, a driver circuit for driving the LEDs by DC bias current and a modulator which modulates data onto rapidly switching light being emitted from the LEDs. Moreover, laser diodes can also be used in place of LEDs as lighting sources for VLC systems because they possess large bandwidths [17]- [19]. However, LEDs are cheaper but have limited bandwidth of a few MHz [20]. For the VLC receiver, a photo-diode (PD) module or an imaging sensor is normally used. PD uses semiconductor device to convert the received optical intensity to an electrical signal, while with a camera, the imaging sensor device captures visible light signal and recovers the transmitted data. Compared to imaging sensors, PDs are capable of supporting more than 1 Gsps (Giga-symbols per second), which is greater than 3000 times the symbol rate offered by imaging sensors as VLC receiver due to limitation of frame rate and non-uniform exposure to light [21], [22]. Key parameters considered for designing a PD include bandwidth, responsivity and noise level. The optical input power conversion to the electrical output current is represented by the PD responsivity. The transmitted optical intensity is normally affected with additive noise including thermal and shot noises. Furthermore, the channel impulse response is dependent on the position of the receiver relative to the transmitter, corresponding to line-of-sight (LOS) and nonline-of-sight (NLOS) parts of the received light. Meanwhile, several channel models have been proposed by researchers, however this is out of the scope of this paper. III. EXPERIMENTAL SETUP The experimental setup consists of a simple physical arrangement of a full end-to-end mobile network connectivity, a VLC assembly, a frequency mixer and local oscillator as shown in Figure 2. The end-to-end mobile network is realized by using srsLTE, an open source based software and two software defined radio (SDR) devices. By deploying srseNB on a commodity microprocessor system, monolithic processing of the baseband functions is accomplished. We selected USRP B210, SDR device made by Ettus/ National Instruments to provide RF frontend for the srseNB and srsUE. The USRP B210 operates on continuous frequency ranging from 70 MHz to 6 GHz and allows a 61.44 sampling rate which is in conformity with the sampling rates defined in 3GPP specifications. The USRP B210 provides interface to the user equipment (UE). As common with most of the SDR devices, USRP B210 allows flexible configurations and engenders analog-to-digital and digital-to-analog conversions close to the antenna. 
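To put the receiver figures of merit discussed in Section II (responsivity, shot and thermal noise) in concrete terms, the following minimal sketch estimates the photocurrent and a simple shot-plus-thermal-noise SNR for an IM/DD link. Every numerical value below is an assumed illustration, not a measurement from this testbed.

```python
# Illustrative IM/DD receiver budget: photocurrent and a simple shot + thermal noise SNR.
# All parameter values below are assumed for illustration; they are not taken from the testbed.
import math

R = 0.5          # PD responsivity [A/W] (assumed)
P_rx = 20e-6     # received optical power [W] (assumed)
B = 2e6          # electrical bandwidth [Hz], of the order of the 2 MHz signal used here
R_load = 50.0    # load resistance [ohm] (assumed)
T = 300.0        # temperature [K]
q = 1.602e-19    # electron charge [C]
k = 1.381e-23    # Boltzmann constant [J/K]

i_ph = R * P_rx                          # signal photocurrent = responsivity x optical power
var_shot = 2 * q * i_ph * B              # shot noise variance [A^2]
var_thermal = 4 * k * T * B / R_load     # thermal noise variance [A^2]

snr = i_ph**2 / (var_shot + var_thermal)
print(f"photocurrent = {i_ph*1e6:.1f} uA, SNR = {10*math.log10(snr):.1f} dB")
```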
However, for our setup, an RF cable is employed for the UL transmission while the downlink (DL) is provided by the VLC link. Also, the UE is developed by deploying the srsUE component of the srsLTE software on a general purpose processor (GPP). Interestingly, the srsLTE software has a facility for measuring metrics of the signal being transmitted and is used to capture the signal-to-noise ratio (SNR) in dB, which is presented in Figure 4. For the VLC system, we employ a low-cost LED module with 7 white LEDs for the transmitter section and a PD assembly for the receiver. A line-of-sight (LOS) distance of 0.5 m between the LED module and the PD assembly is considered to establish DL transmission. A local oscillator is used for up and down frequency conversion, as shown in Figure 2. It should be noted that the radio access network (RAN) operates at 2.68 GHz and 2.56 GHz RF for DL and UL, respectively (Band 7). Considering that the low-cost LEDs used for this work are limited by an intrinsic modulation bandwidth of a few MHz, as is typical of most commercial off-the-shelf (COTS) LED modules, the local oscillator performs down-conversion and up-conversion allowing an output frequency of 2 MHz that falls within the range supported by the LEDs. Consequently, the channel is configured with 1.4 MHz (6 Physical Resource Blocks (PRB)), 3 MHz (15 PRB), 5 MHz (25 PRB) and 10 MHz (50 PRB). The generated real-time signal is transmitted for each aforementioned bandwidth while the SNR and data rate of the VLC link for each channel configuration are measured. IV. PRELIMINARY RESULTS AND DISCUSSION For each channel bandwidth, transmission of real-time radio over the VLC link is established. Successful data transmission is achieved when the mobile network is operating at 1.4 MHz and 3 MHz, as shown by the measured data rates in Figure 3 and the measured SNR at the UE side in Figure 4. However, no successful data transmission is achieved on the 5 MHz and 10 MHz channel bandwidth configurations, as a result of signal degradation caused by the limited inherent modulation bandwidth of the COTS LEDs. Consequently, this effect is reflected in the measured SNR for transmission on the configured bandwidths. As expected, transmission at 3 MHz has better SNR than the 1.4 MHz channel bandwidth with RF (no VLC). However, the SNR characteristics differ when VLC is connected to the network, with the 1.4 MHz channel bandwidth showing higher SNR compared to the 18 dB SNR of the 3 MHz bandwidth, thus confirming the limitation of the commercially available LEDs [23]. The SNR pattern is further reflected in the mobile data rates shown in Figure 3, as measured by the iperf tool. While a maximum of 8.5 Mbps and 5.2 Mbps are achieved for 1.4 MHz and 3 MHz, respectively, when transmitting with RF (no VLC), 4.2 Mbps and 3.1 Mbps for 1.4 MHz and 3 MHz, respectively, are measured as the maximum DL data rates when transmitting via the VLC link. These preliminary results further illustrate the effect of the intrinsic modulation bandwidth of COTS LEDs. V. CONCLUSION In this work, an experimental demonstration of a real-time mobile signal over a VLC channel is presented, for the first time to our knowledge, by adopting a low-cost VLC system, leading to successful transmission to a UE from the base station of an end-to-end LTE mobile network.
The results provide a foundation towards the adoption of VLC technology for 5G and beyond network. Furthermore, the preliminary results demonstrate the effect of intrinsic modulation bandwidth as occasioned by COTS LEDs. Notwithstanding, successful transmission can be achieved at higher channel bandwidths with more sophisticated LED modules and by adopting MIMO techniques which involve the use of multiple LED elements. Further enhancement of the VLC performance could be introduced as a future work with the adoption of micro-LEDs to support gigabits per second (Gbps) data rates. However, the challenge of weak radiation power of micro-LED has to be investigated to achieve practical demonstration of micro-LED based VLC.
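As rough, back-of-the-envelope context for the throughput figures reported above, the Shannon capacity of the configured channel bandwidths at the quoted SNR levels can be computed as below. The SNR value used for the 1.4 MHz case is an assumed placeholder, since only the 3 MHz figure (18 dB) is quoted explicitly.

```python
# Back-of-the-envelope Shannon capacity C = B*log2(1 + SNR) for the configured bandwidths.
# The 18 dB SNR for 3 MHz is quoted in the text; the 22 dB value for 1.4 MHz is an assumption.
import math

def shannon_capacity_mbps(bandwidth_hz: float, snr_db: float) -> float:
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear) / 1e6

for bw_hz, snr_db, measured_mbps in [(1.4e6, 22.0, 4.2), (3.0e6, 18.0, 3.1)]:
    cap = shannon_capacity_mbps(bw_hz, snr_db)
    print(f"{bw_hz/1e6:.1f} MHz @ {snr_db:.0f} dB: capacity bound ~{cap:.1f} Mbps, "
          f"measured over VLC {measured_mbps} Mbps")
# The measured LTE throughputs sit well below the Shannon bound, which is expected
# given LTE control/reference-signal overhead and the limited LED modulation bandwidth.
```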
2021-05-27T13:43:32.082Z
2021-02-11T00:00:00.000
{ "year": 2021, "sha1": "0e7fbd0405f3ba5b4233bea77dafa5e1f3e714c9", "oa_license": "CCBY", "oa_url": "https://zenodo.org/record/4503099/files/ConfTele2021_VLC_5G_cameraReady.pdf", "oa_status": "GREEN", "pdf_src": "IEEE", "pdf_hash": "0e7fbd0405f3ba5b4233bea77dafa5e1f3e714c9", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [] }
16331903
pes2o/s2orc
v3-fos-license
Prevalence of genital chlamydia infection in urban women of reproductive age, Nairobi, Kenya Background Chlamydia trachomatis is one of the major causes of sexually transmitted infections throughout the world. Most infections are asymptomatic and remain undetected. Burden of disease in the Kenyan population is not well characterised. This study was done to define the prevalence of genital Chlamydia infection in a representative female population. Findings A cross-sectional study design was employed. All women attending out-patient clinics (antenatal, gynaecology, family planning) and accident and emergency departments at two study sites over a five month period were invited to consent to completion of a questionnaire and vaginal swab collection. A rapid point-of-care immunoassay based test was performed on the swabs. Women who tested positive for Chlamydia were offered treatment, together with their partner(s), and advised to come for a follow-up test. A total of 300 women were tested. The prevalence of genital Chlamydia trachomatis was found to be 6% (95% CI 3.31% – 8.69%). The prevalence was higher in women who represented a higher socioeconomic level, but this difference was not significant (p=0.061). Use of vaginal swabs was observed to be a more acceptable form of sample collection. Conclusion The prevalence of genital Chlamydia is significant in our female population. There is a justifiable need to institute opportunistic screening programs to reduce the burden of this disease. Rapid and low cost point-of-care testing as a potential component of sexually transmitted infection (STI) screening can be utilised. Introduction In Kenya like in many African countries, the syndromic approach is used in the treatment of genital infections. However, this may not be entirely effective for Chlamydia trachomatis due to the asymptomatic nature of this infection in many women and the possibility of missing selected infections. Failure to detect infection can have severe consequences including ectopic pregnancy, infertility and pelvic inflammatory disease caused by fibrosis and scarring due to the repair of tissue damaged by Chlamydia induced inflammation [1]. With newer diagnostic methodologies available, testing is simple and less technically demanding. Rapid pointof-care diagnostic tests are particularly important in developing countries, where access to laboratories may be limited and patients are often unable or unwilling to return for test results or treatment [2,3]. Non invasive specimen types are preferable because they overcome some of the barriers associated with the treatment of sexually transmitted infections by being more accessible to the population at risk [4][5][6]. In addition, it provides the opportunity to diagnose, treat and counsel during the same visit. The Chlamydia Rapid Test W , which was used as the diagnostic tool in this study, is an immunoassay based test that detects chlamydial lipopolysaccharide (LPS). This test uses vaginal swabs as the specimen type and provides a same day result. Performance evaluation indicates that this point of care test can be used for diagnosis of chlamydial infection because of its good sensitivity (83.5%), specificity (98.9%), negative and positive predictive values (98.6% and 86.7% respectively) [2]. 
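Given the sensitivity and specificity quoted above, the predictive values expected at a particular local prevalence can be recomputed with Bayes' rule. The sketch below uses the 6% prevalence reported later in this study and is an illustrative calculation, not a figure from the test's evaluation.

```python
# Illustrative Bayes calculation: PPV/NPV of the rapid test at a given prevalence.
# Sensitivity/specificity are the quoted evaluation figures; the 6% prevalence is this study's estimate.
sens, spec = 0.835, 0.989
prevalence = 0.06

ppv = (sens * prevalence) / (sens * prevalence + (1 - spec) * (1 - prevalence))
npv = (spec * (1 - prevalence)) / (spec * (1 - prevalence) + (1 - sens) * prevalence)

print(f"PPV at 6% prevalence ~ {ppv:.1%}, NPV ~ {npv:.1%}")
# -> roughly PPV 83% and NPV 99%, i.e. predictive values remain high at this prevalence.
```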
Treatment of infection is also effective, inexpensive and readily available and community screening programs have been instituted in some countries to control the progression of disease, prevent the transmission to current or new partner(s), and allow contact tracing, testing and treatment [7]. Study design and methodology The aim of this study was to assess the public health burden of genital Chlamydia infection in sexually active females of reproductive age. A prospective crosssectional study conducted at two hospital sites within Nairobi, Kenya was conducted. The two sites chosen represented a population with different socio-economic status. Adult women aged, 18-45 years, who were currently sexually active and were attending the outpatient clinics and accident and emergency departments at the two study sites during the study period were recruited ( Figure 1). Exclusion criteria included women who declined to give informed consent, those who were not accessible for follow-up after testing and those who were on chronic antibiotic treatment. A sample size of 300 was predetermined and calculated using prevalence rates from previous studies conducted in Kenya. Chlamydia testing was conducted on site, either by the principal investigator or a qualified nurse, deemed competent to run the test. The test had an in-built procedural control and known positive and negative control samples (supplied with each kit) were run concurrently with test samples. SPSS (version 15.0) and Microsoft Excel were used for data analysis. Research and Ethics committees of both participating hospitals (Aga Khan University Research Ethics Committee and St. Mary's Mission Hospital, Nairobi, Kenya) approved the study. Results Patients were recruited over a period of five months, July to November, 2010. 150 women from each study site (a total 300 women) fulfilled the eligibility criteria (see Figure 1). Women were recruited equally from the different clinics sampled. Table 1 below highlights the social and demographic differences between the two study populations. Patients from site one were generally of a higher socioeconomic standing than those from site two; parameters used to assess this difference included patient education, monthly income and rent. More patients, 51% (76/150), from site one had a graduate education compared to 3.3% (5/150) from site two. Only 14% (21/150) of patients from site one had no income compared with 35% (53/150) from site two. 55.6% (10/18) of women with a positive Chlamydia test result were older than 20 years old at first sexual encounter. No significant difference was present when odds ratios were calculated to determine if age at sexual onset (p=0.753), duration of sexual activity (p=0.57) and number of lifetime partners (p=0.928) predisposed to genital Chlamydia infection (Table 1). Patients who tested positive were counselled about the infection and offered treatment for themselves and their partner(s). One patient refused treatment. Four patients (22.2%) did not return for follow-up testing. Of the thirteen patients who returned for a follow-up testing one patient re-tested as positive. However, on repeat testing one month later the test was negative without a second dose of antibiotics. Discussion The overall prevalence of 6% is comparable with previous published reports from Kenya, which showed varied prevalence rates of between 4.2% and 21% [9][10][11]. 
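The 95% confidence interval quoted for the 6% prevalence can be reproduced from the raw counts (18 positives out of 300 women) with a standard normal-approximation (Wald) interval; this is a minimal sketch of that calculation.

```python
# Reproducing the quoted prevalence CI from the raw counts with a Wald (normal approximation) interval.
import math

positives, n = 18, 300
p_hat = positives / n                                   # 0.06
se = math.sqrt(p_hat * (1 - p_hat) / n)
lower, upper = p_hat - 1.96 * se, p_hat + 1.96 * se

print(f"prevalence = {p_hat:.1%}, 95% CI {lower:.2%} - {upper:.2%}")
# -> 6.0%, 95% CI 3.31% - 8.69%, matching the interval reported in the paper.
```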
Although the target populations in those studies differed from ours, it is interesting that the prevalence rates remain similar indicating that the burden of disease has not decreased in recent times suggesting that relevant intervention strategies are still necessary. The high prevalence noted among women of greater socioeconomic standing suggests that presence of infection in this group has been largely ignored by previous studies from this region. Our study provides new information and highlights the need to institute active opportunistic screening programs across all socioeconomic groups. Few studies from Kenya or Africa have examined the prevalence of genital Chlamydia with reference to age unlike in the developed world, where this is the fundamental focus. Investigators have found that the prevalence of Chlamydia infection is highest in younger women especially those under the age of 25 years [12,13]. Although this study was not sufficiently powered to determine the effects of age on prevalence rate, the prevalence rate was highest in women aged 25-30 years, 2%, emphasising that even in our population young women are at higher risk of being infected. Due to ethical concerns, our study population was aged between 18 and 45 years, excluding sexually active teenage girls, which could imply that we might have missed an age group with higher burden of infection. In our study, 50% (150/300) of the women were older than 20 years of age at the time of first sexual encounter. Studies differ, revealing the disparities in social and sexual behaviour and cultural backgrounds between different countries [13,14]. There was no significant difference between the age of first sexual encounter and infection with Chlamydia. The reason for this might be two-fold; first our study population did not include women younger than 18 years and second our sample size was not large enough to detect a difference in the prevalence of infection with respect to age. Patients of a higher socioeconomic standing, with a higher prevalence of genital Chlamydia infection, had more than two lifetime partners. Our findings reiterate what other studies have shown and it is postulated that multiple partnerships may increase the likelihood of encountering a sexually transmitted pathogen through the increased probability of choosing a partner with infection, while having new or casual sexual contacts may be related to increased risk because of a reduced familiarity between partners [14,15]. The majority of patients who tested positive for Chlamydia were asymptomatic, highlighting the inadequacy of using syndromic management in such patients. This finding reinforces the need to institute screening programs with the use of low cost and rapid point of care testing to prevent potential spread of infection in susceptible populations. This is the first study in Kenya using a rapid point-of care diagnostic test with a non invasive vaginal swab as the specimen type enabling patients to get tested and treated within one clinic visit. The majority preferred the non invasive vaginal swab as compared to the conventional endocervical swab for specimen collection. Rates of contraceptive use were low, possibly due to most women being married and in a monogamous relationship. Few women preferred barrier contraceptives therefore increasing their risk of becoming infected with genital Chlamydia and indeed with other sexually transmitted infections. 
Most women had little or no knowledge about genital Chlamydia trachomatis as campaigns on sexually transmitted infections in this region tend to focus more on other infectious agents such as HIV, gonorrhoea and herpes simplex. The asymptomatic nature of this infection accentuates the need to educate patients about associated risk factors and available testing modalities. Conclusion Prevalence of genital Chlamydia remains significant in our population especially in women living in highly urbanised areas. This new information provides evidence for the need to implement active opportunistic screening in young sexually active women in our population in patients of all socioeconomic groups. Point of care tests can be employed for infection detection, enabling rapid screening of large numbers with provision of a same day result and treatment if required. There is a considerable gap in current awareness about genital Chlamydia and an urgent need to prioritise patient and community education, so that young sexually active females are aware of the inherent risk factors that can predispose them to infections.
2017-06-27T01:49:49.457Z
2013-02-04T00:00:00.000
{ "year": 2013, "sha1": "8ebacbbb0839ba41db9060a0dc3f1552849511fd", "oa_license": "CCBY", "oa_url": "https://bmcresnotes.biomedcentral.com/track/pdf/10.1186/1756-0500-6-44", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8ebacbbb0839ba41db9060a0dc3f1552849511fd", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
253827148
pes2o/s2orc
v3-fos-license
IUP-BERT: Identification of Umami Peptides Based on BERT Features Umami is an important widely-used taste component of food seasoning. Umami peptides are specific structural peptides endowing foods with a favorable umami taste. Laboratory approaches used to identify umami peptides are time-consuming and labor-intensive, which are not feasible for rapid screening. Here, we developed a novel peptide sequence-based umami peptide predictor, namely iUP-BERT, which was based on the deep learning pretrained neural network feature extraction method. After optimization, a single deep representation learning feature encoding method (BERT: bidirectional encoder representations from transformer) in conjugation with the synthetic minority over-sampling technique (SMOTE) and support vector machine (SVM) methods was adopted for model creation to generate predicted probabilistic scores of potential umami peptides. Further extensive empirical experiments on cross-validation and an independent test showed that iUP-BERT outperformed the existing methods with improvements, highlighting its effectiveness and robustness. Finally, an open-access iUP-BERT web server was built. To our knowledge, this is the first efficient sequence-based umami predictor created based on a single deep-learning pretrained neural network feature extraction method. By predicting umami peptides, iUP-BERT can help in further research to improve the palatability of dietary supplements in the future. Introduction Umami taste determines the deliciousness of foods. Many foods possess umami ingredients, such as meat products [1,2], mushroom [3], soy sauce [4], seafoods [5], and fermented foods [6]. In addition to sweet, bitter, salty, and sour, umami taste was recognized as the fifth taste, which is characterized as a meaty, savory, or broth-like flavor [7]. The perception of sweet, bitter and umami taste is inspired by the binding of taste components to the G protein-coupled receptor [8,9]. The main umami taste receptor is an independent heterodimeric T1R1/T1R3 receptor [10,11]. Umami ingredients are widely used in food production, with several health benefits [12]. Umami peptides are a group of specific structural peptides, which endow foods with a favorable umami taste [6]. The primary structure of umami peptides is usually short linear peptides, with a molecular weight distribution of less than 5000 Da. Dipeptides and tripeptides account for approximately 60% of the isolated umami peptides [3,10]. Longer linear peptides, including pentapeptides, hexapeptides, heptapeptides, and octapeptides, were also discovered to possess strong umami intensity [1,2,5,13]. The binding mechanism of umami peptides to the taste receptor was distinguished from that of other umami ingredients, indicating their special state-of-the-art performance was obtained for various downstream tasks [32]. With a global receptive field, BERT can effectively capture more global context information than the convolutional neural network-based models. Recently, BERT has achieved gratifying results in the prediction of various functional peptides, such as bitter peptides [33], antimicrobial peptides [34], and human leukocyte antigen peptides [35]. Soft symmetric alignment (SSA) has defined a brand-new method to compare arbitrary-length sequences within vectors [36]. An initial pretrained language model is used to encode a peptide sequence, as a three-tier stacked BiLSTM encoder output is meanwhile utilized. 
Each peptide sequence creates the final embedding matrix by employing a linear layer, R L×121 , in which L represents the peptide length. In the SSA embedded model, the model was trained and optimized using the SSA strategy [37,38]. Here, we created a novel ML-based predictor, namely iUP-BERT, which employed a deep learning pretrained neural network feature extraction method for model development. For model performance improvement, the synthetic minority oversampling technique (SMOTE) [39] was applied first to overcome the data imbalance. To achieve higher prediction accuracy, the pretrained sequence embedding technique SSA or BERT was then combined with five different ML algorithms (KNN, LR, SVM, RF, and light gradient boosting machine (LGBM) [38]) to build several models. The features of the BERT method combined with the SVM model were finally selected and used to raise the prediction efficacy after optimization. The results from both the 10-fold cross-validation and the independent test showed that the application of the deep representation learning BERT method remarkably improved the model performance in identifying umami peptides. iUP-BERT achieved higher accuracy than existing methods based on peptide sequence information alone. Figure 1 illustrates the overall framework of iUP-BERT. The main steps are as follows: 1. Upon the introduction of the peptide sequence, the pretrained sequence embedding technique, BERT, was used for feature extraction. For comparison, the SSA sequence embedding technique was included. 2. After the feature extraction, BERT was fused with SSA to make an 889D fusion feature vector. 3. SMOTE was used to overcome the data imbalance. 4. For feature space optimization, the LGBM feature selection method was used. 5. Five different ML algorithms (KNN, LR, SVM, RF, and LGBM) were combined with the above techniques to build several models. 6. The optimized feature representations were combined to establish the final iUP-BERT predictor. Datasets For fair comparison, the same peptide datasets (Supplementary File S1) used in previous umami peptide ML models were chosen [24]. In the datasets, 140 peptides either from experimentally validated umami peptides [10,15,16,20] or from the BIOPEP-UWM databases [40] were taken as positive samples, whereas the negative samples were 302 non-umami peptides, identified as bitter peptides [41,42]. All peptide sequences in both the positive and negative samples were unique. The training dataset includes 112 umami and 241 non-umami peptides. The independent test dataset contains 28 umami and 61 non-umami peptides. Figure 1. The overall framework of iUP-BERT: (1) The peptide sequence was included as text and feature-extracted by the BERT model and SSA method. (2) The 768D BERT extracted feature was fused with the 121D SSA extracted features to make an 889D fusion feature vector, with the individual feature vectors as comparison. (3) The SMOTE method was used to overcome the data imbalance. (4) The LGBM feature selection method was used to attain the best feature combinations. (5) Five different ML algorithms (KNN, LR, SVM, RF, and LGBM) were combined with the above techniques to build several models. (6) The final iUP-BERT predictor was established by combining the optimized feature representations.
Here, BERT is for Bidirectional Encoder Representations from Transformers; SSA is for soft symmetric alignment; SMOTE is for Synthetic Minority Oversampling Technique; LGBM is for Light Gradient Boosting Machine; D is for Dimension; KNN is for K-Nearest Neighbors; LR is for Logistic Regression; SVM is for Support Vector Machine; RF is for Random Forest. Feature Extraction To extract distinct and effective features for umami peptide recognition, two deep representation learning feature extraction methods, the pretrained SSA sequence embedding model and the pretrained BERT sequence embedding model, were used. Meanwhile, the training data were either balanced with the SMOTE method or left unchanged. To identify specific umami peptides, the models were then trained on the corresponding (original or SMOTE-balanced) training data. More comprehensive predictive models were created after comparison of the different feature encoding schemes. Pretrained SSA Embedding Model SSA defines a brand-new approach to compare arbitrary-length sequences within vectors [36]. An initial pretrained model is utilized to encode a peptide sequence, while a three-tier stacked BiLSTM encoder output is utilized (Figure 1). Each peptide sequence creates the final embedding matrix by employing a linear layer, R L×121 , in which L represents the peptide length. A model like this, which was trained and optimized by the SSA method, is called an SSA embedded model. Consider two embedding matrices in R L×121 , named P 1 and P 2 , for two distinct peptide sequences with varying lengths L 1 and L 2 , where α i and β j represent their 121D row vectors. If each amino acid sequence is encoded into a vector representation sequence, called P 1 and P 2 , an SSA mechanism is used to calculate the similarity between the two amino acid sequences. Based on their embedded vectors, the similarity between the two sequences was determined by the SSA scoring scheme, in which the soft alignment weights ω ij are computed from the pairwise distances between α i and β j (Formulas (4)-(7)). Because the SSA is fully differentiable, these parameters are back-propagated to the sequence encoder parameters. Each individual peptide sequence was transformed into an embedding matrix in R L×121 using the trained model, and a 121D SSA feature vector was produced by an average pooling procedure. Pretrained BERT Embedding Model BERT is a powerful natural language processing-inspired deep learning method [31]. The core of BERT is a transformer language model which has a variable number of encoder layers and self-attention heads, as shown in Figure 1. It provides a pretraining and fine-tuning approach, using enormous amounts of unlabeled data [32,33]. Here, the traditional BERT architecture was used to construct a BERT-based peptide prediction model (Figure 1). There is no need to systematically design and select feature encodings in advance. Peptide sequences were taken as input directly and passed on to the BERT method to generate feature descriptors automatically.
First, the peptide sequences were converted into the token representation of k-mers as input, and the positional embedding was added to obtain the final input token. Then, the semantics of the context was captured through the multi-head self-attention model. Certain adjustments were made through linear transformation, thus ending the forward propagation of the first layer (as shown in Figure 1) There are 12 such layers in the model. The result was used for the pretraining task of BERT. The mask task is still the traditional method, covering the part and then predicting, and backpropagating through the cross-entropy loss function. A 768D BERT feature vector was produced by the BERT-trained model. Feature Fusion To obtain the most superior feature combination, the 121D SSA eigenvector was combined with the 768D BERT eigenvector, which generated the 889D SSA+BERT fusion feature vector. Synthetic Minority Oversampling Technique (SMOTE) SMOTE is also called the "artificial minority oversampling method". It is an improved scheme based on the random oversampling algorithm [39]. The random oversampling algorithm generates additional minority samples through adopting a simply copying samples strategy. As a result, it has the risk of model overfitting, where the feature information is too specific and not general enough. The SMOTE method can effectively achieve the class balance in training data [43]. The basic idea is to analyze the minority samples, synthesize new categories of samples accordingly, and add artificially simulated new samples to the dataset. Briefly, the sampling nearest neighbor algorithm calculates the KNN of each minority class sample [43]. N samples are randomly selected from K neighbors for random linear interpolation to construct new minority class samples. Combination was made subsequently between the new samples and the original data to create a new training set. The program is kept running until the data imbalance meets the relevant requirements. Machine Learning Methods Five commonly used high-performance ML models were used for modeling. The k-nearest neighbor algorithm (KNN) model [25] is to find the K sample that is most similar as the given new sample, or the K sample that is "closest to it". If most of the K samples belong to a certain class, the sample also belongs to the same class. Logistic regression (LR) [27] is a generalized linear model. It uses the sigmoid function to simulate the data distribution and act as the dividing line between positive and negative samples. The support vector machine (SVM) [28,29] is to find a segmentation curve that maximizes the closest distance (also known as the interval) between data points of different classes. For binary classification, SVM is to get the furthest classification boundary and to make sure that the slight deviation of data would not have much impact. Random forest (RF) [26] is an ensemble learning algorithm. It uses the samples with retractable samples to train multiple decision trees. Each node of the training decision tree only uses the partial features of the sampling, and it votes with the prediction results of these trees during the prediction. The voted majority class of a sample is the class to which the sample belongs. Lighting gradient boosting machine (LGBM) [38] adopts the histogram algorithm. It converts continuous floating-point features into k discrete values, and constructs the histogram with a width of k. 
Then, the training data are traversed and the cumulative statistics of each discrete value in the histogram are collected. It uses a depth-limited leaf-wise strategy and supports parallel computing. Performance Evaluation Six widely used binary classification metrics were applied for performance evaluation: accuracy (ACC), Matthews correlation coefficient (MCC), sensitivity (Sn), specificity (Sp), balanced accuracy (BACC), and the area under the ROC curve (auROC) [44][45][46][47][48]. Here, TP is the true positive sample number of umami peptides, TN is the true negative sample number of non-umami peptides, FP is the false positive sample number of non-umami peptides, and FN is the false negative sample number of umami peptides. The receiver operating characteristic curve (ROC) is a curve drawn according to a series of different classification thresholds (boundary values or decision thresholds), with the true positive rate (sensitivity) as the ordinate and the false positive rate (1 − specificity) as the abscissa. The ROC displays the relationship between true positives and false positives at different confidence levels [12,35,49]. Nevertheless, the ROC curve cannot clearly indicate which classifier is superior. Thus, the area under the receiver operating characteristic curve (auROC) is usually adopted as an additional metric for model evaluation; the classifier with a larger auROC value performs better. The auROC values for the proposed models were computed and used for comparison with previously reported models. For model evaluation, the widely used K-fold cross-validation method and independent testing method were adopted. Firstly, K-fold cross-validation was applied for model training and validation based on the training set. In this study, the K value was 10. That is, the training set was randomly divided into ten parts, of which nine were used for training and one for validation. The performance of the trained model was evaluated by the average of the 10 validation scores. Independent testing used additional new data, not in the training set, to test and evaluate the trained model. A good model requires good metric values for both K-fold cross-validation and independent testing. Preliminary Performance of Models Trained with or without SMOTE To overcome the data imbalance in modeling, the SMOTE method was first applied. Meanwhile, to explore the embedding feature types in umami peptides, different models were built based on two deep representation learning feature extraction methods, the pretrained SSA embedding model and the pretrained BERT embedding model, in combination with five distinct widely-used ML algorithms (KNN, LR, SVM, RF, and LGBM). The performance of the different combination models pretrained with or without SMOTE was compared by performing the repeated stratified 10-fold cross-validation tests 10 times (Figure 2).
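The formulas behind these metrics are the standard confusion-matrix definitions; the short sketch below restates them in code. The numeric counts in the example are made up purely for illustration.

```python
# Standard confusion-matrix metric definitions used throughout the evaluation.
# The example counts at the bottom are made up purely for illustration.
import math

def metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    sn = tp / (tp + fn)                      # sensitivity (recall on umami peptides)
    sp = tn / (tn + fp)                      # specificity (recall on non-umami peptides)
    acc = (tp + tn) / (tp + tn + fp + fn)    # accuracy
    bacc = (sn + sp) / 2                     # balanced accuracy
    mcc_den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / mcc_den if mcc_den else 0.0
    return {"ACC": acc, "Sn": sn, "Sp": sp, "BACC": bacc, "MCC": mcc}

print(metrics(tp=25, tn=55, fp=6, fn=3))
```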
For 10-fold cross-validation results (Figure 2), all five algorithm models using the SMOTE method based on either the SSA or BERT feature performed better across five metrics (ACC, MCC, Sn, auROC, and BACC) than the models not using SMOTE, with Sp as the exception. The scores after model parameter optimization are listed in Table 1. For example, the average ACCs of KNN, LR, SVM, RF, and LGBM based on SSA with SMOTE are 0.842, 0.857, 0.917, 0.915, and 0.917, respectively, which exceeded that of the models For 10-fold cross-validation results (Figure 2), all five algorithm models using the SMOTE method based on either the SSA or BERT feature performed better across five metrics (ACC, MCC, Sn, auROC, and BACC) than the models not using SMOTE, with Sp as the exception. The scores after model parameter optimization are listed in Table 1. For example, the average ACCs of KNN, LR, SVM, RF, and LGBM based on SSA with SMOTE are 0.842, 0.857, 0.917, 0.915, and 0.917, respectively, which exceeded that of the models without SMOTE by 1.08%, 10.44%, 10.88%, 9.45%, and 7.63%, respectively. A similar improvement was also observed in the 10-fold cross-validation results based on the BERT feature ( Figure 2 and Table 1) Although the best Sp values based on the SSA feature with SMOTE (0.913) were lower than those of the model without SMOTE (0.938), the overall best Sp score (0.959) was still obtained from the BERT feature optimized using the SMOTE method. For SMOTE performance in the independent test of the SSA or BERT feature vector (Table 1), still, the best scores were achieved using the SMOTE method across the five metrics. Take values based on SSA for example, the ACC is 0.866, with MCC to be 0.683, Sn to be 0.814, auROC to be 0.916, and BACC to be 0.825. These results indicate that increasing the sampling with SMOTE could effectively overcome the data imbalance and improve model performance in predicting umami peptides. Particularly, we noted that the BACC scores based on the five algorithms in the cross-validation results were the same as ACC with SMOTE being used ( Figure 2 and Table 1) As the metric BACC reflected the level of data balance, the data became balanced after SMOTE application, and BACC became redundant. Similar results were observed in the subsequent cross-validation analysis with SMOTE. LGBM: light gradient boosting machine. "−" indicates without the SMOTE method; "+" indicates with the SMOTE method. The Effect of Different Feature Types Meanwhile, from the cross-validation results ( Figure 2 and Table 1), the BERT feature vector developed using the SVM algorithm with SMOTE method performed best out of all the combinations tested across the five metrics (ACC, MCC, Sp, auROC, and BACC) Among them, ACC was 0.923 (0.65-18.9%) higher than the other options, with MCC being 0.849 higher by 1.67-75.0%, Sp being 0.959 higher by 2.24-33.0%, auROC being 0.884 higher by 1.76-20.9%, and BACC being 0.923 higher by 0.65-20.0%. Nevertheless, the SSA feature vector conjugated with KNN and SMOTE algorithms outperformed all the BERT combinations across the Sn metric (0.962) Regarding the performance of the BERT feature vector based on SVM with SMOTE in the independent test (Table 1), ACC was 0.876 lower by 2.03% compared with that of the BERT feature based on RF using SMOTE, with MCC being 0.706 lower by 11.0%, Sn being 0.714 lower by 21.1%, Sp being 0.951 higher by 7.09%, auROC being 0.926 lower by 4.63%, and BACC being 0.832 lower by 7.24%. 
Yet, the BERT-SVM-SMOTE combination was still considered the best model out of all the combinations.

The Effect of Feature Fusion

To further improve the model performance and obtain more information, the SSA and BERT features were combined into fusion features. The fusion feature was combined with the five algorithms (KNN, LR, SVM, RF, and LGBM) to train baseline models and improve model performance. Table 2 displays the 10-fold cross-validation and independent testing results of the SSA-BERT fusion features with or without SMOTE. The performance metrics of the individual and fused features with SMOTE according to the ML methods are summarized in Figure 3. Consistent with the results in Section 3.1, for the 10-fold cross-validation (Table 2), the SSA-BERT fusion feature with the five models using SMOTE displayed remarkably higher values than the models without SMOTE except for the Sp value, and the BACC score was the same as the ACC when SMOTE was used. In particular, the best performance of the fusion feature was slightly superior to that of the BERT feature alone across four metrics, with an ACC of 0.934 (higher by 1.19%), MCC of 0.867 (higher by 1.90%), Sn of 0.971 (higher by 1.25%), and BACC of 0.934 (higher by 1.19%). However, the best performance of the fusion feature in the independent test across all six metrics (ACC = 0.876, MCC = 0.724, Sn = 0.857, Sp = 0.934, auROC = 0.919, BACC = 0.871) was in every respect lower than the corresponding scores of the BERT feature alone (ACC = 0.896, MCC = 0.793, Sn = 0.905, Sp = 0.951, auROC = 0.971, BACC = 0.897) with SMOTE (Figure 3 and Table 2). Thus, the feature fusion of SSA and BERT is not a beneficial choice for model optimization in automatic umami peptide prediction. (Table 2 notes: LGBM: light gradient boosting machine; "−" indicates without the SMOTE method and "+" with it.)

The Effect of Feature Selection

As described in Section 3.3, feature fusion was not superior to the BERT feature alone. In the training set, the sequence vector had 121 dimensions based on the SSA feature and 768 dimensions based on BERT; the combined fusion feature had 889 dimensions. Higher dimensionality implies a higher risk of information redundancy, which can result in model overfitting. Feature selection is a good way to solve this problem, as it removes redundant and indistinguishable features [38]. The LGBM feature selection method has been proved to be an effective approach and has been successfully applied to ML-based bio-sequence classification [38,50]. Here, we also used it to find the optimized feature space for the umami peptide prediction task. Table 3 presents the performance metrics of the individual and fused features created based on the five ML models (KNN, LR, SVM, RF, and LGBM) in conjunction with SMOTE. A visual illustration of the outcomes is shown in Figure 4. From the 10-fold cross-validation results (Figure 4 and Table 3), using feature selection, all the individual or fusion features based on the SVM algorithm outperformed the other four algorithms (KNN, LR, RF, and LGBM) across four metrics, namely ACC, MCC, Sp, and BACC. The best performance was observed for the BERT feature encoding alone based on the SVM algorithm with 139 dimensions, over all the other options (Table 3).
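The selection step itself can be sketched as follows (a minimal illustration under the same placeholder data assumption as above; SelectFromModel ranks columns by the fitted LGBM importances and keeps the top 139, matching the dimensionality reported in Table 3):

```python
import numpy as np
from lightgbm import LGBMClassifier
from sklearn.feature_selection import SelectFromModel

rng = np.random.default_rng(0)
X_train = rng.normal(size=(353, 768))      # placeholder 768-d BERT features
y_train = np.array([1] * 112 + [0] * 241)

# threshold=-inf disables the importance cutoff so that exactly the
# max_features highest-importance columns are retained.
selector = SelectFromModel(LGBMClassifier(n_estimators=200, random_state=0),
                           threshold=-np.inf, max_features=139)
selector.fit(X_train, y_train)

X_selected = selector.transform(X_train)
print(X_selected.shape)                    # -> (353, 139)
```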
Comparison of iUP-BERT with Existing Models

The efficacy and robustness of the iUP-BERT model in umami peptide identification was evaluated subsequently. Its predictive performance was compared with that of the existing methods, namely iUmami-SCM and UMPred-FRL. As shown in Table 4, in the cross-validation results iUP-BERT apparently outperformed iUmami-SCM and UMPred-FRL across ACC, MCC, Sn, auROC, and BACC. Regarding the independent test results, iUP-BERT produced remarkably better results in the five metrics than iUmami-SCM and UMPred-FRL: for ACC by 1.23-3.93%, for MCC by 5.31-13.99%, for Sn by 13.6-25.07%, for auROC by 1.52-3.90%, and for BACC by 4.30-8.86%. Taken together, the comparisons show that iUP-BERT, based on the BERT-SVM-SMOTE combination, is more effective, reliable, and stable than the existing methods for umami peptide prediction.

Feature Analysis Using Feature Projection and Decision Function

To visually explain the excellent performance of iUP-BERT, principal component analysis (PCA) and uniform manifold approximation and projection (UMAP) dimension reduction were used. First, the feature space vector optimized by feature selection, namely the 139-dimensional BERT features, was reduced to a 2-dimensional plane using the PCA and UMAP algorithms, respectively. As displayed in Figure 5, red dots represent umami peptides and blue dots represent non-umami peptides. Then, a decision function boundary was drawn, which could distinguish between positive and negative samples. As shown in Figure 5, the distribution of positive and negative samples is relatively concentrated in two areas: the positive samples mostly lie in the yellow areas, while the negative samples lie in the purple area. Additionally, we can see from Figure 5 that the SVM can distinguish most positive and negative samples, yet there are still some misclassified samples. Therefore, better feature extraction methods or more suitable machine learning methods will be needed in the future to better separate umami peptide sequences from non-umami peptide sequences.

Construction of the Web Server of iUP-BERT

To facilitate rapid and high-throughput screening of umami peptides and maximize the use of the iUP-BERT predictor, an open-access web server was established at https://www.aibiochem.net/servers/iUP-BERT/ (accessed on 23 September 2022). We hope that iUP-BERT will be a powerful tool that can be used to explore new umami peptides and to promote the food seasoning industry.
Conclusions

In this study, a novel machine learning prediction model, namely iUP-BERT, was developed for the accurate prediction of umami peptides based on the peptide sequence alone. A single deep representation learning feature encoding method (BERT) was adopted to generate predicted probabilistic scores of potential umami peptides. First, SMOTE was applied to balance the data. Then, feature extraction approaches (SSA, BERT, or the fused feature) were combined with five different algorithms (KNN, LR, SVM, RF, and LGBM) to build different models. After extensive testing and optimization, the BERT-SVM-SMOTE model was found to be the best combination, and further feature selection produced a robust model with a 139-dimensional feature set. To our knowledge, this is the first report on the utilization of the deep representation learning feature BERT in the computational identification of umami peptides. Subsequent 10-fold cross-validation and independent test results indicated the efficacy and robustness of iUP-BERT in predicting umami peptides. In comparison with the existing methods (iUmami-SCM and UMPred-FRL) on the independent test, iUP-BERT with the BERT feature extraction method alone significantly outperformed the existing predictors built on several manual feature extraction combinations: for ACC by 1.23-3.93%, for MCC by 5.31-13.99%, for Sn by 13.6-25.07%, for auROC by 1.52-3.90%, and for BACC by 4.30-8.86%.
Finally, to maximize the use of the predictor, an open-access iUP-BERT web server was built at https://www.aibiochem.net/servers/iUP-BERT/ (accessed on 23 September 2022). For deep learning-based models, a larger training sample size improves the prediction performance. As the training dataset used here was relatively small (112 positive and 241 negative samples), future efforts could be devoted to constructing an optimized, larger dataset with higher numbers of identified umami and non-umami peptides for better model performance. Additionally, it may be possible to achieve a more accurate model by fine-tuning BERT for feature extraction. Finally, we hope that iUP-BERT will be a powerful tool for exploring new umami peptides to promote the umami seasoning industry.

Data Availability Statement: The data used to support the findings of this study can be made available by the corresponding author upon request.

Conflicts of Interest: The authors declare no conflict of interest.
The TESS Triple-9 Catalog II: a new set of 999 uniformly-vetted exoplanet candidates

The Transiting Exoplanet Survey Satellite (TESS) mission is providing the scientific community with millions of light curves of stars spread across the whole sky. Since 2018 the telescope has detected thousands of planet candidates that need to be meticulously scrutinized before being considered amenable targets for follow-up programs. We present the second catalog of the Planet Patrol citizen science project, containing 999 uniformly-vetted exoplanet candidates within the TESS ExoFOP archive. The catalog was produced by fully exploiting the power of the Citizen Science Planet Patrol project. We vetted TESS Objects of Interest (TOIs) based on the results of the Discovery And Vetting of Exoplanets (DAVE) pipeline. We also implemented the Automatic Disposition Generator, a custom procedure aimed at generating the final classification for each TOI that was vetted by at least three vetters. The majority of the candidates in our catalog, 752 TOIs, passed the vetting process and were labelled as planet candidates. We ruled out 142 candidates as false positives and flagged 105 as potential false positives. Our final dispositions and comments for all the planet candidates are provided as a publicly available supplementary table.

Furthermore, TESS also acquired a series of Full Frame Images (FFIs) at 10 and 30 minute cadences, with the goal of expanding the transit search to the entire sky; since September 2022 (the mission's second extension), FFIs have been acquired at a cadence of 200 s. At the time of writing, TESS has detected almost 6,000 TESS Objects of Interest (TOIs), while ∼ 10,000 are expected to be found in the FFIs within the primary mission duration (Barclay et al. 2018). According to the ExoFOP-TESS archive, as of December 2022, 277 candidates out of the currently-known 5,887 TOIs have been validated by follow-up measurements.

Detecting a transit-like signal in the light curve of a distant star is not sufficient to confirm the discovery of an exoplanet. Several astrophysical sources (e.g., eclipsing binary stars, stellar spots and/or pulsations; Ciardi et al. 2018) or instrumental artefacts (e.g., jitter noise and momentum dumps) can mimic a transit-like signal in the light curve of the observed target, leading to a false positive detection. In light of the many potential false positive scenarios that affect the photometric transit method, a planet candidate has to be carefully examined before being promoted as a suitable target for spectroscopic follow-up observations aimed at its confirmation as a bona-fide planet. Precision radial velocity (PRV; Baranne et al. 1996; Pepe et al. 2004) measurements are challenging, time-consuming, and achievable by only a handful of instruments.

The vetting procedure is one of the key steps in the process of confirming the planetary origin of a transit feature found in the light curve of a star. A catalog of uniformly-vetted transiting planet candidates is essential to optimize spectroscopic follow-up observations by promoting targets for which common false positive scenarios have already been ruled out. Moreover, the vetting procedure enables statistical validation of planet candidates for which no PRV measurements are feasible. Finally, complementary human vetting also provides the opportunity to create a knowledge base for machine learning approaches aiming to automate the entire vetting process.
Several automated vetting pipelines have been developed over the years to tackle the issue of false positives in transit photometry. For example, the AUTOVETTER (McCauliff et al. 2015), ROBOVETTER (Coughlin et al. 2016) and SIDRA (Mislis et al. 2016) pipelines are decision-tree based machine learning codes trained on massive human-inspected data sets to produce uniformly-vetted catalogs of planet candidates discovered by the Kepler mission. Deep learning algorithms have been trained to identify planet candidates in both Kepler and TESS light curves; these work either as likelihood-based rankers (Shallue & Vanderburg 2018) or as binary classifiers (e.g. Olmschenk et al. 2021). Given the innovative and high-performance approach provided by these models, vetting efforts have shifted towards deep learning (DL) methods. Despite the fact that DL models usually outperform traditional machine learning methods, they come with certain drawbacks. Most notably, DL models are computationally expensive and the results they produce are sometimes difficult to interpret (Samek et al. 2017). Apart from models based on neural networks, pipelines such as VESPA (Morton 2012) and TRICERATOPS (Giacalone et al. 2021) evaluate the Bayesian probability that a signal is a false positive based on the shape of the light curve as well as the stellar parameters of the nearby sources within the aperture mask used to extract the light curve. Furthermore, once a certain false positive threshold value is set, these algorithms allow one to statistically validate a signal as a true planet. The Discovery And Vetting of Exoplanets (DAVE; Kostov et al. 2019a) vetting pipeline determines whether a transit-like signal is caused by a planet candidate or is a false positive by testing the candidate at both the pixel and light curve levels. Building upon methods used for vetting exoplanet candidates from the Kepler mission, DAVE was designed to analyze transit photometry from the K2 mission, and later modified to work with TESS light curves as well (Kostov et al. 2019b; Gilbert et al. 2020).

It is important to note that none of these pipelines can completely replace visual human inspection. Automatic pipelines, for example, can fail to correctly classify signals with a low signal-to-noise ratio (SNR) (e.g., small planets with long periods), signals dominated by stellar variability, or signals plagued by various systematic effects and instrumental artefacts. Furthermore, different planet search and/or vetting pipelines use different methods to extract and process the raw data. For example, Kostov et al. (2019a) demonstrated that nearly one in every three K2 planet candidates has insufficient SNR across all available light curve sets to provide a reliable classification (i.e., planet candidate or false positive). Hence, all automated vetting pipelines come with inherent data-processing and data-analyzing biases and peculiarities, making complementary human inspection not only recommended but essential.

Traditionally, complementary human vetting is done by a small group of professional astronomers. However, the ever-increasing number of exoplanet candidates in need of careful examination makes this approach impractical. Vetting hundreds of targets by a handful of scientists may take months; unforeseen biases may emerge unless a clear workflow is defined within the team at the start of the work, and strictly adhered to (e.g. Thompson et al.
2018), and an intuitive, interactive, user-friendly vetting platform is used by all vetters.

Citizen science is a powerful way of doing science that is becoming increasingly popular due to new collaboration tools. It offers the opportunity to address the human vetting bottleneck by harnessing the expertise and enthusiasm of amateur astronomers. For example, projects like Planet Patrol (Kostov et al. 2022), Planet Hunters TESS (PHT) (Eisner et al. 2020), Exoplanet Explorers (Christiansen et al. 2018) and Disk Detective (Kuchner et al. 2016), hosted by the Zooniverse platform (Lintott et al. 2008), helped scientists achieve, in a few weeks, results that would have otherwise taken years to complete.

Planet Patrol is a citizen science project designed to assist with the vetting workflow of TESS planet candidates based on the automated results and dispositions produced by the DAVE pipeline (Kostov et al. 2022). After the first stage of the project was completed on Zooniverse, several citizen scientists expressed interest in continuing to assist the scientific core team with the vetting efforts. Under the guidance of members from our core science team, these "superuser" volunteers were trained to classify TESS planet candidates by critically interpreting and analyzing the entire output from DAVE. The superusers became an integral part of the team and played an essential role in our first TESS Triple 9 Catalog (Cacciapuoti et al. 2022, Paper I hereafter), where they assisted with the vetting of 999 TOIs, classifying 709 of them as planet candidates.

In this work we present the continuation of our vetting efforts, in the form of a catalog of 999 uniformly-vetted TESS planet candidates detected by the Science Processing Operations Center (SPOC, Jenkins et al. 2016) and Quick Look Pipeline (QLP, Huang et al. 2020) pipelines. We utilize the same workflow as in Paper I and introduce several new vetting tools and diagnostics. The outline of the paper is the following: in Section 2 we discuss the workflow adopted to uniformly vet 999 candidates within the TESS database, including the new implementations with respect to Paper I. In Section 3 we highlight the details of the Planet Patrol project and how it helped in carrying out this work. The catalog and its details are discussed in Section 4. Finally, we summarize our conclusions in Section 6.

METHOD

We have conducted a uniform vetting of 999 TOIs by means of the DAVE pipeline. DAVE utilizes a two-step vetting process for each TESS sector where the target has been observed, namely a pixel-level photocenter analysis and a flux-based analysis at the light curve level. The pipeline vets both the SPOC short-cadence and the FFI long-cadence TESS data, using the "Corrected Flux" eleanor light curves (Feinstein et al. 2019) for the latter as-is, i.e. without further detrending or post-processing. DAVE uses the target's TIC ID, transit ephemeris, depth and duration as provided by the publicly-available ExoFOP website. For completeness, we outline the main products of DAVE below; for further details we refer the reader to Kostov et al. (2019a).

(i) The centroids module generates a difference image by subtracting the overall in-transit image from the corresponding out-of-transit image for each transit and for each sector. Then, for each transit, the code calculates the photocenter of the light distribution by fitting to the difference image the TESS Pixel Response Function (PRF) and a Gaussian point-spread function (PSF).
Finally, the overall position of the photocenter for a particular sector is computed by taking the average over all the transit events detected in that sector. We note that the centroid difference images created by DAVE can be difficult to interpret when the SNR is low or there are significant artefacts. In such cases the centroid measurements can be unreliable (flagged as "UC", for "Unreliable Centroid", in our catalog) and the corresponding automated photocenter disposition might be incorrect. For example, if some of the individual difference images exhibit prominent systematics, the calculated average photocenter position may be affected to the point of DAVE flagging the candidate as a false positive due to a nominal centroid offset. Thus it is important for a human vetter to inspect the individual difference images, the corresponding photocenter measurements, and the average difference image, and to evaluate the reliability of the automated photocenter dispositions provided by DAVE. The vetter is trained to distinguish between valid difference images and photocenter measurements, and those that could be affected by instrumental and/or computational systematics. The vetter ignores the poor measurements and makes a final decision based on the reliable photocenters. For example, if there is a clear Centroid Offset (flagged as "CO") with respect to the catalog position of the target star, and there are no obvious systematics that might affect the measurements, then the candidate is flagged as a False Positive (FP) (see Fig. 1).

(ii) The Modelshift module uses the phase-folded light curve along with the best-fit trapezoid transit model to evaluate the significance of the primary signal together with any secondary and tertiary signals, as well as of any potential Odd-Even Difference (OED) between consecutive transits. This module determines whether the source of the signal is consistent with an eclipsing binary system instead of a transiting planet. For example, if there is a significant secondary eclipse at any phase other than zero, or an OED, DAVE flags the target as a false positive (see Fig. 2). We note that since DAVE uses the "Corrected Flux" eleanor light curves without further processing, highly variable stars that were observed in long cadence only can trick the pipeline by mimicking an OED. Figure 3 shows an example of this for the case of TIC 294179385, which is considered a false positive by DAVE because of the nominal OED but was labelled as a Planetary Candidate (PC) after human inspection. Thus the human vetter has to inspect the output of the Modelshift module and decide whether the detected features are genuine, also paying close attention to (i) the shape of the signal (whether it is U- or V-shaped); (ii) the depth of the primary signal (with respect to the stellar radius as provided by ExoFOP); (iii) the depth of the secondary signal (with respect to the typical expected depth of a planet occultation, on the order of a few hundred parts-per-million); and (iv) the overall shape and amplitude of the light curve variability both in- and out-of-transit.

Figure 2. Output of DAVE's Modelshift module; the lower panels show zoom-ins on the primary and secondary events, the odd and even primary events, along with any tertiary or positive events. The uppermost table displays the statistical significance of the aforementioned features, red-colored if the pipeline flags an issue as significant. The Modelshift shows a prominent V-shaped primary and a more than 6σ significant odd-even difference. Hence, we rule out this TIC as a false positive.

Figure 3. The light curve of TIC 294179385 (left panel) and the corresponding Modelshift output (right panel). Unlike the example shown in Fig. 4, this target is not suspicious of a BEER scenario because the modulation period is different from the orbital period, and the variability is likely caused by starspots. The aperture mask used for the light curve extraction includes a number of field stars that are bright enough to produce the modulation signal; one of these stars, TIC 294179389, is brighter than the target itself. Hence, the observed light curve modulation can be produced by a nearby field star. Importantly, the prominent stellar variability tricks the Modelshift (right panel) module into flagging the target as a false positive due to a nominal OED. The human vetters inspect the light curve, note the position of the transits with respect to the light curve modulations, and compare the out-of-transit baseline levels of the "Odd" and "Even" panels. After a comprehensive group discussion, we overrule the Modelshift OED disposition and mark the target as a genuine planet candidate.

(iii) The variability is evaluated by the human vetter together with DAVE's Lomb-Scargle (LS) analysis (Lomb 1976; Scargle 1982) of the transit-masked light curve. This submodule provides quantitative and qualitative criteria to evaluate the presence of possible light curve modulations (LCMOD) due to intrinsic and/or rotational variability. If the detected modulations have the same (or half or double) period as the detected transit-like signal, they could be the result of gravitational (beaming effect and tidal ellipsoidal distortion) and/or atmospheric (reflected light and thermal emission) effects in a close binary star system (Morris & Naftilan 1993; Faigler & Mazeh 2011; Shporer 2017). This particular scenario is usually referred to as a BEaming, Ellipsoidal, Reflection (BEER) binary. Whenever we found any suspicious modulations strictly related to the detected orbital period we flagged the target using comments such as ellipsoidal variations ("EV") or synchronous variations ("synch"). An example of a synchronous scenario is shown in Fig. 4.
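A minimal sketch of such a periodogram check (an illustration only, not DAVE's implementation: time and flux are synthetic stand-ins for a transit-masked light curve, and P_orb plays the role of the ExoFOP ephemeris period):

```python
import numpy as np
from astropy.timeseries import LombScargle

# Synthetic transit-masked light curve with a sinusoidal modulation at P_orb.
rng = np.random.default_rng(1)
time = np.sort(rng.uniform(0.0, 27.0, 5000))           # one TESS sector [days]
P_orb = 3.5                                            # candidate period [days]
flux = (1.0 + 2e-3 * np.sin(2 * np.pi * time / P_orb)
        + 5e-4 * rng.normal(size=time.size))

freq, power = LombScargle(time, flux).autopower(maximum_frequency=10.0)
P_peak = 1.0 / freq[np.argmax(power)]

# Flag a possible BEER scenario when the strongest modulation sits at the
# orbital period or at half/double that value.
for ratio in (1.0, 0.5, 2.0):
    if abs(P_peak - ratio * P_orb) < 0.05 * P_orb:
        print(f"'EV/synch' flag: peak at {P_peak:.3f} d vs P_orb = {P_orb} d")
```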
Overall, while DAVE produces automated dispositions for each target, we mandate complementary human supervision for all targets due to the likelihood of systematics that can affect the pipeline's classification. Importantly, our human vetters can override DAVE's disposition and ultimately have the final word: any target in our catalog that exhibits potential signs of concern has been subjected to rigorous group discussions.

Aside from the vetting dispositions, for each TOI we keep track of any noteworthy features using pre-defined acronyms and free-text comments as described in Table 1. As described below, we also updated the workflow presented in Paper I by introducing new diagnostic tests that are useful for the most challenging cases.

Table 1. Vetting acronyms used in this work. Each comment can be preceded by a "p", which stands for "potential" and is used when the vetter is not fully convinced of that specific flag.

CO (Centroid Offset): the centroids module shows a statistical offset of the photocenter. It indicates that the target star is not the source of the investigated signal.
UC (Unreliable Centroids): the centroids module is not reliable because the difference images are too noisy. This is mainly caused by stray light, bright field stars or very weak signals.
OED (Odd-Even Difference): the Modelshift shows a statistically significant difference between odd and even eclipses. It usually indicates an eclipsing binary star.
Vshape (V-shaped): the Modelshift highlights that the shape of the transit is V-like and not U-shaped as expected from a typical planetary transit, which might indicate an eclipsing binary star. Indeed, the transit of a planet produces a sharp ingress, a flat bottom, and a sharp egress, whereas an eclipsing star mostly produces gradual ingress and egress because the two objects have comparable sizes.
LCMOD (Light Curve MODulation): both the Modelshift and the Lomb-Scargle periodogram indicate oscillations in the starlight due to intrinsic and/or rotational variability that are not synchronized with the orbital period. These can be produced either by the target itself or by a nearby field star that falls in the aperture used to extract the light curve. Such light curves are generally not indicative of a potential false positive.
BEER (BEaming Ellipsoidal Reflection binary system): a close binary star system whose gravitational and atmospheric interactions cause periodic modulations of the light curve.
EV/synch (Ellipsoidal Variations/synchronous): the Lomb-Scargle periodogram highlights LCMOD with the same (or half or double) period of the detected transit. This might indicate a BEER scenario, thus a false positive.
FSCP (Field Star in Central Pixel): there is at least one unresolved source within the same pixel as the target (i.e. < 21″) that is bright enough to contaminate the detected signal. In the worst case, this source might be the true source of the signal.
FSOP (Field Star in Other Pixel): there is at least one resolved source within the aperture mask used to extract the observed light curve that is bright enough to contaminate the detected signal. In the worst case, this source might be the true source of the signal; if so, we rule out the target as a FP due to a CO.
TD (Too Deep): the transit is so deep (≳ 2.5−3%) that it might be the result of an eclipsing binary system.
NT (No Transit): the eleanor light curve does not show any transit-like signals for QLP-detected TOIs.
SS (Significant Secondary): the Modelshift shows a statistically significant secondary signal. A secondary eclipse is typical of an eclipsing binary star; in this case the SS is located at half phase.
LOWSNR (Low Signal to Noise Ratio): the signal-to-noise ratio of the expected transits is too low for a reliable inspection.
HPMS (High Proper Motion Star): the star exhibits a high proper motion according to the SIMBAD archive.
AT (Additional Transits): the Modelshift shows additional transits in the phase curve. These could be caused by other planets within the system not yet detected.

Ancillary information

In many cases a target's light curve or target pixel files are affected by prominent systematics and/or the detected transits have a low SNR compared to the baseline variability. This complicates the vetting procedure and can even make it unreliable altogether. To address this issue and confirm or dispel any concern, we use additional information beyond that provided by DAVE. For instance, the vetter manually checks whether the aperture mask used to extract the light curve includes nearby field stars that are bright enough to contaminate the inspected signal. Below we briefly discuss new diagnostics that have been used in this work and that we are currently implementing in DAVE, to provide the vetters with a self-consistent tool without asking them to manually seek this ancillary information.

Unresolved sources

TESS has a large pixel scale, about 21″/pixel, with a focus-limited PSF. Hence, the flux measured in a single pixel might be contaminated by nearby background or foreground field sources. Based on our experience with DAVE and TESS data, and depending on the particular target and sector, measuring a reliable photocenter offset of ∼ 5−10 arcsec (∼ 0.25−0.5 pixels) is relatively straightforward. In contrast, a bona-fide offset of ∼ 1−2 arcsec (∼ 0.05−0.1 pixels) is extremely challenging to measure. Thus even if the photocenter module of DAVE does not measure a significant CO, there might still be sufficiently bright field sources that contaminate the target's light curve and/or are too close to the target to reliably rule out as the potential source of the detected transit-like signals. In the former case, the additional light dilutes the transits, resulting in an underestimated planet radius (Ciardi et al. 2015). To account for this effect, we consult stellar catalogs (e.g., SIMBAD; Wenger et al. 2000, GAIA EDR3; Gaia Collaboration et al. 2021) to check whether known sources fall within the immediate vicinity of the target.

Figure 1. The dashed white contour is the aperture mask used to extract the light curve, the star symbol represents the catalog position of the target, the purple triangle is the measured average out-of-transit photocentre, the small red dots represent the positions of the individual photocentres and the large red circle represents the measured overall difference image photocentre. Upper left: the difference image; upper right: the average out-of-transit image; lower left: the average in-transit image; lower right: signal-to-noise ratio of the mean difference image. The color bar indicates the number of electrons/sec in each of the aforementioned cases. The difference image clearly shows a centroid offset and no artefacts. Hence, we rule out this TIC as a false positive due to a clear centroid offset.
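To make the photocenter test concrete, here is a minimal sketch (a toy NumPy illustration, not DAVE's actual implementation, which fits the TESS PRF and a Gaussian PSF): the mean in-transit cutout is subtracted from the mean out-of-transit cutout, and the photocenter is estimated from the flux-weighted centroid of the difference image.

```python
import numpy as np
from scipy.ndimage import center_of_mass

def difference_image_centroid(cutouts, in_transit):
    """cutouts: (n_cadences, ny, nx) pixel stamps; in_transit: boolean mask."""
    out_img = cutouts[~in_transit].mean(axis=0)   # average out-of-transit image
    in_img = cutouts[in_transit].mean(axis=0)     # average in-transit image
    diff = out_img - in_img                       # eclipsed source shows up bright
    row, col = center_of_mass(np.clip(diff, 0.0, None))
    return diff, (row, col)

# Toy stamp: the "transit" happens on pixel (3, 7), i.e. off-target.
rng = np.random.default_rng(2)
cutouts = rng.normal(100.0, 1.0, size=(500, 11, 11))
in_transit = np.zeros(500, dtype=bool)
in_transit[200:260] = True
cutouts[in_transit, 3, 7] -= 20.0                 # flux drop on a nearby pixel

diff, (row, col) = difference_image_centroid(cutouts, in_transit)
print(f"Difference-image photocenter: row = {row:.2f}, col = {col:.2f}")  # ~ (3, 7)
```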
Based on the transit depth (δ) and the magnitude difference between the target and resolved nearby field stars, the vetter would then investigate whether these alleged sources could have produced the observed transit signal. For a given target of magnitude m0, we considered a threshold Δm = −2.5 log10 δ. Thus only sources with a magnitude m⋆ such that |m⋆ − m0| < Δm could produce a signal with the same depth as the one observed. The scientific core team provides the vetters with the Δm for each target. If some unresolved star falls within the same pixel as the target, the vetter adds the comment "FSCP" (Field Stars in Central Pixel). For completeness, the vetter will also flag "FSOP" (Field Stars in Other Pixel) if a bright enough source falls within the aperture mask. This is done for the sake of completeness, but it is not a sufficient reason to rule out the target as a false positive.

This check is time-consuming and does not need the critical faculties provided by human inspection. In the future we plan to provide the vetters with a simple tool that, by performing a GAIA DR3 query, returns all the stars within 5 pixels of the target. It will also mark those sources within the same TESS pixel (< 21″) and colour each one according to their GAIA DR2 magnitude. The pipeline will then automatically flag any source inside and outside the target's pixel that is bright enough to cause the observed dips in the light curve.

The background flux

TESS is in a stable, highly elliptical high-Earth orbit in a 2:1 resonance with the Moon. This orbital path ensures maximum sky coverage while minimizing the number of obstructions during data acquisition (Gangestad et al. 2013). However, this orbital path produces strong contamination in the TESS FFIs, mainly from zodiacal light and scattered light from solar system objects (Sullivan et al. 2015). Hence, the background flux of TESS FFIs varies over the course of the ∼ 27-day observational window. To account for this, we inspect a 4-day long section of the background flux centered on the time of the transit. This helps the vetter determine whether the transit signal seen in the light curve coincides with any background events. In fact, if there is a sudden change in the background flux at or near the time of the transit, it may introduce spurious signals into the light curve, mimicking or distorting the transit. Thus for each detected transit, we check both the light curve and the background flux in the vicinity of the transit time. If unusual features and/or discontinuities appear in the background during a particular transit, the vetter will flag it as a potential issue.
A clear example of a false positive signal due to systematics in the flux background is shown in Fig. 5.

Figure 5. The light curve and background flux around the first (left) and second (right) transit. The first detected transit has low SNR and the background flux does not exhibit obvious discontinuities. In contrast, the second transit is much better defined, but the background flux shows a sudden spike at the time of the transit. Hence we conclude that this exoplanet candidate is a potential false positive caused by background systematics.

Pixel Level light curve

Inspired by the LATTE pipeline developed within the Planet Hunters TESS project, we decided to include in our workflow a Pixel Level light curve (PLL) analysis. The PLL plot shows the light curve for each individual pixel of the corresponding target pixel file. For further information we refer the reader to Eisner (2022). We inspect the light curve for each pixel in the field of view, and try to determine whether the transit occurs in the vicinity of the target or originates from another pixel that hosts another star, yet missed by DAVE's photocenter analysis. This additional layer of scrutiny has proven to be very useful in cases where DAVE's photocenter measurements were unreliable or difficult to interpret. For example, in some cases the scatter in the individual photocenter measurements can be so large that it is practically impossible to distinguish between reliable and spurious measurements. This usually occurs when the individual difference images exhibit a complex pattern (or simply look like random noise) instead of a single bright spot superimposed on a uniform dark background. This is often due to low SNR transits caused by either (i) the presence of nearby field stars that are much brighter than the target itself (and/or are highly variable); or (ii) the true source of the signal being next to a much brighter target star. In these cases, the PLL analysis helps determine whether some of the detected transits are affected by systematic effects and/or artefacts, and ideally pinpoint the source of the signal. Figure 6 shows an example of such a situation, highlighting how DAVE's measured photocenters for TIC 256886630 are unreliable due to the poor quality of many of the individual difference images (scenario (ii) above). Here, the PLL analysis immediately reveals that the true (and faint) source of the observed signal is near the edge of the aperture mask (such that some of its signal does enter the aperture) whereas the (much brighter) target shows no transit-like signal. As a result, this target has been ruled out as a false positive due to CO.

Figure 6. The upper left panels show difference images and the corresponding centroid measurements for 9 transits detected in the TESS light curve observed at 2-min cadence in Sector 15. Most of the difference images show a complex pattern instead of a single bright spot on an otherwise dark background. The corresponding photocenter measurements alternate between two distinct locations, one near the target star and another a few pixels above it. This makes interpreting the results from the photocenter module highly challenging. The PLL analysis on the right shows the first detected transit at 1713.20 TJD. Clear eclipses are seen in several pixels away from the target, near the upper edge of the aperture mask (red contour). We see the same pattern for all the transits detected within sectors 15 and 16 where this TIC has been observed. This candidate is thus ruled out as a false positive because of CO.
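A PLL-style check can be sketched with the lightkurve package (an assumption on our side — LATTE and our workflow have their own implementations; the TIC and sector follow the Figure 6 example): every pixel of the target pixel file becomes its own light curve, so off-target eclipses stand out immediately.

```python
import numpy as np
import matplotlib.pyplot as plt
import lightkurve as lk

# 2-min cadence pixel data for the Figure 6 example (TIC 256886630, Sector 15).
tpf = lk.search_targetpixelfile("TIC 256886630", sector=15, author="SPOC").download()
time = tpf.time.value
flux = tpf.flux.value                     # shape: (n_cadences, n_rows, n_cols)

n_rows, n_cols = flux.shape[1:]
fig, axes = plt.subplots(n_rows, n_cols, sharex=True, squeeze=False,
                         figsize=(1.5 * n_cols, 1.5 * n_rows))
for r in range(n_rows):
    for c in range(n_cols):
        pixel_lc = flux[:, r, c] / np.nanmedian(flux[:, r, c])  # per-pixel flux
        ax = axes[n_rows - 1 - r][c]      # match the on-sky pixel orientation
        ax.plot(time, pixel_lc, "k.", ms=1)
        ax.set_axis_off()
plt.show()
```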
Dispositions and comments

According to the workflow described above, each of the 999 TOIs presented here was thoroughly examined by at least three vetters, including at least one member of the core science team. The purpose of this workflow is twofold: to distribute the total workload over a large group of people, saving significant time, and to reduce the human bias that unavoidably affects inspection. Each vetter provides their evaluation (or disposition) of the TOI under scrutiny, according to the following prescriptions:

(i) if the TOI shows no anomalies at both the flux and the pixel level then the signal is ranked as a Planetary Candidate, PC. We also classify the target as PC by default if any of the following cases are met: (a) the light curve has low SNR resulting in a very shallow dip and there are no indications of a centroid offset; (b) the photocenter analysis generates unreliable centroids (UC) and the light curve does not show any obvious systematics; (c) the phase-folded sector-by-sector light curve shows no apparent transit signal (NT) and there are no known nearby sources bright enough to produce the transit depth. We note that an NT flag is not unexpected since DAVE analyses individual sectors instead of multi-sector data. As a result, low SNR and/or long-period candidates may not have sufficient per-sector SNR for DAVE's tests.

(ii) if the TOI does not pass the vetting procedure then the signal is ranked as a False Positive, FP. A significant centroid offset (CO) represents one of the strongest clues for a FP scenario. A target is also classified as a FP when the phase-folded light curve exhibits a clear secondary eclipse (SS) or a significant OED. The latter is one of the most challenging features to distinguish as it is highly dependent on a quiet light curve;

(iii) if DAVE generates a few red flags for a TOI but there are no clear indications of a false positive scenario, then the signal is ranked as a probable False Positive, pFP. For example, a pFP may arise when the TESS light curve has a low SNR and at the same time we notice a potential secondary eclipse and/or the photocenter position seems to be slightly shifted towards a nearby field star. Long-period candidates are particularly difficult to analyze since the number of per-sector transits is small, and the measured photocenters might not be sufficient for a statistically-significant evaluation. Often, there are only one or two photocenter measurements. In cases like these, we flag the candidate as a pFP instead of FP even if the photocenter analysis indicates an offset.
Automatic Disposition Generator

In addition to the analysis described above, we also followed an additional procedure, which we named the Automatic Disposition Generator (ADG), to automatically generate dispositions for TOIs based on the rankings of our vetters. For each TOI, we require dispositions from at least three vetters; the final disposition is determined by taking a weighted average of all vetters' dispositions. A critical step is to provide the ADG with a reliability indicator for each vetter via a user score w ∈ [0, 1], to account for varying levels of expertise within our team. As the volunteers who contributed to this work are the same as those who contributed to the Paper I catalog, we used the latter's results to quantify the reliability of each vetter as follows. For each vetter, we constructed their own confusion matrix, as shown in Table 2, using the final group dispositions of Paper I as our knowledge base. In Paper I, the true PCs accounted for ∼ 71% of the total catalog over a total of N_TOT = 999 targets. To account for the unbalanced nature of the knowledge base sample, we used the weighted average precision as the metric to assess each vetter's level of reliability. Assume the i-th vetter ranked a certain number of targets in Paper I, obtaining TP^(i) correctly identified PCs, TN^(i) correctly identified FPs, FP^(i) incorrectly identified PCs and FN^(i) incorrectly classified FPs; then their score is given by

w_i = [N_PC · TP^(i)/(TP^(i) + FP^(i)) + N_FP · TN^(i)/(TN^(i) + FN^(i))] / N_TOT,

where N_PC = 709 is the number of PCs in the catalog of Paper I while N_FP = 290 represents the number of both FPs and pFPs within the same catalog. Certainly, not all vetters have given the same number of dispositions, which may result in a non-uniform efficiency computation, but we ignore this as a first-order approximation.

To calculate the weighted average of the overall disposition, we first convert labels into vectors, using the convention PC → (1, 0, 0), pFP → (0, 1, 0) and FP → (0, 0, 1). Hence, we define the overall disposition as the vector d⃗ determined by the average of the given dispositions weighted over the fidelity of the vetters,

d⃗ = (1/W) Σ_ℓ w_ℓ d⃗_ℓ,

where n_PC, n_pFP and n_FP are the numbers of vetters who voted for the PC, pFP and FP scenario respectively, while W ≡ Σ_{ℓ=1}^{n} w_ℓ with n = n_PC + n_pFP + n_FP. The final Paper Disposition is then given by the class corresponding to the largest component of d⃗.

In Table 3 we report the scores of each superuser who contributed to this work. The ADG not only drastically reduces the time required to generate a uniformly vetted catalog but also allows for the reduction of human bias via a rigorous scientific approach. In this regard the ADG captures the ultimate essence of a citizen science project.
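A minimal sketch of the ADG bookkeeping (hypothetical helper names, and the label-vector convention follows the reconstruction given above):

```python
import numpy as np

# Unit-vector encoding of the three possible dispositions.
LABELS = {"PC": np.array([1.0, 0.0, 0.0]),
          "pFP": np.array([0.0, 1.0, 0.0]),
          "FP": np.array([0.0, 0.0, 1.0])}

def vetter_score(tp, tn, fp, fn, n_pc=709, n_fp=290):
    """Weighted average precision over the Paper I knowledge base (999 TOIs)."""
    precision_pc = tp / (tp + fp)      # precision on planet candidates
    precision_fp = tn / (tn + fn)      # precision on (potential) false positives
    return (n_pc * precision_pc + n_fp * precision_fp) / (n_pc + n_fp)

def adg_disposition(dispositions, weights):
    """dispositions: list of 'PC'/'pFP'/'FP'; weights: matching vetter scores."""
    w = np.asarray(weights, dtype=float)
    d = sum(wi * LABELS[di] for wi, di in zip(w, dispositions)) / w.sum()
    return ("PC", "pFP", "FP")[int(np.argmax(d))], d

# Three vetters with scores 0.9, 0.7 and 0.6 examine the same TOI.
label, d = adg_disposition(["PC", "PC", "pFP"], [0.9, 0.7, 0.6])
print(label, np.round(d, 2))           # PC [0.73 0.27 0.  ]
```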
Cox et al. (2015) estimated that, on average and across all Zooniverse projects, citizen scientists inspected volumes of data equivalent to 34 years of full-time work by a single expert. For example, volunteers discovered 41 new long-period planet candidates in the Kepler database (Wang et al. 2015) within the Planet Hunters project (Fischer et al. 2012). In three years, citizen scientists involved in the PHT project helped the scientific team to discover hundreds of new planet candidates (Eisner et al. 2022a) along with a large number of eclipsing binary systems (Eisner et al. 2021), including a hierarchical triple star system (Eisner et al. 2022b). Moreover, projects like the Visual Survey Group (Kristiansen et al. 2022) contributed to 69 peer-reviewed papers mainly focusing on exoplanets, multistellar systems and unusual variable stars.

PLANET PATROL

The Planet Patrol project was officially launched on the 29th of September 2020 on the Zooniverse platform. The first stage of the project was aimed at improving the reliability of DAVE's photocenter analysis by asking the trained users to evaluate the quality of the difference images generated by the centroids module. All users became acquainted with the workflow through brief vetting tutorials and F.A.Q.s, as well as numerous examples of false positives. More than 5,600 volunteers examined ∼ 400,000 difference images in just one month, achieving 95% accuracy using as a knowledge base 198 classifications given by the science core team. After removing the difference images flagged as poor by the volunteers from DAVE's analysis, the photocenter uncertainty decreased by up to ∼ 30% for the majority of the candidates (Kostov et al. 2022). After the completion of the first stage of the project (November 2020), many eager volunteers ("superusers") expressed an interest in getting further involved in the vetting work. The superusers played a fundamental role in creating our first TT9 catalog and repeated the feat by vetting the 999 TESS candidates and assisting the core science team in producing the catalog presented here.

Citizen scientists at work

The main key to the success of a citizen science project is having constant interaction between the science core team and the superusers. Hence, we hold live weekly meetings where we discuss the progress of the project and provide superusers the opportunity to discuss any difficulties they may have encountered throughout their task. Because our team is made of people from around the world, one of the superusers, HADL, recorded all meetings and posted them on a dedicated YouTube channel. These recordings (currently private) are useful for people not able to attend a specific meeting, and also provide a valuable resource for newcomers. Citizen science has taught us that volunteers can not only offer invaluable assistance in a specific scientific task but also, thanks to their diverse expertise, provide the scientific community with important new ideas, resources and tools. For example, as the Google Sheets we used to keep track of our vetting process grew in content and complexity, it became difficult to find, create, analyze, and distribute the user dispositions and comments. To address this issue, one of us (RS) developed a custom vetting portal, Exogram (https://exogram.vercel.app), specifically designed to streamline the vetting process and facilitate group discussions. Exogram is hosted by Vercel, the database and authentication are handled by Firebase, and parts of the backend logic are written in Python. Exogram's homepage provides a user-friendly and intuitive interface that highlights targets that still need to be vetted by the user. It also allows the user to update their dispositions, search for TICs with specific dispositions or comments, and keep track of the overall vetting progress by all users. The website directly links each target to the DAVE-generated PDFs containing the vetting results and diagnostics stored on Google Drive. When creating dispositions, Exogram limits the user to three disposition options: False Positive (FP), Planet Candidate (PC), and Potential False Positive (pFP).
"CO" for "Centroid Offset") and free text. In addition, Exogram allows the user to interactively inspect and manipulate the target's light curve. The website downloads all available QLP data on demand from MAST, displays the corresponding normalized flux, centroid motion, and background flux for one or multiple TICs, and highlights the recorded momentum dumps. This allows the vetter an additional layer of scrutiny beyond that provided by DAVE, a complementary comparison between light curves produced by two different pipelines (eleanor vs QLP), and enables the user to explore and examine in detail the light curves of nearby targets. Figure 7 represents one of the more interesting targets within our catalog for which the QLP's light curve is completely different from that generated by eleanor. THE CATALOG The 999 candidates analyzed in this work were drawn from the candidates provided by the ExoFOP TESS archive in the fall of 2020. They were selected by TIC number and do not overlap with our first TT9 catalog. Once each TOI had at least 3 dispositions, we ran ADG on the whole catalog. It generated 752 signals as PCs, 142 as FPs and 105 as pFPs. Thus, overall approximately one in three planet candidates is a false positive or a potential false positive, a rate similar to that of Paper I. The most common comments within our catalog are "FSCP" and "FSOP" which occurred 628 and 481 times respectively. This is expected, given that TESS targets are often contaminated by nearby background and/or foreground sources. We note that we only use these two flags as an extra layer of scrutinythey are not sufficient to mark a candidate as a false positive. Planet candidates Within our catalog 752 TOIs passed all DAVE tests and human inspections as planet candidates. In this sample there are 117 objects that have already been confirmed within the TESS scientific community or previously discovered by other exoplanetary surveys. Twelve of the 752 PCs can be regarded as bona-fide, high-quality candidates, as they passed the DAVE test showing a clear box-shaped transit and high-significance on-target centroid measurements. In Table 4 we summarize the main properties of these 12 likely genuine planets. None of these 12 candidates has been confirmed by follow-up observations yet. Apart from the "FSCP" comments that are quite spread all over the catalog, the most common comments for our PCs are "LowSNR" (270 times), "UC" (247 times), "LCMOD" (201 times) and "Vshape" (147 times). The first two comments are strongly correlated because DAVE often generates unreliable centroids for signals with low SNR, thus making the classification challenging. As already discussed, in these cases we automatically flag the target as PCs. The third most notable comment can either be caused by the inherent modulations of the targets under scrutiny or from the sources that contaminate the extracted light curve. Strong light curve modulations can also completely hide shallow transits that could be identified after careful detrending. Finally, the flag for V-shaped transit is not a conclusive evidence to support a false positive scenario. It only indicates that the two objects orbiting a common center of mass have comparable sizes. Although this happens more frequently for a binary star system, we can not rule out a giant planet transiting its host star with a non-zero impact parameter (e.g., Smalley et al. 2011). False positives Our analysis classified 142 candidates as FP. 
False positives

Our analysis classified 142 candidates as FP. Of these, we ruled out 118 targets as false positives due to a clear "CO". While nearly 40% of the false positives in Paper I were flagged as "CO", in this work the rate has increased to approximately 83%. The PLL analysis was likely essential for some targets that otherwise would have been flagged as PC because of poor centroid measurements. Furthermore, we believe that the observed increase in the "CO" flag rate is also due to the volunteers' skill improvement after two years of training. The second most frequent false positive indicator is the presence of a significant secondary eclipse ("SS", 33 targets), followed by the Odd-Even Difference ("OED", 34 targets). Both of these flags are often accompanied by a "Vshape" comment. All OED targets have been inspected for prominent modulations of the light curve.

We note that we vet all the TOIs presented here regardless of their current disposition on ExoFOP, as done in Paper I. In particular, in Paper I six confirmed planets were classified as FP due to a significant secondary eclipse at mid-transit. In this work, out of 142 targets, we labelled as FP TIC 427761355.01 and TIC 386259537.01, which have been confirmed as bona-fide planets by follow-up observations. The Modelshift of TIC 427761355.01 (or TOI-1518 b) shows a V-shaped transit with a SS exactly at half period. At this level of significance we cannot distinguish a secondary eclipse from a planetary occultation, thus for consistency with our workflow we flag it as a "FP". We also labelled WASP-169 b as a FP since its centroids module shows a clear and reliable offset of the light photocenter at the time of transit. After inspecting this target with the PLL, we discovered that there is a deeper transiting feature in a nearby pixel with the same period as WASP-169 b, which causes the overall centroid to shift.

Potential false positives

We labelled 105 TOIs in our catalog as pFPs. Our concerns and difficulties towards these targets are reflected in the most notable comment, "potential-CO" (61 times). The prefix "potential" qualitatively indicates that we are not fully convinced there is a significant photocenter offset, due to "LowSNR" (59 times) or prominent "LCMOD" (40 times) which complicated the centroid measurements. It often happens that among many unreliable centroid measurements there are a handful that show a hint of a CO. In these cases, we could not eliminate our concerns with the PLL analysis either. However, it did help us identify as pFP three targets which were previously classified as PC due to UC. The light curves of these targets usually do not show a clear transit ("LowSNR", 59 times), leading to 26 cases for which a potential secondary eclipse has been observed, as well as 20 cases where an OED might be statistically significant.

Individual targets of interest

One of the most intriguing and noteworthy planet candidates within our catalog is TIC 396720998.01, a sub-Jovian (R ≈ 0.35 R_J) object orbiting a white dwarf (R⋆ ≈ 0.15 R⊙, M⋆ ≈ 0.5 M⊙ and T⋆ ∼ 50,000 K), according to the TESS Input Catalog. It has been observed by TESS in sectors 3, 4, 5, 30, 31 and 32. We also found additional transit-like features (≈ 6000 ppm) that may suggest a multiplanet system around this hot white dwarf, as shown in Fig. 8. We flagged a V-shaped transit, potentially due to the small size of the host star (≈ 0.15 R⊙). This system could represent a perfect target to shed light on the evolution of a planetary system around Sun-like stars during the last stages of their evolution.

Figure 8. The light curve of planet candidate TIC 396720998.01 as observed by TESS in sector 3. The grey-shaded bar highlights the transit as detected by the SPOC pipeline. In addition, we noted a potential secondary feature at ≈ 1399 BTJD. We found a correspondence in the ExoFOP archive, which flagged this signal as the candidate TIC 396720998.02. Its reported orbital period is ≈ 777 days, which may be an upper limit due to the lack of observations between sectors 5 and 30.
Among the TOIs listed in our catalog, we also kept track of planet candidates orbiting within the so-called habitable zone of their host stars. For each star with known radius $R_*$ and mass $M_*$ we calculated the inner and outer edges of its habitable zone as defined by Kopparapu et al. (2013). For the inner edge we adopted the runaway-greenhouse limit, at which the oceans evaporate entirely, while the outer edge was calculated from the maximum greenhouse provided by a CO2 atmosphere (a code sketch of this calculation is given at the end of this section). We found two planet candidates that orbit within the habitable zone of their stars, TIC 271971130 and TIC 360156606.

TIC 271971130.01 is a planet candidate with $R_p \approx 1.6\,R_\oplus$ and $P \approx 19.3$ days detected by the SPOC pipeline. TESS observed the target in sectors 1-13, 27, 29-37, and 39 at cadences of 2, 10, and 30 minutes. This TOI is marked in our catalog as a LowSNR candidate; in some sectors it is quite challenging to see the transits. It is a faint star (TESS mag = 13.5) for which we also flagged "FSCP" and "FSOP". Hence, the light curve is contaminated by nearby fainter sources (< 15 TESS mag) within the aperture mask and the same pixel. As discussed in Sect. 2, in cases like this we consider the candidate as a PC by default. According to the TESS Input Catalog stellar parameters, the host star is a red M dwarf with $T_{\rm eff} \approx 3187$ K, $R_* \approx 0.22\,R_\odot$ and $M_* \approx 0.20\,M_\odot$. The candidate planet lies very close to the inner edge of the habitable zone of its star.

Figure 7. TIC 458856474.01 is a planet candidate orbiting its host star every 6.08 days. In the upper panel we show the light curves of TIC 458856474 observed by TESS in sector 37 as generated by eleanor (black) and QLP (red). The grey-shaded bars mark each transit within the sector. We also show the pre-processed SPOC light curve (green) for completeness. The light curve generated by eleanor is completely different from that of QLP; the latter is compatible with a prominent ≈ 1-day eclipsing binary plus a transiting object with a period of 6 days. In the lower panel, the PLL analysis for the first transit in sector 37 resolves the conflict: it clearly shows that the 1-day eclipsing-binary signal originates from a nearby pixel within the aperture mask used by QLP to extract its own custom light curve.

Table 4. List of the 12 most promising planet candidates in our work. For each TOI we report the TIC and TOI identifiers, the DAVE input parameters, the radius of the transiting object $R_p$, the stellar radius $R_*$ along with its TESS magnitude, and the final comments provided by the vetters.

Figure 8. The light curve of planet candidate TIC 396720998.01 as observed by TESS in sector 3. The grey-shaded bar highlights the transit as detected by the SPOC pipeline. In addition, we noted a potential secondary feature at ≈ 1399 BTJD. We found a correspondence in the ExoFOP archive, which flags this signal as the candidate TIC 396720998.02. Its reported orbital period is ≈ 777 days, which may be an upper limit due to the lack of observations between sectors 5 and 30.

TIC 360156606.01 is a planet candidate with $R_p \approx 9\,R_\oplus$ and $P \approx 27.36397$ days. Its host star has been observed by TESS in sectors 11 and 12 at cadences of 2 and 30 minutes, and in sector 38 at cadences of 20 seconds, 2 minutes, and 10 minutes. The planet candidate is marked in our catalog as a LowSNR signal. Its light curve shows prominent modulations which make the Modelshift analysis difficult. These modulations may originate from brighter sources that fall within the aperture mask. However, the detected transit is above the noise and quite clear. As discussed above, due to its long orbital period the photocenter test from DAVE is inconclusive. According to the TESS Input Catalog stellar parameters, the host star is a red M dwarf with $T_{\rm eff} \approx 3055$ K, $R_* \approx 0.446\,R_\odot$ and $M_* \approx 0.43\,M_\odot$. This candidate has recently been confirmed by Mann et al. (2022) as TOI-1227 b within the TESS Follow-up Observing Program Working Group.
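The habitable-zone edges used above follow the Kopparapu et al. (2013) parameterization, in which an effective stellar flux $S_{\rm eff}$ is a quartic polynomial in $T_{\rm eff} - 5780$ K and the edge distance scales as $\sqrt{(L/L_\odot)/S_{\rm eff}}$. Below is a minimal sketch of that calculation; the coefficients are transcribed from memory of the published erratum of that paper and should be verified against it before scientific use, and the example stellar parameters are those quoted above for TIC 271971130.

```python
# Sketch of the habitable-zone edge calculation following Kopparapu et al. (2013).
# Coefficients below are for the "runaway greenhouse" (inner) and
# "maximum greenhouse" (outer) limits; check them against the erratum before use.
import math

COEFFS = {
    # name: (S_eff_sun, a, b, c, d)
    "runaway_greenhouse": (1.0512, 1.3242e-4, 1.5418e-8, -7.9895e-12, -1.8328e-15),
    "maximum_greenhouse": (0.3438, 5.8942e-5, 1.6558e-9, -3.0045e-12, -5.2983e-16),
}

def hz_edge_au(teff_k: float, lum_lsun: float, limit: str) -> float:
    """Distance of a habitable-zone edge in AU for a star of given Teff and L."""
    s0, a, b, c, d = COEFFS[limit]
    t = teff_k - 5780.0  # offset from the solar value used in the fit
    s_eff = s0 + a*t + b*t**2 + c*t**3 + d*t**4
    return math.sqrt(lum_lsun / s_eff)

# Example: an M dwarf like TIC 271971130 (Teff ~ 3187 K, R* ~ 0.22 Rsun).
teff, r_star = 3187.0, 0.22
lum = (r_star**2) * (teff / 5772.0)**4  # L/Lsun from the Stefan-Boltzmann law
inner = hz_edge_au(teff, lum, "runaway_greenhouse")
outer = hz_edge_au(teff, lum, "maximum_greenhouse")
print(f"HZ: {inner:.3f} - {outer:.3f} AU")
```

For these stand-in parameters the inner edge lands near 0.07 AU, consistent with the statement that the candidate, at roughly 0.08 AU for a 19.3-day orbit, lies very close to the inner edge.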
Comparison to dispositions based on Machine Learning

As we mentioned in the Introduction, machine-learning-based pipelines are also effective in providing dispositions for a large sample of TOIs. To date, there are two main algorithms based on deep learning that have been explicitly tested on TESS data: the ASTRONET versions described in Yu et al. (2019) and in Tey et al. (2023), and Exominer (Valizadegan et al. 2022, see their Section 10). Yu et al. (2019) describe five different networks with different tasks, ranging from the "triage" model, which works only on light curves and removes false positive signals produced by instrumental artifacts, to vetting models that also take into account the centroid positions and additional information. Their best vetting model achieves an average precision of 69.3% and an accuracy of 97.8% (here "precision" indicates the number of true positives over all data with positive labels, and "accuracy" the number of true positives and true negatives over the total number of samples). Their algorithm has recently been improved by Tey et al. (2023), reaching a 99.6% recall at a precision of 75.7%. Exominer, on the other hand, makes use of the unique elements of the Kepler SOC/TESS SPOC data validation summary report in their original format. Exominer reaches an 88% precision at a recall value of 73% on TESS data. Following a similar approach to that of ASTRONET, Fiscale et al. (2021) and Fiscale et al. (2023, hereinafter F23) presented a deep-learning method to obtain dispositions from TESS data. Working only on the light curves, the model described in F23 achieves a precision of 87% at a recall value of 81%. Note that applying neural network models described in the literature to an arbitrary dataset is not straightforward: it requires additional work, even when the code is publicly available (as in the case of, e.g., ASTRONET), including performing the training from scratch (see, e.g., the discussion in Visser et al. 2022). Besides reaching such good performance, the F23 model has the additional advantage of having been developed within our research group and can therefore be immediately applied to the catalogs discussed in this work. We first tested the F23 network on the Paper I catalog, in order to use the algorithm described in Sect. 2.3 to estimate its score, which is 0.73, similar to or better than that of a third of the superusers. We can therefore compare the independent dispositions obtained by the neural network with the catalog obtained by exploiting citizen science, in order to check for consistency. We show this comparison in the form of the confusion matrix in Fig. 9, which compares the predictions from the neural network in F23 against our catalog's outcome taken as ground truth. Hence, true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN) are computed with respect to our dispositions. Specifically, TP and TN represent the fractions of TOIs classified by both our team and the network as PC and not PC, respectively. FP indicates the fraction of TOIs we labelled as not planet candidates (i.e., classified as FP or pFP) while the network predicts PC, and FN indicates the number of TOIs that are indicated as PC in our catalog but are not identified as such by the network.
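For concreteness, the quantities just defined can be computed as follows. This is an illustrative sketch with made-up label arrays standing in for the real dispositions, not the analysis code used in this work.

```python
# Sketch: confusion matrix and derived metrics, treating the catalog
# dispositions as ground truth (PC = positive class).
import numpy as np

catalog = np.array(["PC", "PC", "FP", "pFP", "PC", "FP"])  # our dispositions (stand-in)
network = np.array(["PC", "FP", "PC", "FP", "PC", "FP"])   # F23 predictions (stand-in)

truth = catalog == "PC"  # PC vs. not-PC (FP or pFP)
pred = network == "PC"

tp = np.sum(truth & pred)
tn = np.sum(~truth & ~pred)
fp = np.sum(~truth & pred)   # network says PC, catalog says not
fn = np.sum(truth & ~pred)   # catalog says PC, network misses it

precision = tp / (tp + fp)
recall = tp / (tp + fn)              # recall over all catalog PCs
accuracy = (tp + tn) / len(truth)    # correct calls over all samples
print(tp, tn, fp, fn, precision, recall, accuracy)
```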
Furthermore, over half of the TOIs mislabelled as not planets by the network are flagged as LowSNR targets in this work, with half of these targets not showing any visible transit. In these cases, we decided to be conservative and pass the signal as a candidate if there were no other issues. The network, however, is trained on datasets where similar objects are not labelled as PC, hence it cannot provide the same disposition as ours. Summarising, we find that machine-learning approaches are promising, but they still need to be complemented with the study of the available ancillary data (such as the photocenter position) in order to provide final dispositions and validation, especially in the case of low SNR light curves.

DISCUSSION

In Fig. 10 we show the distribution of the 999 TOIs within the $(P, R_p)$ plane. The figure highlights the high rate of false positives at short periods and large planetary radii, suggesting that the majority of false positive scenarios originate from close eclipsing binary systems. We also emphasize that our procedure automatically classifies all long-period candidates ($P > 50$ days) as PC, because these objects have insufficient per-sector photocenter measurements and are usually flagged as "LowSNR" candidates. We also applied a two-sample Kolmogorov-Smirnov test to the orbital period distributions of PCs and pFPs-FPs, obtaining a $p$-value less than 0.05. We repeated the test for the radius distributions of PCs and pFPs-FPs, obtaining the same result. This suggests that the two samples come from different distributions at the adopted level of confidence. These trends are in agreement with those obtained in Paper I; in particular, we did not find any statistical deviations in the $P$ and $R_p$ distributions between the same classes of the two catalogues. This was expected, since the methodology underlying both catalogues is practically the same.
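The test just described can be reproduced with scipy. The sketch below uses randomly generated stand-in arrays in place of the actual period lists, which are not reproduced here; only the test call itself reflects the analysis described above.

```python
# Sketch: two-sample Kolmogorov-Smirnov test on the orbital-period
# distributions of PCs vs. (p)FPs.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
periods_pc = rng.lognormal(mean=2.0, sigma=1.0, size=752)  # stand-in PC periods [days]
periods_fp = rng.lognormal(mean=1.0, sigma=0.8, size=247)  # stand-in (p)FP periods

stat, pvalue = stats.ks_2samp(periods_pc, periods_fp)
# p < 0.05 rejects the hypothesis of a common parent distribution.
print(f"KS statistic = {stat:.3f}, p-value = {pvalue:.2e}")
```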
Hence, we merged the two catalogues into one sample containing 1998 uniformly-vetted TESS candidates. We performed a statistical analysis of this sample taking into account the joint dependence of the planet occurrence rate on orbital period and planetary radius (Hsu et al. 2019). Hereafter, we will use (p)FPs when we refer to both the FPs and pFPs contained in the sample. In Figure 11 we show the difference between the occurrences of PCs and (p)FPs within the $(P, R_p)$ diagram. When considering the orbital period and the planetary radius at the same time, we observe that the (p)FPs still outnumber the PCs at short periods ($P \lesssim 4$ days), but the dependence on the planetary radius is more complex. In particular, for $P \lesssim 4$ days the PC-underdense cells form a triangular region that overlaps the so-called Hot Neptune Desert. Demographic studies revealed a scarcity of discovered exoplanets within this region (Szabó & Kiss 2011; Beaugé & Nesvorný 2013). Hence, our analysis suggests that most of the planet candidate signals falling within the Hot Neptune Desert are consistent with false positives. This is also consistent with the results of Magliano et al. (2022), who classified a sample of Hot Neptune candidates using the same methodology. In particular, in their sample of TESS candidates with $P \leq 4$ days and $0.27 \leq R_p \leq 0.44\,R_{\rm J}$, nearly 75% of the investigated candidates were flagged as (p)FP. The occurrence rates obtained here could also be used as priors for a Bayesian pipeline aimed at vetting a batch of TESS candidates.

CONCLUSIONS

We presented our second catalog of 999 uniformly-vetted transiting exoplanet candidates from TESS as part of the Planet Patrol citizen science project. We implemented new diagnostics within our workflow to help vetters scrutinize the more challenging cases. We also introduced a more precise way of obtaining a final group classification based on each vetter's reliability. We marked 752 TOIs as planet candidates, of which 117 are confirmed planets. We also identified 12 planet candidates which passed all the vetting diagnostics, placing them as high-priority targets to be confirmed. 142 TOIs were classified as false positives, mainly due to a clear offset in the measured photocenter and/or a significant secondary eclipse; to remain consistent with our workflow, we kept this label for 2 such targets that are in fact confirmed planets. Finally, 105 TOIs were flagged as potential false positives due to a potential centroid offset or secondary eclipse dominated by light curve modulations and/or systematics. Together with Paper I, this work creates a catalog of uniformly-vetted TOIs that can be further used to prioritize targets amenable to follow-up observations. Additionally, the two catalogs can be utilized as a training set for machine learning efforts aimed at full automation of the vetting process. This catalog is provided to the scientific community in the same format as Table 5; the full table is available as supplementary material along with this manuscript. The files generated by DAVE are publicly available on the Exogram platform and will also be made available on ExoFOP-TESS as part of the metadata associated with each TOI.

Figure 11. Period-radius occurrence rates of the difference between planet candidates and false positives (including the potential false positives) for the whole sample of 1998 targets investigated in this work and in Paper I. The numerical values of the occurrence rates are expressed as percentages. We note that the bin sizes are not uniform. Blank cells are those that contain neither PCs nor (p)FPs.
Men, Women, and Ghosts in Science

Science suffers because, by favouring the self-confident of both sexes, we discriminate against women.

A Taboo

It is not easy to write or talk about this subject. If you say, for example, that women are on average more understanding of others, this can be interpreted as misogyny in disguise. If you state that boys on average are much more likely than girls to become computer nerds, people may react as if you plan to ban all women from the trading rooms of merchant banks. The Cambridge University psychologist Simon Baron-Cohen published research on the "male brain" in a specialist journal in 1997, but did not dare to talk about his ideas in public for several years [2]. One reason for this absurd taboo is that we cannot think objectively because our minds are full of wayward beliefs and delusions, "ghosts" (Box 1). And one of these ghosts is the dogma that all groups of people, such as men and women, are on average the same, and any genetic distinctions must not be countenanced. Such ghosts bias our perceptions and censor our thoughts.

Boys and Girls Are Born Different and Remain So

The chance that a woman will mug you tonight on the way home is somewhere around nil. That is a quirk specific to my gender. -Michael Moore [4]

Baron-Cohen makes one point crystal clear: you cannot deduce the psychological characteristics of any person by knowing their sex. Arguing from the scientific literature that men and women typically have different types of brains, he nevertheless points out that "some women have the male brain, and some men have the female brain" [2]. Stereotyping is unscientific: "individuals are just that: individuals" [2]. Yet Baron-Cohen presents evidence that males on average are biologically predisposed to systemise, to analyse, and to be more forgetful of others, while females on average are innately designed to empathise, to communicate, and to care for others. Males tend to think narrowly and obsess, while females think broadly, taking into account balancing arguments. Classifying individuals in general terms, he concludes that among men, about 60% have a male brain, 20% have a balanced brain, and 20% have a female brain. Women show the inverse figures, with some 60% having a female brain. Many facts (see [2] for references) argue that these differences have their roots in biology and genetics. Here are some examples. First, it is hardly necessary to point out that distinguishing between the contributions of nature and nurture to animal or human behaviour has proved difficult. However, newborn infants (less than 24 hours old) have been shown a real human face and a mobile of the same size and similar colour. On average, boys looked longer at the mobile and girls looked longer at the face [5]. Second, such differences at birth must have developed earlier. One factor is the level of testosterone in the developing brain around three months of gestation, which is higher in males (due to the hormone being produced by the foetus itself). Many studies show that testosterone affects development and behaviour, not only in humans, but also in other mammals. Testosterone sponsors development of the male phenotype, and can influence behaviour even of animals of the same sex. For example, giving older men testosterone specifically improves their ability with those spatial tests on which males normally score higher than females [6].

Box 1. Ghosts
"Mrs. Alving: I almost think we are all ghosts-all of us, Pastor Manders. It isn't just what we have inherited from the father and mother that walks in us. It is all kinds of dead ideas and all sorts of old and obsolete beliefs. They are not alive in us; but they remain with us none the less, and we can never rid ourselves of them. I only have to take a newspaper and read it, and I see ghosts between the lines. There must be ghosts all over the country. They lie as thick as grains of sand. And we're all so horribly afraid of the light" [3].

Third, autism spectrum conditions are genetically based, and have been described in detail [2,7]. People with these problems communicate poorly; they are unable to put themselves in another's place, and have difficulties with empathising. They may treat others as objects. They often become obsessed and show repetitive behaviour. The less severely affected can become experts on recondite subjects, such as train timetables or ocean temperatures. Most relevant for our arguments is that autism spectrum conditions are largely sex-limited, being between four and nine times more frequent in males. From many studies, including psychology and neuroanatomy, Baron-Cohen argues convincingly that autism spectrum conditions are an extreme form of maleness [2,8]. It will not have escaped the notice of many scientists that some of their colleagues, and maybe they themselves, have more than a hint of these "autistic" features. There is good evidence that this type of single-mindedness is particularly common in males [2]. Indeed, we might acknowledge that a limited amount of autistic behaviour can be useful to researchers and to society; for example, a lifetime's concentration on a family of beetles with more than 100,000 species may seem weird, but we need several such people in the world for each family. And most of these specialists will be men. (The Web pages of the Smithsonian Institution in Washington suggest that their systematists consist of about 30 women and 125 men.) It follows that if we search objectively for obsessive knowledge, for a mastery of abstruse facts, or for mechanical understanding, we will select many more men than women. And if males on average are constitutionally better suited to be this kind of scientist, it seems silly to aim at strict gender parity. However, in professions that rely on an ability to put oneself in another's place, at which women on average are far superior, we should expect and want a majority of women. For example, among current student members of the British Psychological Society, there are 5,806 women to 945 men; and among graduate psychologists, 23,324 women to 8,592 men. Of those who practice as chartered psychologists, the ratio has fallen further (7,369 women to 4,402 men). Yet among Fellows of the Society, honoured largely for their research, there are 428 men to only 106 women!

Representation of Men and Women in Science

Among biomedical students in Europe and in the United States, there are similar numbers of males and females, suggesting perhaps that this subject is equally well suited to both sexes. But with higher and higher rank, the proportion of women falls inexorably: full professors are only about 10% female [9]. Women drop out steadily, and many of them have demonstrated high ability. There is plenty of evidence for similar trends in different branches of science [9].
For example, at the Laboratory of Molecular Biology in Cambridge, UK, where I work, the gender ratio of graduate students is currently 43 male to 35 female, yet the ratio of group leaders is 56 male to 6 female. Are there social or practical reasons why we would like to maintain a more equal balance, especially where scientists have power over others? The short answer is yes, and here are three reasons. First, these top research jobs call for a mix of skills, which a mix of men and women can deliver best. Nowadays, holders of these jobs plan science projects, write grants and articles, and try to network their papers into the top journals. Their students and postdocs, mostly young and inexperienced, usually do all the bench work. These students need more than instructions; they also need mentors who are able to listen to them and teach them understandingly. Indeed, some individuals deserve freedom to work out their own ideas: for example, Einstein did not have his doctorate when he wrote four of six of his great papers. Not many students get such opportunities now, whatever their potential. Understanding individuals and working out how to make the best of their diverse abilities are, as we have seen, predominantly feminine qualities. Second, if we had a balanced mix of men and women in charge of our institutes, I believe we would have more contented and productive workplaces. We should not forget that the motivation to work hard and solve problems can come from supportive colleagues, as well as from competitiveness. Third, it is self-evident that scientific leaders should include a diversity of people from whom younger individuals can pick role models as they choose their careers. The present lack of top female scientists will divert young women from scientific ambition; it makes no sense to discourage a future Frances Crick. Many have turned their attention to explaining the fall-out of women from science; it is traditionally ascribed to a mixture of discrimination and choice [9]. Regarding overt discrimination, in a lifetime in science I have seen only a little, and it has been both for and against women. Surely, gender discrimination cannot explain more than a tiny part of this trend. However, choice is certainly a main factor. Some choices are unavoidable; if there are to be children, women must bear them. However, after about six months or so, there is no reason, in principle, why the main carer of the children should not be the father. Later on, it could just as well be the father who takes time off work to look after a sick child. Yet partly because of the different priorities that men and women on average have, a much higher proportion of women put the needs of their children first and climbing the career ladder second. But there is a different kind of discrimination that particularly damages creative pursuits such as science. There is good psychological evidence that aggression and lack of empathy are on average male characteristics, and we may agree with Baron-Cohen that for both sexes, "nastiness…. gets you higher socially, and gets you more control or power" [2,10,11]. Science should not be a military or a business operation, but nowadays it increasingly resembles one; for most, it is a vicious struggle to survive. In this struggle, men climb higher because they are on average more
ruthless, and many women, as well as a gentle minority of men, shy away from competing with them [12]. And I think that our selection methods exacerbate this tendency.

Job Searches in Academia

About 100 years ago, Ibsen shed light on the secrets of contemporary life, and in doing so championed women's rights. But since then, the feminist campaign for equality has helped build the belief that men and women, on average, have exactly the same aptitudes. It is time we exorcised this particular ghost, and if we do, it will help put more of the less aggressive members of society, most of whom are women, into positions of power. For example, in job searches and in considering people for promotions, we have been asking women to take tests, largely devised by men, that tend to overvalue masculine characteristics. If men and women on average were identical, no one would see fault in this, but if it is agreed that they are not, these tests become discriminatory, for they favour those many men and those few women with masculine behaviour. At present, in the competition for academic posts, we expect our candidates to go through a gruelling process of interview that demands self-confidence. We are impressed by bombast and self-advertising, especially if we don't know the field, and we may not notice annexation of credit from others; all of these, on average, are the preferred province of men. But we should also seek out able scientists who would care well for their groups, those who would mentor a distressed student and help her or him back into productive research. And if we did, we would choose more feminine women as well as more feminine men. And most important of all, could we try to select for the one characteristic we need most, scientific originality? Originality and creativity are all too rare, and I know of no evidence that these traits are more frequent in one sex [13]. As we busily compare candidates, adding up their papers and calculating impact factors, do we remember to look for these qualities? Instead of reading the papers, we count them. Counting rewards those who have had many papers accepted, and those who have worked their names into the author list. But is the editorial process of selecting papers an objective one? Certainly not; in the jungle where we fight to publish, salesmanship and pushiness pay off [14], and these tend to be masculine characteristics. Thus, if we were to read the papers of candidates and search for originality and insight, I believe we would select more women, as well as more men with feminine qualities. So I am not advocating overt positive discrimination; instead, I suggest we consciously try to see through showmanship and select the qualities we actually need. I have argued that reducing the premium we give to aggression would, in several different ways, lead to more women in science and also to better science. Even so, in this Utopia, I think that far fewer than 50% of top physicists would be women (and far fewer than 50% of top professors of literature would be men). But I don't think that would matter; we would be making better use of the diverse qualities of people. Both women and men might accept that although there is much overlap in the two populations, we are constitutionally different, a diversity we should be able to celebrate and discuss openly. Both women and men should be leading such discussions with pride. Science would be better served if we gave more opportunity and power to the gentle, the reflective, and the creative individuals of both sexes.
Opportunities and Challenges in HIV Treatment as Prevention Research: Results from the ANRS 12249 Cluster-Randomized Trial and Associated Population Cohort

Purpose of Review: The ANRS 12249 treatment as prevention (TasP) trial investigated the impact of a universal test and treat (UTT) approach on reducing HIV incidence in one of the regions of the world most severely affected by the HIV epidemic: KwaZulu-Natal, South Africa. We summarize key findings from this trial as well as recent findings from controlled studies conducted in the linked population cohort quantifying the long-term effects of expanding ART on directly measured HIV incidence (2004-2017).

Recent Findings: The ANRS TasP trial did not, and could not, demonstrate a reduction in HIV incidence, because the offer of UTT in the intervention communities did not increase ART coverage and population viral suppression compared to the standard of care in the control communities. Ten controlled studies from the linked population cohort, including several quasi-experimental study designs, exploit heterogeneity in ART exposure to show a consistent and substantial impact of expanding provision of ART and population viral suppression on reduction in HIV incidence at the couple, household, community, and population levels.

Summary: In this setting, all of the evidence from large, population-based studies (inclusive of the ANRS TasP trial) is remarkably coherent and consistent; that is, higher ART coverage and population viral suppression were repeatedly associated with clear, measurable decreases in HIV incidence. Thus, the expanded provision of ART has plausibly contributed in a major way toward the dramatic 43% decline in population-level HIV incidence in this typical rural African population. The outcome of the ANRS TasP trial constitutes a powerful null finding with important insights for overcoming implementation challenges in the population delivery of ART. This finding does not imply lack of ART effectiveness in blocking onward transmission of HIV nor its inability to reduce HIV incidence. Rather, it demonstrates that large increases in ART coverage over current levels will require health systems innovations to attract people living with HIV in early stages of the disease to participate in HIV treatment. Such innovations and new approaches are required for the true potential of UTT to be realized.

Introduction

In 2018, approximately 38 million people worldwide were living with HIV [1]. About 80% of people living with HIV knew their status, and nearly 80% of these people (23.3 million) accessed antiretroviral therapy (ART), a threefold increase from 2010. Despite the successful scale-up of and access to HIV testing and treatment, HIV incidence remains high in many settings, with an estimated 1.7 million newly infected people in 2018. In particular, Eastern and Southern Africa remain the regions most severely affected by the HIV epidemic [1]. The results from the landmark HPTN 052 trial in 2011 unequivocally showed that immediate initiation of ART was associated with a 96% reduction in HIV sexual transmission in sero-discordant stable couples [2]. To provide empirical evidence of the feasibility and effectiveness of a universal test and treat (UTT) strategy on reducing HIV incidence at the population level, four major community-based trials were initiated in Eastern and Southern Africa [3••, 4-6].
The first of these trials, the ANRS 12249 treatment as prevention (TasP) trial, was conducted in rural South Africa between 2012 and 2016 and offered home-based HIV testing and universal ART regardless of CD4 count in the intervention communities [3••, 7]. Three additional trials, BCPP (Botswana Combination Prevention Project, "Ya Tsie"), PopART (Population Effects of Antiretroviral Therapy to Reduce HIV Transmission, HPTN 071), and SEARCH (Sustainable East Africa Research in Community Health), were initiated in 2013 and completed between 2017 and 2018. The BCPP trial was a pair-matched community-randomized trial conducted in 30 communities and compared the standard of care in the control clusters with a combination prevention package in the intervention clusters (community mobilization, community-wide home-based and mobile HIV testing, targeted outreach testing of men and of women ≤ 25 years of age, active tracing and linkage-to-care support, increased access to male circumcision services, and expanded ART) [4]. PopART was conducted in 21 communities in Zambia and South Africa with three arms: Arm A, universal ART coupled with a combination prevention intervention (door-to-door rapid HIV testing services, referral for voluntary medical male circumcision (VMMC) among uncircumcised HIV-negative men and for antenatal care among HIV-positive pregnant women, screening and referral for tuberculosis (TB) and sexually transmitted infections (STIs), and condom promotion and distribution); Arm B, ART provided according to local guidelines with the combination prevention intervention; and Arm C, the standard of care [5]. The SEARCH trial was a pair-matched cluster-randomized trial in 32 communities in rural Uganda and Kenya and included 2-week mobile, multi-disease community health campaigns, including rapid HIV testing, referral to HIV care, and home-based testing, in all communities (i.e., both control and intervention arms) at baseline. Thereafter, the control communities received the standard of care (i.e., national guideline-restricted ART) while the intervention communities received annual repeat campaigns including HIV testing coupled with universal ART [6]. The outcomes and results of these trials have been well documented and described [8-13]. Briefly, two of the trials were able to demonstrate some evidence of moderate reduction in HIV incidence in intervention communities relative to the standard of care [4,5]. In the BCPP, HIV incidence was approximately 30% lower in the intervention communities (0.59 per 100 person-years vs. 0.92 per 100 person-years in the control communities) [4]. In PopART, HIV incidence in the arm which received the combination prevention package with ART administered according to national treatment guidelines (1.06 per 100 person-years) was 30% lower than that in the control arm (1.55 per 100 person-years), but there was no difference in the arm which received the combination prevention package in addition to universal ART (1.45 per 100 person-years) [5]. However, collectively, the four trials were unable to demonstrate consistent and substantial population reductions in HIV incidence. Aside from issues such as sexual mixing of populations, which are clearly important [14], the more fundamental reason for the lack of consistency in these findings is that many of the trials were unable to induce a substantially higher ART coverage in intervention communities over the duration of the trial.
Without a strong gradient in ART coverage across the trial arms, the causal effect of ART on population incidence cannot be estimated. Achieving such a coverage differential was made particularly difficult by the rapidly evolving treatment guidelines over the course of the trials (which resulted in control communities adopting the treat-all approach in three of the four trials over the course of participant follow-up) and, in many cases, by exemplary care packages being delivered in the "control" communities. Some of the trials (most notably SEARCH [6]) were highly successful in initiating large numbers of patients onto ART in both the intervention and control communities through an innovative community-based testing approach [6]. The ANRS TasP trial was conducted in the KwaZulu-Natal province of South Africa, a region considered by many to be at the epicentre of the HIV pandemic. The setting provides a remarkable opportunity to study the long-term impacts of ART scale-up on HIV incidence from within the same population, because it also includes a linked population-based cohort which has been running for over 14 years. The population cohort is based on a very similar modus operandi to the ANRS TasP trial and uses the gold-standard approach of actively enrolling and following up a complete population, observing individual HIV seroconversions in those who were initially observed to be HIV-uninfected. Here, we summarize key design features and results from the ANRS TasP trial as well as recent findings from ten controlled studies from the population-based cohort that directly quantified the long-term effects of expanding ART on directly measured HIV incidence. Several of the studies used strong quasi-experimental designs (such as regression discontinuity and instrumental variable designs), which, like randomized controlled trials, can control for both observed and unobserved confounding.

Overview of the ANRS 12249 Cluster-Randomized Trial

The design of the ANRS TasP trial has been described in detail elsewhere [15,16]. Briefly, the ANRS TasP trial evaluated the hypothesis that home-based HIV testing coupled with an immediate offer of ART would result in a decrease in population-level HIV incidence in a hyperendemic rural population. This hypothesis was tested in a two-arm cluster-randomized trial implemented between March 2012 and June 2016. Eleven control communities were offered ART according to the standard of care (initially at CD4 counts ≤ 350 cells/µL and then < 500 cells/µL from January 2015) and 11 intervention communities were offered ART regardless of CD4 count. The study had 80% power to detect an overall 34% reduction in cumulative HIV incidence, assuming an incidence of 2.25% per year in the control clusters over the trial period. The calculation explicitly considered the different lengths of follow-up time in the clusters, loss to follow-up, and the likelihood of re-testing of participants, as well as the potential diluting effects of inter-cluster sexual mixing [15]. The ANRS TasP trial was the first of the four treatment as prevention trials and incorporated novel features aimed at enhancing the efficiency and delivery of the intervention in at least four areas, which we highlight here. Firstly, other than expanded ART eligibility in the intervention arm, the interventions and mode of delivery were identical in both arms of the trial.
In the later trials (BCPP, SEARCH, and PopART [4-6]), the makeup of the interventions differed from the control arms in ways other than just expanded ART eligibility, such that the trials evaluated the impact of a combination of interventions versus the standard of care rather than only the additional impact of UTT on population-level HIV incidence. In other words, these subsequent trials included additional or enhanced services in the intervention arm besides universal ART. In the BCPP, these included enhanced community mobilization and expanded health prevention/screening, including male circumcision, distribution of condoms, and home-based HIV testing as well as HIV testing in mobile units during the community campaign [4]. In the PopART study, specific mobile activities in the community, health screening for TB and STIs, and home-based HIV testing were offered in the intervention arms [5], while the SEARCH trial provided repeat annual community health campaigns or mobilization (including HIV testing at mobile sites, home-based HIV testing, and referral to HIV care) [6] for 3 years after the services were offered once in all intervention and control communities at baseline. Secondly, the ANRS TasP trial (along with the SEARCH trial) evaluated the primary endpoint of HIV incidence among the whole trial population as opposed to a nested sub-sample of individuals within each cluster. Thirdly, the ANRS TasP trial used explicit linkage to records from the pre-existing public sector ART clinics and the trial clinics to quantify trends in ART coverage. This enabled robust calculation and comparison of ART coverage at baseline and over the course of the trial in a way unaffected by the biases commonly associated with treatment self-report. Finally, ART was provided to participants in trial-specific clinics located at convenient locations in each of the 22 clusters. Many of the existing public sector clinics required long travelling and waiting times to receive treatment and care. Thus, trial clinics provided relatively easy access to treatment, as each trial participant lived within a 45-minute walk of the clinic in their respective cluster.

Results of the ANRS 12249 Cluster-Randomized Trial

During the trial period, 26,518 of 28,419 (93%) eligible individuals were contacted. Overall, there were 503 seroconversions documented after 22,891 person-years of follow-up. The trial team conducted testing and follow-up for an average of 2.3 years in each cluster. Over the course of the trial, the incidence in the control clusters was almost identical to the incidence that had been assumed in the sample size calculations, but incidence did not differ significantly across the two arms: 2.11 per 100 person-years (95% CI 1.84-2.39) in the intervention arm versus 2.27 per 100 person-years (95% CI 2.00-2.54) in the control arm (adjusted hazard ratio 1.01, 95% CI 0.87-1.17). During the trial, more than 90% of HIV-positive individuals became aware of their diagnosis. However, at the end of the trial, there were no significant differences in either ART coverage or population viral suppression between the intervention and control communities. At the end of the trial, ART coverage was 52.8% in the control communities versus 53.4% in the intervention communities [3••]. Similarly, population levels of viral suppression were 46.2% versus 44.2% in the intervention and control communities, up from baseline values of 23.5% and 26.0%, respectively [17].
Key Insights from the ANRS 12249 Cluster-Randomized Trial

The outcome of this well-conducted trial constitutes a powerful null finding with important lessons for overcoming challenges in the population delivery of ART. We highlight three key insights below. Firstly, concern among participants about inadvertent disclosure of HIV status by attending one of the trial clinics likely contributed to the relatively poor linkage to care observed in the trial. Poor linkage to care was associated with being newly diagnosed with HIV, being a student, living farther away from the clinics, or having higher educational attainment [18,19]. The results brought into sharp focus the continued stigma around HIV and highlighted the critical need to normalize its treatment. A typical quote from a participant in this trial illustrates this point:

There are those who are still not keen [to attend the TasP clinic]. They have a problem that they will be seen at the park home [TasP clinic] and they say that the park home is full of people who have HIV. You see it is something like that. You see there are people who go to the clinic not because they are going to check their own illnesses but they keep looking at the people who are going to the research clinic and they say we are even carrying babies who have HIV. Now when a lot of people think about that they think if you go to that clinic you are visible, they wish they can hide from others. (Female, 51 years)

In this vein, the SEARCH trial model (described in detail elsewhere [20]) of taking a community-based, multi-disease approach to the management and treatment of HIV would seem to hold considerable promise. Secondly, the contact rate was significantly lower in men and younger individuals [3••, 21]; however, among those who received the community intervention, linkage to care was similar in men and women [18]. It is therefore vital that novel methods are found to engage men and younger populations to facilitate increased and more rapid linkage to treatment and care. In response to these findings, a 2 × 2 factorial cluster-randomized community-based trial, Home-Based Intervention to Test and Start (HITS) [22], was initiated in the AHRI population cohort [22,23]. The HITS trial aims to establish the impact of small once-off financial incentives and a male-targeted HIV-specific decision support application on improving the uptake of HIV testing and linkage to care among men, with the ultimate aim of reducing population-level HIV incidence in (particularly young) women. Thus far, the HITS trial has demonstrated that a once-off financial micro-incentive of just $3 increased the uptake of HIV testing by more than 50% among men [24]. Thirdly, the ANRS TasP trial identified individuals earlier in the course of their HIV infection, the majority of whom were asymptomatic. Competing priorities between livelihood sustenance (as seen in the high prevalence of food insecurity in the trial population [25]) and the time required to seek care meant that HIV-positive individuals likely delayed starting ART. Studies highlighting the individual benefits of early ART [26,27] and differentiated models of care, including same-day [28] and community provision of ART [29], could alleviate the burden of seeking ART, thus allowing patients to initiate treatment earlier, potentially while still in the acute phase of infection [30-32].
Overview of the AHRI Population-Based Cohort

Since 2004, AHRI has conducted annual population-based HIV testing among all consenting adults aged 15 years or older in a community immediately adjacent to the ANRS TasP trial area [23]. The AHRI cohort constitutes one of the world's largest population-based longitudinal HIV cohorts and has measured the population trends in directly measured HIV incidence and quantified important socio-demographic, behavioural, and contextual determinants of newly acquired HIV infections [23]. Both the ANRS TasP trial and the AHRI population-based cohort share a very similar modus operandi based on the gold-standard approach of actively enrolling and following up a complete population and observing individual HIV seroconversions in those participants who were initially observed to be HIV-uninfected. The main difference between the two cohorts is that the AHRI population cohort conducts annual home-based testing, whereas the ANRS TasP trial conducted testing at 6-month intervals. The other major difference is that the period of follow-up is longer in the AHRI population cohort (> 14 years versus an average of 2.3 years in the ANRS TasP trial), encompassing the full period of ART scale-up. A major strength of population-based cohorts that have enrolled and prospectively followed complete populations over decades is that representative knowledge (with respect both to disease outcomes and to a dynamic suite of socio-demographic, societal, and community-level risk factors) is gained on all participants over time, irrespective of whether individuals attend care. Such designs provide a strong basis for causal inference as well as a good standpoint from which to quantify the population-level impacts of interventions. The findings are therefore not subject to many of the biases commonly inherent in clinical studies based on patients who choose (and are able) to attend clinic or on a pre-selected sample of individuals who might differ from the population in ways that are difficult to evaluate. Moreover, because changes in measures like household wealth and sexual behaviour are systematically measured over time for each individual, these measures can be used to explicitly rule out alternative explanations of the relationships observed, and any findings are not subject to the pitfalls of ecological fallacy. The rich data measured in these population-based studies have high external validity and also provide opportunities for quasi-experimental study designs, such as regression discontinuity and instrumental variable designs, which can control for unobserved confounding [33•, 34]. In the same way that ongoing population-based cohorts like the Framingham Heart Study have generated important insights into the underlying risk factors for cardiovascular disease [35], ongoing population cohorts such as the AHRI cohort in South Africa [23] and the Rakai cohort in Uganda [36,37] have generated profound epidemiological insights into the risk factors, trajectory of epidemics, mechanisms and underlying causal risk factors, and pathways to HIV acquisition. By the end of 2017, the AHRI population-based cohort contained ~105,000 person-years of observation and ~3,500 directly observed HIV seroconversions [38]. These large sample sizes, taken from a complete population followed longitudinally for well over a decade, permit powerful statistical inference.
This in turn can facilitate a deep and nuanced understanding of underlying causal risk factors and processes and a quantification of dynamic incidence patterns among different population sub-strata, allowing for identification of particularly vulnerable sub-groups [39-42]. Table 1 summarizes the ANRS TasP trial results [3••] as well as ten recent, controlled studies from the ongoing population-based cohort [38, 43•, 44••, 45-50]. These studies have meticulously quantified the real-world, long-term impacts of expanding ART provision on reduction in risk of new HIV infection across communities [44••, 45], within households [46,47], within couples [49], and across the general population [38, 43•, 48]. One recent study also quantified the effect of expansion of ART provision on newly diagnosed TB infection [50]. In the population-based cohort, the duration of follow-up encompasses the period both immediately before and after the scale-up of ART, allowing for strong experimental separation among different population sub-groups in respect of ART exposure (i.e., large differences in ART coverage). These studies have exploited this heterogeneity in ART exposure and viral suppression across individuals and communities, within couples and households, and over time and space, for robust causal inference. Using this variation, the studies demonstrate consistently strong evidence for the preventative benefits of ART, using diverse methods and statistical models and explicitly controlling for well-known predictors of HIV incidence (Fig. 1, Table 1).

Figure 1. Controlled studies quantifying the long-term impacts of expanding ART provision on the risk of new HIV infection across communities, within households, within couples, and in the general population [45-49]; one study also quantified the effect of expanding ART provision on newly diagnosed TB infection [50]. Further details of these studies are provided in Table 1. The studies utilize one of the world's largest ongoing population-based cohorts, which has measured the socio-demographic, behavioural, and contextual determinants of HIV incidence as well as the population trends over more than 14 years. The duration of follow-up of the population cohort encompasses the period both immediately before and after the scale-up of ART, allowing for strong experimental separation in ART exposure (i.e., large differences in ART coverage) across time and space, within (and across) couples and households, as well as between different population sub-groups. The studies include quasi-experimental designs, such as regression discontinuity and instrumental variable designs.

For example, the first study from the population cohort quantifying the treatment as prevention effect found that a 1% increase in ART coverage in the surrounding community is independently associated with an average 1.4% decline in an individual's risk of acquiring HIV infection (adjusted hazard ratio (aHR) = 0.986) [44••]. The results of that study and the implications for treatment as prevention at the time are discussed in two commentaries [52,53]. Other findings from this population cohort demonstrate, for example, that within sero-discordant couples, use of ART is associated with a 77% decrease in HIV incidence [49]. Within households, an HIV-uninfected individual in a household characterized by high opposite-sex ART coverage is 26% less likely to acquire HIV than someone living in a household with low opposite-sex ART coverage [46], and compared with delayed ART initiation, immediate initiation of ART reduced HIV incidence in households by 47% [47].
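To make the size of the community-level effect concrete, a per-percentage-point hazard ratio compounds multiplicatively across larger coverage increases. The short illustration below is ours, not taken from the cited papers; it simply shows the arithmetic implied by aHR = 0.986 per 1% increase in surrounding-community ART coverage.

```python
# Illustration: compounding a per-percentage-point adjusted hazard ratio.
# aHR = 0.986 per 1% increase in community ART coverage (~1.4% risk decline).
ahr_per_point = 0.986

for delta in (1, 10, 20, 30):    # increase in ART coverage, percentage points
    hr = ahr_per_point ** delta  # multiplicative compounding of the hazard
    print(f"+{delta:>2} points ART coverage -> hazard ratio {hr:.3f} "
          f"({(1 - hr) * 100:.0f}% lower acquisition risk)")
# e.g., +10 points -> HR ~0.87, roughly a 13% reduction in acquisition risk.
```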
(Abbreviations used in Table 1: ART, antiretroviral therapy; aHR, adjusted hazard ratio; aOR, adjusted odds ratio; IV, instrumental variable; IMR, incidence to mortality ratio; IPR, incidence to prevalence ratio; IRR, incidence rate ratio; VMMC, voluntary medical male circumcision.)

Key Findings from the AHRI Population-Based Cohort

These studies have quantified the real-world, long-term impacts of expanding ART provision on reduction in the risk of new HIV infection across different communities, within households, within sero-discordant couples, and in the general population. At the community level, every 1% increase in the proportion of an entire community having a detectable virus is independently associated with a 6.3% prospective increase in the risk of HIV acquisition for HIV-negative individuals living in that community [48]. At the population level, overall HIV incidence between 2012 and 2017 declined dramatically, by 43% (Fig. 2) [43•]. Consistent with treatment as prevention playing a major role in this population-level reduction, HIV incidence declined among both circumcised and uncircumcised men. Moreover, men experienced earlier and larger HIV incidence declines than women, consistent with higher ART coverage in women. Specifically, male incidence declined by 59%, from 2.5 to 1.0 sero-conversion events per 100 person-years, which coincided with female ART coverage surpassing 35% in 2012 and VMMC scale-up in 2009. There was a 37% reduction in female HIV incidence between 2014 and 2017, from 4.9 to 3.1 sero-conversion events per 100 person-years, which occurred after male ART coverage reached 35% [43•]. While overall progress is off track to meet the 2020 reduction targets set by UNAIDS [51], a recent paper documented impressive progress toward HIV epidemic control in this population [38]. Among men, the incidence to mortality ratio peaked at 4.1 in 2013 before dropping to 3.1 in 2017 (a 24% reduction), while the female incidence to mortality ratio climbed to as high as 6.4 in 2013 before dropping to 4.3 in 2017 (a 33% reduction). Between 2012 and 2017, the male-incidence to female-prevalence ratio declined from 0.05 to 0.02. The female-incidence to male-prevalence ratio was markedly higher and fell from 0.24 to 0.13 during the same period [38]. This result, when coupled with the higher HIV incidence, incidence to mortality ratio, and HIV prevalence among women, confirms the disproportionate burden of HIV experienced by women relative to men in sub-Saharan Africa. Treatment for HIV is also associated with secondary preventative benefits against TB infection, with a 34% reduction in the risk of newly diagnosed TB infection for an individual living in a community with ≥ 50% ART coverage (adjusted odds ratio (aOR) = 0.66, 95% CI 0.49-0.88) [50]. The results of these studies (Table 1, Fig. 1) are epidemiologically plausible, and clear, measurable reductions in HIV incidence and incidence-derived metrics were consistently found across all studies. The findings were robust to different model specifications, different age-eligibility criteria, differing methods of constructing "communities," and the inclusion of differing control variables (including being robust to changes in sexual behaviour, for example). Further, methods to impute the date of HIV seroconversion were systematically investigated, and the results were found to be robust to participant self-selection associated with missed test dates and drop-out [54,55].
It is thus unlikely that any collection of systematic biases could consistently and simultaneously explain the findings across the different studies conducted within households, couples, communities, population sub-groups, and genders, and using differing outcome metrics (and in one case, the outcome of a different disease, i.e., newly diagnosed TB infection). Nevertheless, the possibility of the existence of such a pervasive unidirectional residual confounding effect, however unlikely, should be acknowledged. To rule out the possibility of residual confounding, two quasi-experimental studies [45,47] were implemented using instrumental variable (IV) and regression discontinuity (RD) designs (Table 1). Such designs control for unobserved confounding by quasi-randomly assigning individuals to intervention vs. control groups, leveraging randomness induced by policy, practice, or natural events [56-61]. The quasi-experimental studies [45,47] confirmed a large real-world treatment as prevention effect that could not be explained by the influence of any observed or unobserved factors. The Wirth et al. [45] analysis not only confirmed the previous findings (and therefore demonstrated that the result was robust to the effect of unmeasured confounding) but also suggested that the effect of community-level ART coverage on HIV incidence may be even greater than previously estimated in the paper published in Science [44••].

Conclusion

All of the evidence from large, population-based studies (inclusive of the ANRS TasP trial) in this setting is remarkably consistent: higher ART coverage and population viral suppression were repeatedly associated with large, measurable decreases in HIV incidence. Despite increases in population levels of viral suppression in both arms, the offer of UTT in the ANRS TasP trial did not induce differences in viral suppression between intervention and control communities, and thus the trial could not demonstrate a relative reduction in HIV incidence in the intervention communities. As one of the world's largest ongoing HIV incidence cohorts, spanning the period both immediately before and after the scale-up of antiretroviral therapy, the AHRI population cohort allowed for strong experimental separation in ART exposure (i.e., large differences in ART coverage) and viral suppression between different population sub-groups. The studies summarized in this commentary exploited this heterogeneity in ART exposure across individuals and communities, within couples and households, and over time and space, for a robust quantification of the treatment as prevention effect in a real-life setting. In summary, the recent evidence from controlled, population-based studies in this typical rural African population demonstrates that expanded provision of ART has substantially and consistently reduced the risk of onward transmission at multiple levels, which has plausibly contributed in a major way toward the dramatic 43% decline in population-level HIV incidence. Going forward, however, incremental gains in incidence reduction are likely to be harder to achieve. The outcome of the ANRS TasP trial constitutes a powerful null finding with important lessons for overcoming implementation challenges in the population delivery of ART. This finding does not imply lack of ART effectiveness in preventing the onward transmission of HIV nor an inability to reduce population-level HIV incidence.
Conclusion

All of the evidence from large, population-based studies (inclusive of the ANRS TasP trial) in this setting is remarkably consistent: higher ART coverage and population viral suppression were repeatedly associated with large, measurable decreases in HIV incidence. Despite increases in population levels of viral suppression in both arms, the offer of UTT in the ANRS TasP trial did not induce differences in viral suppression between intervention and control communities, and thus the trial could not demonstrate a relative reduction in HIV incidence in the intervention communities. As one of the world's largest ongoing HIV incidence cohorts, spanning the period both immediately before and after the scale-up of antiretroviral therapy, the AHRI population cohort allowed for strong experimental separation in ART exposure (i.e., large differences in ART coverage) and viral suppression between different population sub-groups. The studies summarized in this commentary exploited this heterogeneity in ART exposure across individuals and communities, within couples and households, and over time and space, for a robust quantification of the treatment as prevention effect in a real-life setting. In summary, the recent evidence from controlled, population-based studies in this typical rural African population demonstrates that expanded provision of ART has substantially and consistently reduced the risk of onward transmission at multiple levels, which has plausibly contributed in a major way toward the dramatic 43% decline in population-level HIV incidence. Going forward, however, the incremental gains in incidence reduction are likely going to be harder to achieve. The outcome of the ANRS TasP trial constitutes a powerful null finding with important lessons for overcoming implementation challenges in the population delivery of ART. This finding does not imply a lack of ART effectiveness in preventing the onward transmission of HIV, nor an inability to reduce population-level HIV incidence. Rather, it demonstrates that large increases in ART coverage over current levels will require health systems innovations to attract people living with HIV in the early stages of the disease to participate in HIV treatment. Such innovations and new approaches are required for the true potential of UTT to be realized. Attaining epidemic control will require overcoming existing implementation barriers to the continued expansion of ART, accompanied by the provision of other primary prevention measures.
Radial TiO2 Nanorod-Based Mesocrystals: Synthesis, Characterization, and Applications

Radial TiO2 nanorod-based mesocrystals (TiO2-NR MCs), or so-called "sea-urchin-like microspheres," possess not only an attractive appearance but also excellent potential as photocatalyst and electrode materials. As a new type of TiO2-NR MC, we have recently developed a radial heteromesocrystal photocatalyst consisting of SnO2 (head) and rutile TiO2 nanorods (tail) (TiO2-NR//SnO2 HEMCs, where the symbol "//" denotes a heteroepitaxial junction), with the SnO2 head oriented in the central direction, in a series of studies on nanohybrid photocatalysts with atomically commensurate junctions. This review article reports the fundamentals of TiO2-NR MCs and their applications to photocatalysts and electrodes. Firstly, the synthesis and characterization of TiO2-NR//SnO2 HEMCs is described. Secondly, the photocatalytic activity of recent TiO2-NR MCs and the photocatalytic action mechanism are discussed. Thirdly, the applications of TiO2-NR MCs and their analogs to the electrodes of solar cells and lithium-ion batteries are considered. Finally, we summarize the conclusions with possible future subjects.

Introduction

Among various photocatalyst materials, TiO2 is the most promising one in terms of suitability and safety in environmental purification and anti-bacterial applications due to its strong photoinduced oxidation ability, extreme stability, non-toxicity, and inexpensiveness [1,2]. While the photocatalytic activity of TiO2 particles is sensitive to the crystal phase, crystallinity, and dimension [3], the assembled structure of TiO2 particles, or the mesocrystal (MC) structure, can have a great effect on the photocatalytic activity [4,5]. The geometries of the TiO2-based MCs reported so far can be classified into two-dimensional (2D) and three-dimensional (3D) types (Scheme 1). Majima and co-workers prepared 2D-TiO2 MCs consisting of TiO2 plates with 3-5 µm size and 100-300 nm thickness linking through the edges (TiO2-NPL MCs) using a simple impregnation method (2D-type in Scheme 1) [5]. Gold nanoparticle-loaded TiO2 (Au/TiO2) works as a visible-light responsive photocatalyst under excitation of the localized surface plasmon resonance [6]. The Au/TiO2-NPL MC plasmonic photocatalyst was shown to exhibit significantly higher organic pollutant degradation activity than the usual Au/TiO2 particles due to the effective charge separation via anisotropic interparticle electron transfer. The development of 3D-TiO2 MCs was triggered by the study of dye-sensitized solar cells [7]. The cell performance was dramatically enhanced by using a TiO2 nanocrystalline film electrode in which anatase TiO2 nanoparticles (NPs) with 20-30 nm diameter are interconnected randomly to yield a mesoporous structure and a large surface area (3D-type I in Scheme 1). More recently, radial homomesocrystals consisting of rutile TiO2 nanorods (TiO2-NR HOMCs), so-called "sea-urchin-like microspheres" (3D-type II in Scheme 1), have attracted much interest due to features including a high light harvesting ability, owing to multiple light scattering between the reflective rutile TiO2 NRs [8,9], and a large surface area comparable with that of NPs.
These unique nano/micro-sized properties render the radial TiO2-NR HOMCs fascinating as a photocatalyst material. Unexpectedly, their photocatalytic activity for aerobic oxidation of organics still remains low, probably because of the poor ability of rutile TiO2 for the oxygen reduction reaction (ORR) [10,11]. Thus, if radial TiO2-NR HOMCs can be endowed with effective charge separation and electrocatalytic activity for the ORR, the photocatalytic activity would be enhanced, and their applications should be greatly extended. To achieve this, we have recently developed a radial heteromesocrystal photocatalyst consisting of SnO2 (head) and rutile TiO2 nanorods (tail) (TiO2-NR//SnO2 HEMCs, where "//" denotes a heteroepitaxial junction) (3D-type III in Scheme 1) and studied its photocatalytic activity for aerobic oxidation of organics.

This review article describes the synthesis, characterization, and photocatalytic activity of TiO2-NR//SnO2 HEMCs and other recent TiO2-NR HOMCs. The photocatalytic action mechanism of TiO2-NR//SnO2 HEMCs is also discussed by comparison with TiO2-NR HOMCs. Further, the applications of TiO2-NR HOMCs and their analogs to the electrodes for solar cells and lithium-ion batteries are considered. Finally, the conclusions are summarized with possible future subjects.

Synthesis and Characterization

In 2006, Yang and Gao synthesized sea-urchin-like TiO2 nanostructures (~1 µm) (3D-type II in Scheme 1) for the first time using a sol-solvothermal method from a water-benzene solution of TiCl4 and Ti(OBu)4, without using any template or surfactant [12]. Several review papers on the synthesis and characterization of TiO2-NR HOMCs have already been reported [8,9,13]. Here, we explain the method for synthesizing TiO2-NR//SnO2 HEMCs (3D-type III in Scheme 1).

TiO2-NR//SnO2 HEMCs were synthesized using a hydrothermal method in the presence of SnO2 seeds with a particle size of 22-43 nm [14]. HCl (6 M, 30 mL) and Ti(OBu)4 (0.4 mL) were mixed in a reaction vessel made of Teflon (50 mL), and the solution was slowly stirred for 0.5 h at ambient temperature. SnO2 nanocrystals (0.01 g) were dispersed into the mixed solution by ultrasonic irradiation and stirred for 24 h. The reaction vessel was sealed in a stainless-steel cylinder, which was then placed in an oil bath maintained at 150 °C for various reaction times (tHT). Solid products produced after the reaction were collected by centrifugation. The same synthetic procedures were conducted using HNO3 and H2SO4 in place of HCl.
The product morphology is strongly affected by the kind of acid used in the hydrothermal reaction (Figure 1a-c). In the case of HCl, 3D-radial microspheres with a diameter of ~3 µm are produced at tHT = 8 h. The specific surface area of the particles was determined by the Brunauer-Emmett-Teller (BET) method to be 41.1 ± 0.2 m² g⁻¹, which is 1.75 × 10² times larger than the value for a spherical particle with a diameter of 3 µm. In contrast, the use of HNO3 or H2SO4 produces irregularly shaped aggregates. In the XRD pattern of the sample prepared with HCl, there are sharp peaks at 2θ = 27.38°, 36.04°, 41.18°, and 54.28°, indexed as the diffraction from the (110), (101), (111), and (211) crystal planes of rutile TiO2 (ICDD 00-021-1276), respectively, in addition to the peaks at 2θ = 26.5°, 33.78°, and 51.74°, assignable to the diffraction from the (110), (101), and (211) planes of SnO2, respectively (ICDD 01-075-2893) (Figure 1d).

Transmission electron microscopy (TEM) analyses of the particles generated at the early stage of the reaction provide information about the crystal growth mechanism and the state of the junction between SnO2 and TiO2. The TEM image of a particle generated at tHT = 1 h shows a single TiO2 NR grown from a SnO2 seed crystal, with mean lengths of the short axis (~13 nm) and long axis (~90 nm) (Figure 2a). An atomically commensurate junction is formed between the SnO2 seed and the TiO2 NR, whose lattice spacings near the interface are close to the values of the (110) crystal planes. A plausible interface model is proposed based on the analysis of high-resolution (HR)-TEM images (Figure 2b,c), where the TiO2 NR and SnO2 are connected with an orientation of (001)TiO2//(001)SnO2, and the TiO2 NR grows toward the [001] direction. Consequently, anisotropic one-dimensional (1D) TiO2-NR//SnO2 particles are formed during the initial process of the reaction. Density functional theory (DFT) simulations indicated that Cl⁻ ions act as a habit modifier in this reaction, i.e., their preferential adsorption on the oxygen-defect sites of the rutile TiO2(110) plane induces the anisotropic growth of TiO2 in the [001] direction, yielding NRs with {110} side walls [15].
To clarify the orientation of the 1D-TiO2-NR//SnO2 particles in the 3D-microsphere, scanning transmission electron microscopy (STEM)-energy dispersive spectroscopy (EDS) elemental mapping was performed on a particle generated at tHT = 8 h. Many 1D-TiO2-NR//SnO2 particles are self-assembled to form a radial 3D-microsphere (Figure 3a). Ti and O are uniformly distributed over the microsphere (Figure 3b,c). On the contrary, Sn is unevenly present near the center (Figure 3d). Clearly, each 1D-TiO2-NR//SnO2 particle is oriented with the SnO2 head in the central direction of the microsphere.

The formation mechanism of TiO2-NR//SnO2 HEMCs can be explained as follows (Scheme 2). Initially, Ti(OBu)4 undergoes hydrolysis-polycondensation in HCl solution with SnO2 seed nanocrystals, where HCl suppresses the hydrolysis-polycondensation to inhibit homogeneous particle formation [16]. At tHT ≤ 0.5 h, the SnO2 surface is covered by an amorphous TiO2 layer (SnO2@amorphous-TiO2).
At 0.5 h < tHT < 1 h, rutile TiO2 nuclei occur on the SnO2 surface and grow in the [001] direction, with the most stable {110} facets exposed at the side planes. In this process, the adsorption of Cl⁻ ions on the TiO2 {110} planes restricts their growth, inducing anisotropic growth along the [001] direction and yielding 1D-TiO2-NR//SnO2 particles [15]. The self-assembly into the radial 3D-microsphere can arise from the balance between the repulsive and attractive forces between the 1D-TiO2-NR//SnO2 particles. Since the points of zero charge of rutile TiO2 and SnO2 are ~5 and ~3.5, respectively [17], the SnO2 head in the 1D-TiO2-NR//SnO2 particle has a smaller positive surface charge than the TiO2 tail under the strongly acidic conditions. Additionally, the van der Waals attractive force for SnO2 particles would be larger than that for rutile TiO2 particles, since the former has a Hamaker constant of 5.5 × 10⁻²⁰ J, larger than the latter's 4 × 10⁻²⁰ J [18]. Consequently, the smaller repulsive and larger attractive forces between the SnO2 heads of 1D-TiO2-NR//SnO2 particles induce the formation of the radial 3D-microsphere with the heads oriented toward the central direction.
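To see why the difference in Hamaker constants favors head-to-head aggregation, one can use the standard nonretarded van der Waals attraction between two equal spheres in the close-approach (Derjaguin) limit. This formula is textbook colloid science rather than anything derived in [14] or [18], and treating the SnO2 heads as equal spheres separated by a small gap D is our simplifying assumption:

\[
V_{\mathrm{vdW}}(D) \approx -\frac{A\,R}{12\,D} \qquad (R_1 = R_2 = R,\; D \ll R),
\]

so that, at a fixed radius and gap, the ratio of head-head to tail-tail attraction reduces to the ratio of Hamaker constants,

\[
\frac{A_{\mathrm{SnO_2}}}{A_{\mathrm{TiO_2}}} = \frac{5.5 \times 10^{-20}\,\mathrm{J}}{4 \times 10^{-20}\,\mathrm{J}} \approx 1.4,
\]

i.e. a roughly 40% stronger attraction between SnO2 heads, acting in concert with their weaker electrostatic repulsion noted above.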
TiO2-Nanorod Homomesocrystals

The most outstanding feature of the TiO2-NR MCs is their high light harvesting ability due to the multiple light scattering between TiO2 NRs, which should be more effective for rutile TiO2 (refractive index, nE//c = 2.616, nE⊥c = 2.903) than anatase TiO2 (nE//c = 2.554, nE⊥c = 2.493) [19]. The photocatalytic studies of radial rutile TiO2-NR HOMCs (3D-type II in Scheme 1) and their analogs are fewer than expected from this excellent potential. In this section, some of the studies performed over the last decade are introduced.

Zhao and co-workers prepared rutile TiO2-NR HOMCs (1~3 µm) with a specific surface area of ~40 m² g⁻¹ using a solvothermal method [20]. P-25 (specific surface area = ~50 m² g⁻¹, rutile/anatase = 30:70, Evonik) is known to exhibit a high level of photocatalytic activity for various reactions and is used as a benchmark photocatalyst. The photocatalytic activity of the TiO2-NR HOMCs (1~3 µm) and P-25 for degradation of methylene blue (MB) was studied under UV-visible irradiation. The TiO2-NR HOMCs (1~3 µm) show higher photocatalytic activity than P-25, which was ascribable to the efficient light absorption of the former. Further, the photocatalytic activity decreases with an increase in the diameter of the TiO2-NR HOMCs from ~1 to ~3 µm, although the reason is unclear.

The same group prepared Au NP (2-10 nm)- and Ag NP (~20 nm)-loaded rutile TiO2-NR HOMCs (1~2 µm) with a specific surface area of ~40 m² g⁻¹ using a chemical reduction method [20]. The photocatalytic activity for MB degradation was examined under UV-visible irradiation. Unmodified TiO2-NR HOMCs show photocatalytic activity comparable with that of P-25. Further, loading Ag and Au NPs significantly increases the photocatalytic activity, probably because of the enhancement of charge separation due to the interfacial electron transfer from TiO2 to the metal NPs. The authors proposed that visible-light irradiation induces hot-electron transfer from the metal NPs to TiO2 to yield one-electron ORR on rutile TiO2, causing the MB degradation, although no evidence is provided.

Xu, Li, and co-workers prepared rutile TiO2-NR HOMCs (2-3 µm) with a very large specific surface area of 224 m² g⁻¹ using a solvothermal method [21]. The diameter and length of the TiO2 NRs are 5-8 nm and ~0.2 µm, respectively. Under sunlight irradiation of the sample, Cr⁶⁺ ions are reduced to Cr³⁺ ions with a yield of ~100% at concentrations below 53.7 ppm. The removal capacity under irradiation of sunlight for 3 h was reported to reach ~1 g g⁻¹. In this case, it is worth noting that the reduction, with a very positive standard electrode potential (E⁰(Cr2O7²⁻/Cr³⁺) = +1.36 V), is thermodynamically permitted [22].

Next, the studies of anatase TiO2-NR HOMCs are described. Zhang and co-workers synthesized an anatase analog of 3D-rutile TiO2-NR HOMCs for the first time [23]. It is known that anatase TiO2 shows higher photocatalytic activity than rutile TiO2 in most aerobic oxidation reactions [24]. Wang and co-workers reported the large-scale synthesis of urchin-like mesoporous TiO2 hollow spheres (UMTHS) (~0.45 µm) surrounded by single-crystal anatase nanohorns with a diameter of 10-20 nm and a length of 40-60 nm [25]. The synthesized sample, with a large specific surface area of 129 m² g⁻¹ and excellent light harvesting efficiency, exhibits photocatalytic activity for removal of gaseous nitric oxide (NO) superior to that of P-25.
Xu and co-workers prepared hierarchical golden-wattle-like microspheres consisting of rutile TiO2 NRs with a diameter of 40-60 nm and a length of 400-500 nm via a solvothermal method, using a reaction solution of acetone (20 mL) containing titanium n-butoxide (4 mL) and HCl (x mL, 36-38 wt%) [26]. The HCl concentration in the reaction solution plays an important role in controlling the size and morphology of the products. The photocatalytic activities of the samples and P-25 for the degradation of phenol were assessed under UV-light irradiation (λex = 365 nm). The photocatalytic activity strongly depends on x, and the sample prepared at x = 2 mL exhibits photocatalytic activity comparable to that of P-25. The authors attributed the high photocatalytic activity of the sample to the suppression of recombination by smooth electron transport through the 1D-TiO2 NRs with high crystallinity, the efficient light harvesting ability, and the large surface area, i.e., the large number of adsorption sites for phenol.

Li, Liu, Wang, and co-workers synthesized ultrathin nanobelt-assembled urchin-like anatase TiO2 nanostructures (~0.25 µm) with a large specific surface area of 171 m² g⁻¹ using a one-step hydrothermal route [27]. The length of the nanobelts is several µm, and the width and thickness are in the ranges of 50 to 100 and 23 to 30 nm, respectively. The sample was shown to exhibit photocatalytic activities significantly higher than those of commercial anatase TiO2 NPs and P-25 for the degradation of methyl orange and phenol under irradiation of UV-light (λex = 365 nm).

Einaga and co-workers partially reduced sea-urchin-like TiO2 microspheres (~0.25 µm) consisting of anatase TiO2 NRs by heating at various temperatures (Tc) under vacuum [28]. The photocatalytic activity of the samples for benzene degradation depends on Tc, with a maximum at Tc = 250 °C. The optimal sample shows significantly higher activity than P-25 for the decomposition of benzene to CO2, with good stability.

Photoluminescence (PL) spectra of the TiO2-NR//SnO2 HEMC, the TiO2-NR HOMC, and, for comparison, rutile TiO2 NPs were measured (Figure 4b) to evaluate the relative charge separation efficiency in the photocatalytic process [36]. Rutile TiO2 NPs have three signals, around 410 (B1), 520 (B2), and 800 nm (B3), assigned to the interband emission and the emissions by recombination at shallow [37] and deep vacancy sites [38], respectively. In the spectra of TiO2-NR//SnO2 HEMCs and TiO2-NR HOMCs, the B2 band almost disappears. It is also worth noting that the emission intensity of TiO2-NR//SnO2 HEMCs is weaker than that of TiO2-NR HOMCs. Thus, the charge separation is suggested to occur more effectively in the TiO2-NR//SnO2 HEMC system than in the TiO2 NP and TiO2-NR HOMC systems, through the interfacial electron transfer from TiO2 to SnO2 [39].

Photocatalytic Action Mechanism

The key to boosting the photocatalytic activity of the radial rutile TiO2-NR MCs for aerobic oxidation of organics is endowing them with electrocatalytic activity for multi-electron ORR, in addition to enhancing charge separation [10,11,40]. Photocatalytic two-electron ORR has received much attention as a "green" route for producing H2O2 [41], and so far, highly active electrocatalysts such as Au NPs [42][43][44] and Pd NPs [45] have been reported.
The development of electrocatalysts for the four-electron ORR is a major challenge in proton exchange membrane fuel cells, and Pt-based catalysts have mainly been studied [46]. Abe, Ohtani, and co-workers previously showed that loading Pt NPs on WO3 drastically increases the photocatalytic activity for aerobic oxidative decomposition of organics [47]. Recently, some metal oxides such as SnO2 [48] and CoFe2O4 [49] have been shown to exhibit electrocatalytic activity for multi-electron ORR, and consequently, coupling them with TiO2 can increase the photocatalytic activity for aerobic oxidation of organics.

The striking photocatalytic activity of TiO2-NR//SnO2 HEMCs (3D-type III in Scheme 1) for the partial oxidation of ethanol can stem from the following features (Scheme 3). Firstly, the radial TiO2 NRs of several microns in length enable efficient light absorption due to the multiple light scattering between the highly reflective rutile TiO2 NRs, exciting electrons from the valence band (VB) to the conduction band (CB). Secondly, the CB electrons in the TiO2 NRs are transported in the [001] direction, the direction of highest conductivity [50], and effectively transferred to the CB of SnO2 through the high-quality interface [39]. Thirdly, the heteroepitaxial junction-induced CB band bending in SnO2 enhances the charge separation [39]. The CB-edge potentials of rutile TiO2 [51] and FTO (or SnO2) [52] are located around +0.11 and +0.48 V (vs. SHE at pH 0), respectively. Fourthly, the electrons collected in SnO2 can induce the two-electron ORR (E⁰(O2/H2O2) = +0.695 V) owing to its electrocatalytic activity [48], whereas the one-electron ORR (E⁰(O2/O2⁻) = −0.33 V) [22] is thermodynamically difficult on TiO2 NRs. Thus, TiO2-NR//SnO2 HEMCs exhibit much higher photocatalytic activity than TiO2-NR HOMCs [32].
Fifthly, the reaction field of the oxidation by the VB holes in rutile TiO2 is limited to the near-surface region [53], and the adsorptivity of rutile TiO2 for acetaldehyde is weak [54]. Consequently, the over-oxidation of acetaldehyde can be effectively inhibited.
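The band-energy argument in the third and fourth points above amounts to a sign check on the electron-transfer driving force. Writing it out with the potentials quoted in the text (treating the CB edge as the effective donor level is our simplification):

\[
\Delta G = -nF\left(E^{0}_{\mathrm{acceptor}} - E_{\mathrm{CB}}\right),
\]

so for electrons collected at the SnO2 CB edge,

\[
E^{0}(\mathrm{O_2/H_2O_2}) = +0.695\,\mathrm{V} > E_{\mathrm{CB}}(\mathrm{SnO_2}) \approx +0.48\,\mathrm{V} \;\Rightarrow\; \Delta G < 0,
\]

i.e. the two-electron ORR is thermodynamically downhill, whereas on the bare TiO2 NRs,

\[
E^{0}(\mathrm{O_2/O_2^{-}}) = -0.33\,\mathrm{V} < E_{\mathrm{CB}}(\mathrm{TiO_2}) \approx +0.11\,\mathrm{V} \;\Rightarrow\; \Delta G > 0,
\]

so the one-electron pathway is blocked, as stated above.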
Electrochemical Applications

Besides photocatalysts, the radial TiO2-NR HOMCs (3D-type II in Scheme 1) and their analogs can be suitably applied to the electrodes of solar cells and lithium-ion batteries by taking advantage of their unique geometrical, optical, and electrochemical properties. In this section, some of the studies reported over the last decade are described.

Solar Cells

Jang and co-workers synthesized radial TiO2 HOMCs with a size of 4-7 µm, consisting of rutile TiO2 NRs with a diameter of ~50 nm and a length of a few micrometers, using a simple solvothermal route [55]. A CdS/CdSe/ZnS quantum dot-sensitized solar cell was constructed using a base electrode including the rutile TiO2 HOMCs and anatase TiO2 NPs with a diameter of ~20 nm. The solar cell provided a conversion efficiency of 4.2%, with a short-circuit photocurrent of 18.2 mA cm⁻² and an open-circuit voltage of 531 mV, while the conversion efficiency of the reference cell using the TiO2 NP electrode without rutile TiO2 HOMCs was 3.5%. The superior performance of the cell made of the hybrid photoanode of rutile TiO2-NR HOMCs and anatase TiO2 NPs was ascribable to the high light harvesting and charge collection properties of the rutile TiO2-NR HOMCs.

Wang and co-workers fabricated a dye(Z907)-sensitized solar cell using UMTHS as an active layer of the photoanode with a Co(bpy)3³⁺/²⁺ electrolyte [25]. The solar cell provided an impressive power conversion efficiency of 5.5% under one-sun irradiation (AM 1.5). The excellent cell performance was ascribable to the large surface area and high light-scattering property of UMTHS.

Zhou and co-workers synthesized flower-like and sea-urchin-like TiO2-NR HOMCs using a solvothermal method [56]. The geometrical effect of the TiO2 photoanode on the performance of a dye(N719)-sensitized solar cell was studied. The conversion efficiency increases in the order of sphere-like (0.82%) < flower-like (3.61%) < sea-urchin-like (8.04%). The authors suggested that the superior performance of the cell with the sea-urchin-like TiO2 photoanode partly results from the 1D channel of electron transport in the rutile TiO2 NRs enhancing the charge separation.

Peng and co-workers formed a film consisting of rutile TiO2-NR HOMCs with a diameter of 5-6 µm on Ti foil (TiO2-NR MC/Ti) using a hydrothermal method [57]. A quasi-solid-state dye(N719)-sensitized solar cell was fabricated, and a high conversion efficiency of 7.27% was achieved by using the film as an underlayer of a nanosized anatase TiO2 film. A similar effect was obtained in a dye(N719)-sensitized solar cell using the TiO2-NR MC/Ti covered with anatase nanotubes as a photoanode [58]. Interestingly, in these systems, the combination of 3D-type I and 3D-type II in Scheme 1 remarkably enhances the cell performance due to their large surface area and effective light-scattering property.

Lithium-Ion Batteries

Archer, Lou, and co-workers synthesized TiO2 nanosheet hierarchical spheres (TiO2 NSHSs) with an average size of ~1 µm using a hydrothermal method [59]. The TiO2 NSHSs, consisting of (001)-faceted anatase TiO2 nanosheets with a thickness of ~3 nm and a size of 100-300 nm, have a mesoporous structure with a very large specific surface area of 170 m² g⁻¹. Consequently, TiO2 NSHSs manifest an unusually high Coulombic efficiency for lithium extraction, excellent capacity retention of over 175 mA h g⁻¹ even at 100 charge-discharge cycles, and a superior rate of insertion-release in batteries.

Han and co-workers prepared radial rutile TiO2 HOMCs using a hydrothermal method; the geometry was maintained after annealing at 300 °C, with a specific surface area of 115.3 m² g⁻¹ and a pore size of 2.26 nm [60]. The electrochemical properties were examined in a 1 M LiPF6 electrolyte solution of ethylene carbonate and dimethyl carbonate (1:1 v/v) for application to lithium-ion batteries. The radial rutile TiO2 HOMC electrode shows outstanding energy storage behavior, with a high capacity of 457 mA h g⁻¹ at the first discharge cycle, reversible and high-rate charge-discharge capability, high rate performance, and good cycling stability.
Conclusions and Future Subjects

This review article highlights the synthesis, characterization, and photocatalytic activity of TiO2-NR//SnO2 HEMCs, which surpass TiO2-NR HOMCs in photocatalytic activity. The striking photocatalytic activity of TiO2-NR//SnO2 HEMCs can stem from the following features: (1) incident light is efficiently absorbed owing to the multiple light scattering between TiO2 NRs; (2) smooth 1D electron transport occurs along the [001] direction due to the large electron mobility; (3) efficient interfacial electron transfer from TiO2 to SnO2 can occur through the high-quality interface; (4) effective charge separation can be achieved by the heteroepitaxial junction-induced CB-potential gradient in SnO2; and (5) the electrocatalytic activity of SnO2 for multi-electron ORR can complete the catalytic cycle, because the holes in the VB of TiO2 have strong oxidizing ability. Consequently, the development from HOMCs to HEMCs can extend the applications of the TiO2-NR-based MC photocatalysts to various chemical transformations.

In the meantime, TiO2-NR-based MCs possess various features including large charge capacity, excellent electron-transport and electron-collecting properties, and robustness, in addition to efficient light harvesting ability. As a result, TiO2-NR-based MCs can also be very promising electrode materials for solar cells and lithium-ion batteries. Research to improve cell performance by optimizing the geometry and physicochemical properties of TiO2-NR-based MCs is ongoing.

There are two important subjects for the TiO2-NR-based MCs. The first relates to sample production. In the hydrothermal synthesis of TiO2-NR//SnO2 HEMCs, the yield remains <50% even at tHT = 8 h. The development of techniques enabling the synthesis of TiO2-NR MCs in a shorter reaction time and with a higher yield would accelerate research and raise feasibility. The second concerns the application to photocatalysts. Since rutile TiO2-NR-based MCs, with an absorption edge of ~410 nm, mainly absorb UV-light, which occupies only 3% of solar energy, endowing them with visible-light responsiveness while simultaneously enhancing their UV-light activity is crucial for applications to efficient solar-to-chemical transformations. So far, studies on the visible-light activation of TiO2-NR MCs are limited. Rodriguez and co-workers have recently reported the visible-light activation of TiO2-NR MCs by Ru-doping for H2 generation from aqueous methanol solution [61]. Finally, it has recently been revealed that heteroepitaxial junctions that are impossible in bulk systems, due to significant lattice mismatch, can be formed in nanohybrid systems [62]. We anticipate that this new approach of interface control at the atomic level can contribute widely to enhancing the performance of nanohybrids as photocatalysts and other functional materials.
Perspective: lessons learned from the COVID-19 pandemic concerning the resilience of the population

Background: A vital stakeholder in the successful management of the COVID-19 pandemic is the public. The degree of involvement of the population in managing the pandemic, and the leadership's perception of the public, had a direct impact on the resilience of the population and the level of adherence to the issued protective measures.

Main body: Resilience refers to the ability to 'bounce back' or 'bounce forward' following adversity. Resilience facilitates community engagement, which is a crucial component of combating the COVID-19 pandemic. The article highlights six insights recognized in studies conducted in Israel during and following the pandemic concerning the resilience of the country's population. (1) Contrary to varied adversities in which the community serves as an important support system to individuals, this type of support was substantially impaired during the COVID-19 pandemic, due to the need to maintain isolation, social distancing, and lockdowns. (2) Policy-making during the pandemic should be based on evidence-based data, rather than on assumptions made by decision-makers. This gap led the authorities during the pandemic to adopt measures that were ineffective, such as risk communication based on 'scare tactics' concerning the virus, when the highest risk perceived by the public was political instability. (3) Societal resilience is associated with the public's behavior, such as vaccine hesitancy and uptake. (4) Factors that affect the levels of resilience include, among others, self-efficacy (which impacts individual resilience); social, institutional, and economic aspects as well as well-being (which impact community resilience); and hope and trust in the leadership (which impact societal resilience). (5) The public should be perceived as an asset in managing the pandemic, thus becoming a vital part of the 'solution'. This will lead to a better understanding of the needs and expectations of the population and an applicable 'tailoring' of the messages that address the public. (6) The gap between science and policymaking must be bridged, to achieve optimal management of the pandemic.

Conclusions: Improving preparedness for future pandemics should be based on a holistic view of all stakeholders, including the public as a valued partner, connectivity between policymakers and scientists, and strengthening of the public's resilience by enhancing trust in authorities.

Background

The COVID-19 pandemic impacted most aspects of life in communities worldwide, including health, economic, security, societal, and additional facets. Understanding the consequences and impacts not only of the virus itself but also of the management of the pandemic is vital to ensure optimal preparedness and response to the next waves of this pandemic as well as to other emerging or potential communicable diseases. Diverse stakeholders were involved in the management of the pandemic, at both national and international levels, including governance systems, healthcare organizations, economic entities, service providers, pharmaceutical companies, and more. A vital stakeholder in the successful management of the pandemic, though often not recognized as such, is the public.
The degree of involvement of the population in designing and implementing the strategies for managing the pandemic, and the perception of the public by each country's leadership, had a direct impact on the resilience of the population as well as on their level of adherence to each of the directives that were issued by the governance systems [27]. Engagement of the public in policy-making during the COVID-19 pandemic has been achieved through varied modes, such as virtual seminars, meetings, academic studies, interviews, or social media [3,20].

What is 'resilience' of the population and why is it a vital component of managing pandemics?

Numerous definitions have been applied to the concept of 'resilience' [30], including "the process of effectively negotiating, adapting to, or managing significant sources of stress or trauma" [35], "healthy, adaptive, or integrated positive functioning over time in the aftermath of adversities" [29], or "a dynamic capability which can allow people to thrive on challenges given appropriate social and personal contexts" [16]. Despite the vast diversity of definitions, most consist of three common components: an occurrence of an emergency or dangerous conditions, buffered by protective elements, that leads to a better outcome than anticipated under such circumstances [30]. While in earlier definitions resilience was believed to be the ability to 'bounce back' following adversity, current descriptions refer to the capacity to 'bounce forward' after challenging events, resulting in post-traumatic growth rather than distress [7].

Resilience of the population is classified most frequently into three categories: individual, community, and societal resilience. Individual resilience is "the capacity of its members to work together on communal solutions to shocks or adverse circumstances" [4]. Community resilience is "a process linking a set of adaptive capacities to a positive trajectory of functioning and adaptation" [25]. Societal resilience is the ability of groups or societies to "cope with external stresses and disturbances as a result of social, political and environmental change" [1]. Resilience is perceived by most researchers as a dynamic construct, as it may fluctuate among individuals, communities, or societies, over time and circumstances, according to the accessibility and use of assets (competencies or coping capabilities) and resources (protective mechanisms) [13,26].

During the COVID-19 pandemic, access to both assets and resources was frequently compromised, due to the measures that were decreed by the authorities, such as the regulation to maintain a complete lockdown for prolonged periods. Even the less "extreme" measures, such as the need to maintain social distancing, wear masks, or isolate at home, affected the ability of individuals to seek support from significant others. Resilience of the population is an important component of managing pandemics, as it facilitates the response to difficult challenges and enables adaptation to the changing situation. It enables people, communities, and societies to draw on inner strengths, maintain hope in the face of difficulty, and remain optimistic about the future. Resilience of the population helps people cope with the effects of the pandemic, from social distancing restrictions to economic hardship. It can also reduce feelings of fear and anxiety and help people stay focused on the present instead of projecting their worries into the future.
Resilience is not the solution to the pandemic, but it may be a powerful tool to help promote better health outcomes, develop better support systems, and withstand the challenges encountered in all facets of life. Furthermore, resilience is a positive tool for achieving community engagement [28], which is a crucial component of combating the COVID-19 pandemic. It is vital to identify the lessons that can be learned from the current COVID-19 pandemic. This article focuses on six insights that were recognized during and following the varied waves of the pandemic concerning the resilience of the Israeli population, which should be considered by the different governance systems when planning the response to future pandemics.

Insight 1: The community is an important support system during adversities, but 'disappears' during pandemics

Following varied adversities, it has been extensively demonstrated that the community serves as an essential support system. Following adversities that resulted from both nature-induced events (such as earthquakes, floods, or storms) and human-made conflicts (such as the conflicts or wars in Ukraine or Azerbaijan), it was found that resilience stems mainly from the formal and informal support that the community provides its residents, enhancing their identity, perceived well-being, hope, and a sense of fellowship and mutual support [2]. This crucial element of community support was substantially impaired during the management of the COVID-19 pandemic, due to the need for individuals to isolate and distance themselves from the people around them. This frequently led to a lack of support, as physical and emotional connections were broken, and individuals could not physically interact with others in their communities [9,36]. Furthermore, those who contracted COVID often felt social exclusion, as others feared any contact with them, even after they recovered from the virus [19]. This 'stigmatization' was shown to be problematic even beyond the lack of social support, as it led some individuals to avoid testing for COVID or refuse any treatment because they were concerned that it would result in their being shunned from society [6,11,12]. Therefore, it is important to find alternative ways of connecting with people during pandemics, such as virtual gatherings, phone calls, online discussions, or other forms of virtual support. Maintaining social connectivity, even remotely, is key to finding comfort in adversity and supporting all individuals, while respecting the required protective measures.

Insight 2: Don't assume! Look for evidence-based data

When making decisions regarding the response to the pandemic, it is important not to base the response plan on assumptions, but rather on confirmed and updated data. The COVID-19 pandemic has shown that even when assumptions seemed to rest on solid and well-founded beliefs, the reality that materialized was different. Following are just a few examples that were found in studies conducted among the Israeli population during the pandemic. Throughout the pandemic, the messages relayed to the public were mostly focused on the assumption that the population is concerned with the virus and its potential consequences on health status. In contrast, longitudinal studies have shown that the highest concern of the population throughout the pandemic was the political instability that characterized Israeli society, rather than the health threat [10,24].
Another misconception was based on the notion that, as the elderly population was considered (and accurately so) the most vulnerable to the virus (along with other populations with special needs), it is also the least resilient sector in society. Conversely, it was shown that older age (≥ 61 years) is associated with lower levels of anxiety and stress, as well as a decreased level of perceived danger, compared to younger populations [21]. The age group that showed the highest levels of distress symptoms, and lower levels of community and societal resilience, was the younger population (aged 31-40 years). Considering that this population is an important pillar of society (as many of them are in mid-career, have children in the school system, etc.), this should inform policymakers when designing measures to combat the pandemic. Another phenomenon to be noted is the comparison of the mean levels of resilience among students in academic institutions to those of the general public. While many decision-makers believed that students are 'less impacted' by the pandemic, studies showed that the mean level of individual resilience among students is lower compared to that of the general population, while their level of distress is higher [10]. Relying on knowledge concerning the characteristics and consequences of the pandemic is instrumental in developing an effective plan of action that contributes to containing and preventing pandemics and/or their outcomes. Therefore, when planning a response to a pandemic, it is important to look for evidence-based data concerning the needs, expectations, and attitudes of the public, which will assist in making informed decisions and facilitate the design of appropriate and relevant response mechanisms.

Insight 3: Societal resilience is associated with behavior

Societal resilience describes a society's ability to withstand and adapt to various challenges, including the COVID-19 pandemic [34]. Behaviors associated with societal resilience include community organization, communication, collaboration, risk assessment and management, anticipating potential risks, and developing or utilizing existing resources. No less important, this type of resilience was found to be associated with attaining the adherence of the population to the measures recommended by the governance systems as part of the campaign to manage the pandemic and resume the full functionality of Israeli society [5,23]. This was well demonstrated during the ongoing campaigns that were launched to encourage the population to be vaccinated against COVID-19. Studies have shown that societal resilience is negatively associated with vaccine hesitancy (the higher the societal resilience, the lower the vaccine hesitancy) and positively correlated with vaccine uptake (the higher the societal resilience, the higher the vaccine uptake, i.e., the number of vaccines and/or boosters that were taken) [23]. Trust has been presented as a major component of societal resilience, and it impacts the willingness of individuals to adhere to guidance and directives of the varied authorities. Investing efforts in enhancing the trust of the population in its leadership, as well as in strengthening the social integration of all sectors of society, can substantially impact public attitudes and influence their compliance with measures that are recommended by the State's leadership [11,12].
Insight 4: Factors that impact the levels of resilience

The levels of resilience depend on several factors that can vary from person to person. These factors can have a significant impact on the resilience of an individual, organization, or system. The COVID-19 pandemic presented a unique opportunity to identify the varied factors that impact the resilience of individuals, communities, and society at large. An important insight that should be emphasized is that the levels of resilience are not constant and may vary according to the changed conditions (levels of infectivity, political governance systems, concurrent emergencies, etc.), the duration of the pandemic itself, as well as the containment measures that are adopted (such as, for example, the duration of each lockdown) [21]. The first factor that can affect levels of individual resilience is the level of self-efficacy. Self-efficacy, or the belief in one's ability to handle adversity and act, is an important determinant of resilience. Those who have a higher level of self-efficacy are more likely to be more resilient in the face of the challenging COVID-19 situation [37]. Community resilience during COVID-19 was impacted by the following five main aspects [31]:

1. Social aspects, including a joint identity, social integrity, and effective communication.
2. Institutional aspects, including effective leadership and planning systems.
3. Endurable economic capacities.
4. Functional and accessible vital services.
5. Well-being and quality of life.

The major factors that were found to impact societal resilience included hope, trust in the country's leadership, and the social power of finding meaning in life [22,33].

Insight 5: The public should not be perceived as the 'problem' but rather as the 'solution'

The COVID-19 pandemic has been a highly stressful period for humanity in communities worldwide. In response, many governments implemented a variety of social distancing, contact tracing, and other measures to help protect public health. Unfortunately, rather than looking at the general public as an asset in solving this problem, many governments chose to blame them for potentially worsening the situation, criticizing their non-adherence to the protective measures that were issued, such as being vaccinated, maintaining social distancing, or strictly adhering to lockdowns [8]. The reality is that the public can provide vital solutions to the great challenge COVID presents. People are the most important asset when it comes to slowing the spread of the virus, and their compliance is essential for containing outbreaks and keeping infection rates low. By strictly complying with guidelines from trusted authorities, such as wearing face masks, reducing social contact, or staying at home during isolation, the public can become the most important part of combating the pandemic [32]. To attain such a partnership with the public, it should be perceived as an important stakeholder in fighting the pandemic, rather than a 'problem' that should be dealt with. This alternative perception of the public, as a key link in the chain of managing any pandemic, will lead to a better understanding of the needs and expectations of the population and a more applicable 'tailoring' of the messages and guidelines that address the public. The risk communication strategy that was frequently adopted in Israel well reflects this need.
For prolonged periods, the messages relayed to the public focused on the risk the virus poses to the life and health of all individuals, and 'scare tactics' were used to encourage people to adhere to the protective measures that were relayed by the authorities [15]. Threats of severe fines, the use of the police to enforce adherence to regulations, and the implementation of a "green tag" to differentiate between vaccinated and non-vaccinated people are just some of the measures that were implemented to increase compliance of the population. In contrast, studies have shown that such measures did not substantially increase the adherence of the public to these measures [18]. The elements that were reported as effective in achieving such compliance were rather the understanding that adoption of such measures would safeguard the health and well-being of loved ones or oneself [14]. Perceiving the population as an integral component in designing the appropriate response to the pandemic, and adopting and presenting positive and empowering actions that are based on solidarity and mutual responsibility, rather than efforts to instill fear or other deterring measures, proved to be more instrumental in gaining the cooperation and compliance of the public [18]. Recruiting the public's compliance is dependent on the transparency of actions, sharing information and insights, enhancing trust in the governmental entities and leadership, and addressing the public's concerns.

Insight 6: The gap between science and policymaking must be bridged

The substantial effects of the COVID-19 pandemic have demonstrated that science and policymaking must work together to ensure the appropriate management of future waves of the current pandemic as well as of other emerging pandemics. Too often, the gap between scientific facts and public beliefs or policies does not allow for addressing global health issues effectively. To bridge this gap, closer collaboration between scientists, public health officials, and policymakers is needed. First and foremost, public policies need to be informed by scientific research and built on evidence-based data. In times of crisis, policymakers must have access to the most updated scientific findings and evidence-based approaches. Pandemics frequently elicit diverse conspiracy theories concerning their causes and consequences, and scientific findings can be used to refute those that are not based on any founded data [11]. The COVID pandemic highlighted several issues concerning the bridge between science and policymakers. One issue is that many decision-makers were slow to act on scientific advice regarding the perceptions and concerns of the public regarding the pandemic. For example, the assumption in Israel [10] was that the public is mostly concerned with health issues rather than political or economic consequences. This resulted in a focus on the potential damage that could be caused by the virus, leading to a decline in the public's trust in the leadership and a consecutive decrease in adherence to guidelines issued by the authorities. There was also a lack of public trust in both scientists and politicians, leading to further confusion and mistrust [17]. Furthermore, there was an inadequate flow of information between scientific communities and policymakers, making it difficult for both groups to make informed decisions.
Science offers a wealth of knowledge about the consequences of the pandemic itself and of each strategy adopted in the effort to manage it successfully, but without proper policy action, this knowledge cannot have a real impact. At the same time, policymaking that is not based on recognition and an in-depth understanding of the needs, expectations, and motivations of the public cannot be effective, and its objectives are thus not met. The lack of an effective bridge between scientific research and governmental decision-making has caused a schism between the two, leading to suboptimal management of the pandemic. Bridging this gap is not an impossible feat, as there are a few steps that could be taken by both sides to formulate and ensure effective collaboration. The first step in bridging this divide is for scientists to communicate their findings more effectively, publishing them not only in scientific journals but also in reports and publications that are open and accessible to both policymakers and the public at large. At times, there may be disagreements or controversies among scientists, but these too should be shared with varied audiences, despite the confusion or frustration that such inconsistencies may cause. Concurrently, policymakers should remain aware of, stay updated on, and actively seek scientific findings that will enable them to understand phenomena and trends that characterize and concern the population, and accordingly consider these elements in their decision-making and policymaking.
Conclusions
The COVID-19 pandemic has provided a unique opportunity to learn lessons concerning the resilience of the population, its impact on the effectiveness of the management strategies adopted by decision- and policymakers, and ways to improve coping with future pandemics. A major change that is recommended is a holistic view of all the stakeholders that should be involved in the management of adversities, including the recruitment of the public as a valued partner, rather than as a 'problem' that must be contended with. Enhancing the connectivity between policymakers and scientists, and basing the decision-making process on evidence-based data rather than on untested assumptions, is expected to substantially improve the adherence of the population to governmental guidelines and containment measures. Enhancing the resilience of the population, by strengthening trust in the authorities and governance systems, is vital to the successful management of all types of adversities, among them future pandemics.
Pushed beyond the brink: Allee effects, environmental stochasticity, and extinction To understand the interplay between environmental stochasticity and Allee effects, we analyse persistence, asymptotic extinction, and conditional persistence for stochastic difference equations. Our analysis reveals that persistence requires that the geometric mean of fitness at low densities is greater than one. When this geometric mean is less than one, asymptotic extinction occurs with high probability for low initial population densities. Additionally, if the population only experiences positive density-dependent feedbacks, conditional persistence occurs provided the geometric mean of fitness at high population densities is greater than one. However, if the population experiences both positive and negative density-dependent feedbacks, conditional persistence only occurs if environmental fluctuations are sufficiently small. We illustrate, counter-intuitively, that environmental fluctuations can increase the probability of persistence when populations are initially at low densities, and can cause asymptotic extinction of populations experiencing intermediate predation rates despite conditional persistence occurring at higher predation rates. Introduction Populations exhibit an Allee effect when at low densities individual fitness increases with density [2,43]. Common causes of this positive density-dependent feedback include predator-saturation, cooperative predation, increased availability of mates, and conspecific enhancement of reproduction [7,8,18,19,29,43]. When an Allee effect is sufficiently strong, it can result in a critical density below which a population is driven rapidly to extinction through this positive feedback. Consequently, the importance of the Allee effect has been widely recognized for conservation of at-risk populations [4,8,11,42] and management of invasive species [27,31,44]. Populations experiencing environmental stochasticity and a strong Allee effect are widely believed to be especially vulnerable to extinction, as the fluctuations may drive their densities below the critical threshold [4,7,8,12]. However, unlike the deterministic case [10,13,14,22,24-26,34,37,48], the mathematical theory for populations simultaneously experiencing an Allee effect and environmental stochasticity is woefully underdeveloped (see, however, Dennis [12]). To better understand the interplay between Allee effects and environmental stochasticity, we examine stochastic, single-species models of the form
X_{t+1} = X_t f(X_t, ξ_t), (1)
where X_t ∈ [0, ∞) is the density of the population at time t, f(x, ξ) is the fitness of the population as a function of its density and the environmental state ξ, and the environmental fluctuations ξ_t are given by a sequence of independent and identically distributed (i.i.d.) random variables. Here we determine when these deterministic and stochastic forces result in unconditional stochastic persistence (i.e. the population tends to stay away from extinction for all positive initial conditions with probability one), unconditional extinction (i.e. the population tends asymptotically to extinction with probability one for all initial conditions), and conditional stochastic persistence (i.e. the population persists with positive probability for some initial conditions and goes extinct with positive probability for some, possibly the same, initial conditions). Section 2 describes our standing assumptions.
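As a concrete illustration of Equation (1), the following minimal Python sketch iterates the model under illustrative choices: the fitness function combines mate limitation with a Ricker-type term, and the fluctuating parameter is drawn i.i.d. normal. None of these choices is prescribed by the analysis; they merely make the recursion tangible.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(f, x0, T, draw_xi):
    """Iterate X_{t+1} = X_t * f(X_t, xi_t) for T steps from X_0 = x0."""
    x = np.empty(T + 1)
    x[0] = x0
    for t in range(T):
        x[t + 1] = x[t] * f(x[t], draw_xi())
    return x

# Illustrative fitness: mate limitation (Allee effect) times a
# Ricker-type term; r is the fluctuating environmental state.
def fitness(x, r, h=0.2, a=1.0):
    return (x / (h + x)) * np.exp(r - a * x)

traj = simulate(fitness, x0=0.5, T=200, draw_xi=lambda: rng.normal(1.0, 0.3))
print(traj[-5:])
```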
Section 3 examines separately how negative density dependence and positive density dependence interact with environmental stochasticity to determine these different outcomes. For models with negative density dependence (i.e. f(x, ξ) is a decreasing function of density x), Schreiber [40] proved that, generically, these models can only exhibit unconditional persistence or unconditional extinction. For models with only positive density dependence (i.e. f(x, ξ) is an increasing function of density x), we prove that all three dynamics (unconditional persistence, unconditional extinction, and conditional persistence) are possible and provide sufficient and necessary conditions for these outcomes. Section 4 examines the combined effects of negative- and positive-density dependence on these stochastic models. We prove that conditional persistence only occurs when the environmental noise is 'sufficiently' small. Throughout all of the sections, we illustrate the main results using models for mate-limitation and predator-saturation. Section 5 concludes with a discussion of the implications of our results, how these results relate to prior results, and future challenges. Models, assumptions, and definitions Throughout this paper, we study stochastic difference equations of the form given by Equation (1). For these equations, we make two standing assumptions. Uncorrelated environmental fluctuations: {ξ_t}_{t=0}^∞ is a sequence of independent and identically distributed (i.i.d.) random variables taking values in a separable metric space E (such as R^n). Fitness depends continuously on population and environmental state: the fitness function f : R_+ × E → R_+ is continuous on the product of the non-negative half line R_+ = [0, ∞) and the environmental state space E. The first assumption implies that (X_t)_{t≥0} is a Markov chain on the population state space R_+. While we suspect our results hold true without this assumption, the method of proof becomes more difficult and will be considered elsewhere. The second assumption holds for most population models. Our analysis examines conditions for asymptotic extinction (i.e. lim_{t→∞} X_t = 0) occurring with positive probability and for persistence (a tendency for populations to stay away from extinction) occurring with positive probability. Several of our results make use of the empirical measures for the process (X_t)_{t≥0},
Π_t = (1/t) Σ_{s=1}^t δ_{X_s},
where δ_x denotes a Dirac measure at the point x, i.e. δ_x(A) = 1 if x ∈ A and 0 otherwise. For any interval [a, b] of population densities, Π_t([a, b]) is the fraction of time that the population spends in this interval until time t. The long-term frequency with which (X_t)_{t≥0} enters the interval [a, b] is given by lim_{t→∞} Π_t([a, b]), provided the limit exists. As these empirical measures depend on the stochastic trajectory, they are random probability measures. Results for negative-density dependence For models with only negative density dependence (i.e. fitness f is a decreasing function of density), Schreiber [40] proved that the dynamics of the model (1) exhibit one of three possible behaviours: asymptotic extinction with probability one, unbounded population growth with probability one, or stochastic persistence and boundedness with probability one. Closely related results have been proven by Chesson [5], Ellner [15], Gyllenberg et al. [21], Fagerholm and Högnäs [16] and Vellekoop and Högnäs [46]. Prior to stating this result, recall that log^+ x = max{log x, 0}.
In the case of stochastic persistence, Theorem 3.1 implies that the typical trajectory spends most of its time in a sufficiently large compact interval excluding the extinction state 0. To illustrate Theorem 3.1, we apply it to stochastic versions of the Ricker and Beverton-Holt models. For the stochastic Ricker model, the fitness function is f(x, ξ) = exp(r − ax), where ξ = (r, a). Stochasticity in r_t and a_t may be achieved by allowing r_t to be a sequence of i.i.d. normal random variables and a_t a sequence of i.i.d. positive random variables. Since E[log f(0, ξ_t)] = E[r_t], Theorem 3.1 implies stochastic persistence when E[r_t] > 0, while if E[r_t] < 0, then asymptotic extinction occurs with probability one. Results for positive-density dependence In contrast to models with only negative-density dependence, models with only positive-density dependence exhibit a different trichotomy of dynamical behaviours: asymptotic extinction for all initial conditions, unbounded population growth for all positive initial conditions, or conditional persistence in which there is a positive probability of the population going asymptotically extinct for some initial conditions and a positive probability of unbounded population growth for some, possibly the same, initial conditions. To characterize this trichotomy, we say {0, ∞} is accessible from the set B ⊂ (0, ∞) if for any M > 0, there exists γ > 0 such that P_x[∃ t ≥ 0 : X_t ≥ M or X_t ≤ 1/M] ≥ γ for all x ∈ B. Moreover, if {0, ∞} is accessible, then P_x[lim_{t→∞} X_t ∈ {0, ∞}] = 1 for all x ∈ (0, ∞). To illustrate Theorem 3.2, we apply it to stochastic versions of models accounting for mate-limitation and predator-saturation. For many sexually reproducing organisms, finding mates becomes more difficult at low densities. For instance, pollination of plants by animal vectors becomes less effective when patches become too small because lower densities result in reduced visitation rates by pollinators [20]. Alternatively, fertilization by free-spawning gametes of benthic invertebrates can become insufficient at low densities [28,32]. To model mate-limitation, let x be the density of females in the population. Assuming a 50-50 sex ratio (i.e. x also equals the density of males in the population), Dennis [11], McCarthy [35] and Scheuring [36] modelled the probability of a female finding a mate by the function x/(h + x), where h is a half-saturation constant, i.e. the male density at which 50% of the females find a mate. If λ is the number of daughters produced per mated female, then the fitness function is f(x, ξ) = λx/(h + x). Since fitness vanishes at low densities, there is asymptotic extinction for some initial conditions with positive probability, and Theorem 3.2 implies that asymptotic extinction occurs for all initial conditions with probability one when E[log λ_t] < 0. Figure 1 illustrates how the probability of persistence for the mate-limitation model depends on the initial condition and the level of environmental stochasticity. Interestingly, higher levels of environmental stochasticity promote higher probabilities of persistence when initial population densities are low: when the population is below the 'Allee threshold', environmental stochasticity provides opportunities for escaping the extinction vortex.
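The sigmoidal dependence just described is easy to reproduce with a short Monte Carlo sketch of the mate-limitation map X_{t+1} = λ_t X_t²/(h + X_t). The log-normal noise on λ_t, the escape and extinction thresholds, and all parameter values below are illustrative assumptions, not the settings used for Figure 1.

```python
import numpy as np

rng = np.random.default_rng(1)

def persistence_prob(x0, lam=2.0, h=1.0, sigma=0.5, T=200, reps=2000):
    """Estimate P(unbounded growth) for X_{t+1} = lam_t*X_t**2/(h+X_t)."""
    survived = 0
    for _ in range(reps):
        x = x0
        for _ in range(T):
            x = lam * rng.lognormal(0.0, sigma) * x * x / (h + x)
            if x > 1e6:          # escaped upward: count as persisting
                survived += 1
                break
            if x < 1e-12:        # numerically extinct
                break
    return survived / reps

# Deterministic Allee threshold is h/(lam - 1) = 1; start just below it.
for sigma in (0.1, 0.8):
    print(sigma, persistence_prob(x0=0.8, sigma=sigma))
```

Raising sigma should raise the reported frequency, mirroring the observation that noise helps populations starting below the threshold.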
Another common Allee effect occurs in species subject to predation by a generalist predator with a saturating functional response. Within such populations, an individual's risk of predation decreases as the population's density increases. For example, in field studies, Crawley and Long [9] found that per capita rates of acorn loss of Quercus robur L. to invertebrate seed predators were greatest (as high as 90%) amongst low acorn crops and lower (as low as 30%) on large acorn crops. To model Allee effects due to predator-saturation, Schreiber [37] used the following fitness function:
f(x, ξ) = exp(r − P/(h + x)),
where r is the intrinsic rate of growth of the focal population, P is the predation intensity, and h is a half-saturation constant. Stochasticity may be achieved by allowing r_t to be normally distributed and h_t, P_t to be log-normally distributed. Theorem 3.2 implies that unbounded growth occurs for all initial conditions whenever E[r_t − P_t/h_t] > 0, while E[r_t] < 0 implies asymptotic extinction with probability one for all initial conditions. Conditional persistence occurs when both of these inequalities are reversed. Positive- and negative-density dependence For populations exhibiting positive- and negative-density dependence, the fitness function f(x, ξ) can increase or decrease with density. For these general fitness functions, we prove several results about asymptotic extinction and persistence in the next two subsections. Extinction We begin by showing that our assumptions imply asymptotic extinction occurs with positive probability for populations at low densities. Furthermore, we show this asymptotic extinction occurs with probability one for all positive initial conditions whenever the extinction set {0} is 'accessible', i.e. there is always a positive probability of the population density getting arbitrarily small. Theorem 4.1 Assume A1 and A2. Then for any δ > 0, there exists ε > 0 such that P_x[lim_{t→∞} X_t = 0] ≥ 1 − δ whenever x ≤ ε. There are two cases for which one can easily verify accessibility of {0}. First, suppose that f(x, ξ) = g(x)ξ, that (ξ_t)_{t≥0} is a sequence of log-normal or gamma-distributed i.i.d. random variables, and that x → xg(x) is bounded (i.e. there exists M > 0 such that xg(x) ≤ M for all x). Then it follows immediately from the definition of accessibility that {0} is accessible from [0, ∞). Hence, in this case E[log f(0, ξ_t)] < 0 implies unconditional extinction. Since log-normal random variables and gamma random variables can take on any positive value, we view this case as the 'large noise' scenario, i.e. there is a positive probability of the log population size changing by any amount. Alternatively, for sufficiently small noise, there is a set of simple conditions for accessibility of {0}. Let F_0 : R_+ → R_+ denote the unperturbed dynamics about which the stochastic dynamics fluctuate. For any x ∈ R, define x^+ = max{0, x}. A system (1) satisfying the following hypotheses for ε > 0 is an ε-small noise system: H1 the unperturbed map F_0 is continuous and uniformly bounded; H2 |X_{t+1} − F_0(X_t)| ≤ ε almost surely; H3 for all x ∈ R_+ and all Borel sets U ⊂ [(F_0(x) − ε)^+, F_0(x) + ε] with positive Lebesgue measure, there exist α > 0 and γ > 0 such that P[x f(x, ξ_t) ∈ U] ≥ γ Leb(U) whenever Leb(U) ≤ α. The first assumption ensures that the unperturbed dynamics remain uniformly bounded. The second assumption implies that the noise is ε-small, while the third assumption implies the noise is locally absolutely continuous. Proposition 4.2 Suppose the dynamics induced by F_0 have no positive attractor. Then there exists a decreasing function ε_0 : R_+ → R_+ such that, for any M > 0, {0} is accessible from [0, M] whenever (1) is an ε-small noise system with ε ≤ ε_0(M). As a direct consequence of Theorem 4.1 and Proposition 4.2, we have: if (1) is an ε-small noise system for ε ≤ ε_0, the dynamics induced by F_0 have no positive attractor, and assumptions A1-2 hold, then lim_{t→∞} X_t = 0 with probability one for all initial conditions. Persistence When E[log f(0, ξ_1)] > 0 and there is only negative-density dependence, Theorem 3.1 ensured the system is stochastically persistent. Theorem 4.4 below shows that this criterion is also sufficient for models that account for negative- and positive-density dependence. In the case of small noise, the following proposition implies that the existence of a positive attractor for the unperturbed dynamics is sufficient for the existence of a positive invariant set. In particular, conditional persistence is possible when E[log f(0, ξ_1)] < 0. Proposition 4.7 Assume that A ⊂ (0, ∞) is an attractor for the difference equation x_{t+1} = F_0(x_t). Then there exists a bounded positive invariant set K whenever the system (1) satisfies H2 for ε > 0 sufficiently small.
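The hypothesis of Proposition 4.7, a positive attractor for the unperturbed map F_0, can at least be probed numerically. The sketch below iterates a hypothetical unperturbed predator-saturation map from a grid of initial densities and reports whether some orbit settles away from 0; this is a heuristic check under illustrative parameters, not a rigorous attractor test.

```python
import numpy as np

def F0(x, r=2.0, a=1.0, P=3.0, h=2.0):
    """A hypothetical unperturbed map: x*exp(r - a*x - P/(1 + h*x))."""
    return x * np.exp(r - a * x - P / (1.0 + h * x))

def orbit_stays_positive(F, x_grid, burn=500, tol=1e-8):
    """Heuristic: does some orbit remain bounded away from 0 after
    a long transient?  (A numerical probe, not a proof.)"""
    for x0 in x_grid:
        x = x0
        for _ in range(burn):
            x = F(x)
        if x > tol:
            return True
    return False

print(orbit_stays_positive(F0, np.linspace(0.1, 3.0, 30)))
```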
Mate-limitation and predator-saturation with negative-density dependence To illustrate Theorems 4.1 and 4.4 and Propositions 4.2 and 4.7, we apply them to models accounting for negative-density dependence and positive-density dependence via mate-limitation or predator-saturation. The deterministic versions of these models were analysed by Schreiber [37]. To account for negative-density dependence, we use a Ricker-type equation. In the case of the mate-limitation model, the fitness function becomes
f(x, ξ) = (x/(h + x)) exp(r − ax),
where r is the intrinsic rate of growth in the absence of mate-limitation, a measures the strength of intraspecific competition, and h is the half-saturation constant as described in Section 3.2. In the absence of stochastic variation in the parameters r, a, h, the dynamics of persistence and extinction come in three types [37]. To account for environmental stochasticity, we assume, for illustrative purposes, that r_t is uniformly distributed on the interval [r − ε, r + ε] with r > 0 and 0 < ε < r. Furthermore, we assume that a = 1 and h > 0. As E[log f(0, ξ_t)] = −∞, Theorem 4.1 implies that lim_{t→∞} X_t = 0 with positive probability for initial conditions X_0 sufficiently close to 0. When the deterministic dynamics support a positive attractor (i.e. F(F(C)) > M) and the noise is sufficiently small (i.e. ε > 0 sufficiently small), Proposition 4.7 implies that the density X_t for the stochastic model remains in a positive compact interval contained in (M, ∞). Alternatively, if the deterministic dynamics exhibit essential extinction and the noise is sufficiently small, Proposition 4.2 implies lim_{t→∞} X_t = 0 with probability one for all initial densities despite the deterministic dynamics having an infinite number of unstable periodic orbits. Finally, when ε is sufficiently close to r (i.e. the noise is sufficiently large), Theorem 4.1 implies that lim_{t→∞} X_t = 0 with probability one for all positive initial conditions. This latter outcome occurs whether or not the deterministic dynamics support a positive attractor. Each of these outcomes is illustrated in Figure 2. For the predator-saturation model, we use the fitness function
f(x, ξ) = exp(r − ax − P/(1 + hx)),
where h and P are the half-saturation constant and the maximal predation rate, respectively, as described in Section 3.3. The dynamics of persistence and extinction for this model without stochastic variation come in four types [37]. If f(0, ξ) > 1, then there is a positive attractor whose basin contains all positive initial densities. If f(x, ξ) < 1 for all x ≥ 0, then all initial conditions lead to asymptotic extinction. To account for stochasticity, we assume for simplicity that P_t is uniformly distributed on the interval [P(1 − ε), P(1 + ε)] for some P > 0 and 0 < ε < 1. Furthermore, we assume that a = 1, r > 0, and h > 0. When E[log f(0, ξ_t)] = r − P > 0, Theorem 4.4 implies the system is stochastically persistent. Alternatively, when E[log f(0, ξ_t)] = r − P < 0, Theorem 4.1 implies that lim_{t→∞} X_t = 0 with positive probability for initial conditions X_0 sufficiently close to 0. Assume r < P. If the deterministic dynamics support a positive attractor (i.e. F(F(C)) > M) and the noise is sufficiently small (i.e. ε > 0 sufficiently small),
Proposition 4.7 implies that the density X_t for the stochastic model remains in a positive compact interval contained in (M, ∞). Hence, the population exhibits conditional persistence. Alternatively, if the deterministic dynamics exhibit essential extinction and the noise is sufficiently small, Proposition 4.2 implies lim_{t→∞} X_t = 0 with probability one for all initial densities. Finally, when ε is sufficiently close to 1 (i.e. the noise is sufficiently large) and P > r, Theorem 4.1 implies that lim_{t→∞} X_t = 0 with probability one for all positive initial conditions. Each of these outcomes is illustrated in Figure 3. Discussion A demographic Allee effect occurs when individual fitness, at low densities, increases with population density. If individuals on average replace themselves at very low densities, then the population exhibits a weak Allee effect. Alternatively, if there is a critical density below which individuals do not replace themselves and above which they do, then the population exhibits a strong Allee effect. It is frequently argued that environmental stochasticity coupled with a strong Allee effect can increase the likelihood of a population falling below the critical threshold, rendering it particularly vulnerable to extinction [7,43]. While this conclusion is supported, in part, by mathematical and numerical analyses of stochastic differential equation models [12,30,49], these earlier analyses are specific to a modified logistic growth model with Brownian fluctuations in the log population densities. Here, we analysed discrete-time models allowing for general forms of density-dependent feedbacks and randomly fluctuating vital rates. Our analysis demonstrates that environmental stochasticity can convert weak Allee effects to strong Allee effects and that the risk of asymptotic extinction with strong Allee effects depends on the interaction between density-dependent feedbacks and environmental stochasticity. When environmental fluctuations (ξ_t) drive population dynamics (X_t), an Allee effect is best defined in terms of the geometric mean G(x) = exp(E[log f(x, ξ_t)]) of fitness. If the geometric mean G(x) is an increasing function at low densities, an Allee effect occurs. If this geometric mean is greater than one at low densities (G(0) > 1), then we proved that the Allee effect is weak in that the population stochastically persists: the population spends arbitrarily little time at arbitrarily low densities. When the geometric mean is less than one at low densities (G(0) < 1), the stochastic Allee effect is strong: for populations starting at sufficiently low densities, the population density asymptotically approaches zero with positive probability. Since the geometric mean G(0) in general does not equal the intrinsic fitness f(0, E[ξ_t]) at the average environmental condition, environmental stochasticity can, in and of itself, shift weak Allee effects to strong Allee effects and vice versa. For example, a shift from a weak Allee effect to a strong Allee effect can occur when a population's predator has a fluctuating half-saturation constant. Specifically, for the predator-saturation model considered here, the geometric mean at low densities equals G(0) = exp(r − E[P/h_t]), where r is the intrinsic rate of growth of the focal population, P is proportional to the predator density, and h_t is the fluctuating half-saturation constant of the predator. As Jensen's inequality implies that G(0) ≤ exp(r − P/E[h_t]), fluctuations in h_t can decrease the value of G(0) from above one to below one and thereby shift a weak Allee effect to a strong Allee effect.
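The Jensen-inequality effect is immediate to verify numerically. In the sketch below, h_t is drawn log-normally; r, P, and the noise level are illustrative assumptions chosen so that fitness at the average environment exceeds one while the geometric mean falls below one.

```python
import numpy as np

rng = np.random.default_rng(2)

r, P = 1.0, 1.8
h = rng.lognormal(mean=np.log(2.0), sigma=0.6, size=10**6)  # samples of h_t

G0 = np.exp(r - np.mean(P / h))           # geometric mean of fitness at 0
G0_avg_env = np.exp(r - P / np.mean(h))   # fitness at the average h_t

# Jensen: E[P/h] >= P/E[h], so G0 <= G0_avg_env; here G0 < 1 < G0_avg_env,
# i.e. fluctuations alone turn a weak Allee effect into a strong one.
print(G0, G0_avg_env)
```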
In the absence of negative density-dependent feedbacks, we proved that there is a dynamical trichotomy: asymptotic extinction for all initial densities, unbounded population growth for all positive initial conditions, or a strong Allee effect (i.e. G(0) < 1 but G(x) > 1 for sufficiently large x). When a strong Allee effect occurs and environmental fluctuations are large (i.e. the support of log f(x, ξ_t) is the entire real line for all x > 0), populations either go asymptotically to extinction or grow without bound with probability one. Moreover, both outcomes occur with positive probability for all positive initial conditions. Liebhold and Bascompte [33] used models with only positive-density dependence to examine numerically the joint effects of Allee effects, environmental stochasticity, and externally imposed mortality on the probability of successfully exterminating an invasive species. Their fitness function was f(x, ξ_t) = exp(γ(x − C) + ξ_t), where C is the deterministic Allee threshold, γ is the 'intrinsic rate of natural increase', and the ξ_t are normal random variables with mean 0. Since G(0) = exp(−γC) < 1 and lim_{x→∞} G(x) = +∞ for this model, our results imply both extinction and unbounded growth occur with positive probability and thereby provide a rigorous mathematical foundation for Liebhold and Bascompte's [33] numerical analysis. Consistent with our simulations of a stochastic mate-limitation model, Liebhold and Bascompte [33] found that the probability of persistence increases in a sigmoidal fashion with initial population density. In particular, environmental stochasticity increases the probability of persistence for populations initiated at low densities by pushing their densities above the deterministic Allee threshold. Conversely, for populations initiated at higher densities, environmental stochasticity can increase the risk of asymptotic extinction by pushing densities below this threshold. Indeed, we proved that the probability of asymptotic extinction approaches zero as initial population densities get large and approaches one as initial population densities get small. Since populations do not grow without bound, negative density-dependent feedbacks ultimately dominate population growth at higher population densities [23,45,47].
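A minimal re-enactment of that numerical experiment, using the fitness function given above and working in log densities to avoid overflow, displays the sigmoidal rise; the thresholds and parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def escape_prob(x0, gamma=0.5, C=2.0, sigma=1.0, T=100, reps=4000):
    """P(unbounded growth) for X_{t+1} = X_t*exp(gamma*(X_t - C) + xi_t)."""
    wins = 0
    for _ in range(reps):
        lx = np.log(x0)                    # work with log X_t
        for _ in range(T):
            lx += gamma * (np.exp(lx) - C) + rng.normal(0.0, sigma)
            if lx > 20:                    # effectively unbounded growth
                wins += 1
                break
            if lx < -30:                   # effectively extinct
                break
    return wins / reps

for x0 in (0.5, 1.0, 2.0, 3.0, 4.0):      # threshold C = 2.0
    print(x0, escape_prob(x0))
```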
While stochastic persistence never occurs with a strong Allee effect, extinction need not occur with probability one. Whether or not extinction occurs for all positive initial densities with probability one depends on a delicate interplay between the nonlinearities of the model and the form of environmental stochasticity. A sufficient condition for unconditional extinction (i.e. extinction with probability one for all initial conditions) is that the extinction set {0} is 'attainable' from every population density state. Attainability roughly means that the population densities become arbitrarily small at some point in time with probability one. For populations whose densities remain bounded from above, we proved a dichotomy: either there exists a positive invariant set for the process, or {0} is attainable, in which case there is unconditional extinction. Whether this dichotomy extends to unbounded population state spaces remains an open problem. When environmental stochasticity is weak and there is a strong Allee effect, the 'unperturbed' population dynamics determine whether extinction occurs for all initial conditions or not. By 'weak' we mean that the unperturbed dynamics F are subject to small, compactly supported random perturbations (i.e. x_{t+1} − F(x_t) lies in an interval [−ε, ε] for ε > 0 small). The existence of a positive attractor is necessary for conditional persistence in the face of weak environmental stochasticity. This result confirms the consensus in the mathematical biology community that the existence of a positive attractor ensures that population trajectories can remain bounded away from extinction in the presence of small perturbations [38]. For populations exhibiting a strong Allee effect and conditional persistence at low levels of environmental stochasticity, there is always a critical level of environmental stochasticity above which asymptotic extinction occurs with probability one for all initial population densities. Mathematically, there is a transition from the extinction set {0} being inaccessible for part of the population state space at low levels of environmental stochasticity to {0} being accessible for the entire population state space at higher levels of environmental stochasticity. We have illustrated this transition in stochastic models of mate-limitation and predator-saturation with negative-density dependence. Surprisingly, for the predator-saturation models, our numerical results show that environmental stochasticity can lead to asymptotic extinction at intermediate predation rates despite conditional persistence occurring at higher and lower predation rates. This effect is most likely due to the opposing effects of predation on overcompensatory feedbacks and on the Allee threshold, resulting in a larger basin of attraction for the extinction state at intermediate predation rates. While our analysis provides some initial insights into the interactive effects of Allee effects and environmental stochasticity on asymptotic extinction risk, many challenges remain. Many populations exhibit spatial, ontogenetic, social, or genetic structure. Proving multivariate analogues of the results proven here could provide insights into how population structure interacts with the effects considered here to determine population persistence or extinction. Furthermore, all populations consist of a finite number of individuals whose fates are partially uncorrelated. Hence, they experience demographic as well as environmental stochasticity [1]. In accounting for bounded, finite population sizes in stochastic models, extinction in finite time is inevitable. However, these models often exhibit metastable behaviour in which the populations persist for long periods of time despite both forms of stochasticity and Allee effects. This metastable behaviour often is associated with quasi-stationary distributions of the finite-state models. Studying to what extent these distributions have well-defined limits in an 'infinite-population size' limit is likely to provide insights into these metastable behaviours [17] and to provide a more rigorous framework to evaluate the joint effects of stochasticity and Allee effects on population persistence, and ultimately their consequences for conservation and management. We consider the trajectory space formed by the product Ω = R_+^N equipped with the product σ-algebra B^N.
For any x ∈ R_+ (viewed as an initial condition of the trajectory), there exists a probability measure P_x on Ω satisfying the usual finite-dimensional Markov relations for any Borel sets A_0, . . . , A_k ⊂ R_+, and P_x[{ω ∈ Ω : ω_0 = x}] = 1. The random variables X_t are the projection maps X_t(ω) = ω_t. For the proofs of Theorems 4.1 and 3.2, we consider the space E^N of the environmental trajectories equipped with the product σ-algebra E^N, and the probability measure Q on E^N given by the product of the common law m of the ξ_t, i.e. Q[e : e_0 ∈ E_0, . . . , e_k ∈ E_k] = m(E_0) · · · m(E_k) for any Borel sets E_0, . . . , E_k ⊂ E. From now on, when we write e ∈ E^N, we mean e = (e_t)_{t≥0}. Since E is a Polish space (i.e. a separable completely metrizable topological space), the space E^N endowed with the product topology is Polish as well. Therefore, by the Kolmogorov consistency theorem, the probability measure Q is well defined. In this setting, the random variable ξ_t is the projection map ξ_t(e) = e_t. We use the common notation E (resp. E_x) for the expectation with respect to the probability measure Q (resp. P_x). Let x ∈ R_+ and let Ω_x = {ω ∈ Ω : ω_0 = x} be the cylinder of the trajectories starting at x. The continuous function ϕ : E^N → Ω_x is defined component-wise by ϕ(e)_0 = x and ϕ(e)_{t+1} = ϕ(e)_t f(ϕ(e)_t, e_t). By the strong law of large numbers, (1/t) Σ_{s=0}^{t−1} log f(0, e_s) + (1/t) log x converges to E[log f(0, ξ_1)] = α > 0 for Q-almost all e ∈ E^N. Therefore the asserted convergence holds, which completes the proof of the first assertion. To prove the last part, assume that {0, ∞} is accessible from R_+ and fix δ > 0. By Equations (A3) and (A4), there exist m, M > 0 such that the bounds required in the remainder of the argument hold. A.4. Proof of Proposition 4.2 The proof consists of combining two deterministic arguments with a probabilistic argument. All three use the concept of an (ε, T)-chain introduced by Conley [6]. An (ε, T)-chain from x to y in R_+, for a mapping F_0 : R_+ → R_+, is a sequence of points x_0 = x, x_1, . . . , x_{T−1} = y in R_+ such that |x_{i+1} − F_0(x_i)| < ε for any i = 0, . . . , T − 2. We say x chains to y if for any ε > 0 and T ≥ 2 there exists an (ε, T)-chain from x to y. The following propositions are the deterministic ingredients of the proof and are proved in [38]. Proposition A.2 Let A be an attractor with basin of attraction B(A) and let V ⊂ U be neighbourhoods of A such that the closure Ū of U is compact and contained in B(A). Then there exist T ≥ 0 and δ > 0 such that every δ-chain of length t ≥ T starting in U ends in V. Proposition A.3 If F_0 : R_+ → R_+ satisfies H1 and has no positive attractor, then for all x ∈ R_+, ε > 0 and T > 0 there exists an ε-chain from x to 0 of length at least T. The probabilistic ingredient is an adaptation of Proposition 3 in [39] to our framework.
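Since the (ε, T)-chain condition is purely finitary, it can be checked mechanically. The toy sketch below verifies a chain for an arbitrary contracting map standing in for F_0; both the map and the chain are illustrative.

```python
def is_eps_chain(points, F0, eps):
    """True if points x_0,...,x_{T-1} form an (eps, T)-chain for F0,
    i.e. |x_{i+1} - F0(x_i)| < eps for i = 0,...,T-2."""
    return all(abs(points[i + 1] - F0(points[i])) < eps
               for i in range(len(points) - 1))

# A map with no positive attractor (every orbit decays to 0), so for
# every eps > 0 there are (eps, T)-chains from any x down to 0.
F0 = lambda x: 0.5 * x
chain = [1.0, 0.5, 0.25, 0.12, 0.05, 0.0]
print(is_eps_chain(chain, F0, eps=0.03))   # True
```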
On the Uniqueness Problem for Notations of Recursive Ordinals In the article 'Ordinal Logics and the Characterizations of the Informal Concept of Proof', Georg Kreisel poses the problem of assigning unique notations to recursive ordinals, and additionally suggests that the methods which are developed for its solution will be non-constructive in character. In this paper we develop methods by which various uniqueness results for notations of recursive ordinals can be obtained, and thereafter apply these results to investigate the problems surrounding the hierarchical classification of the computable functions. Introduction In [15, pg. 292] Kreisel addresses the problem of assigning unique notations to recursive ordinals, and suggests that the method applied to assigning these notations will be non-constructive. However, the main difficulties which appear in many attempts to resolve this problem are deeply related to various non-uniqueness results for hierarchies of computable functions indexed by recursive ordinals. Moreover, it has been pointed out by Feferman [6] that one of the higher goals of obtaining a satisfactory hierarchy of computable functions is to elucidate how to canonically classify any arbitrarily defined decision procedure with respect to some fixed level in the hierarchy, a hierarchy which we intuitively believe to be linearly ordered and everywhere defined.1 Another hope in achieving a unique hierarchy, as described in a remark by Wainer [1, pg. 150] on classes of "verifiably" provable functions, is that if such a hierarchy were indexed by unique notations for all recursive well-orderings, then the arithmetical information described in the structure of the hierarchy would shed light on the question of what it means to possess a "natural" well-ordering of ω, especially when the focus of its uniqueness is placed on the constructive information contained in classifying the provably recursive functions of a formal theory. Perhaps more subtly, a deeper conceptual problem arises when attempting to reach a purely hierarchical understanding of how assigning notations to recursive ordinals can be used to constructively generate a class of computable functions which is closed under relative computability. Naturally, this conceptual problem is explicitly encountered when attempting to distinguish the relative complexity of any pair of distinct, arbitrarily defined computable functions with respect to the decision problem of whether a recursive relation defines a well-ordering. Thus, when properly taken in the context of the apparent absoluteness2 of the concept of mechanical procedure, these hopes illustrate the fundamental gaps which are encountered in any attempt to clarify questions regarding the concept of mechanical procedure and its formalization in any hierarchical manner. However, before supplying further details, a brief overview of the intuitions involved in the direction of obtaining unique notations will be given.
1 Cf. Kanamori [11, pg. 256]
2 Gödel [8, pg. 151] claims the "absoluteness" of the notion of computable function by citing the fact that it is invariant under adjoining higher types to any formal theory containing arithmetic with respect to diagonalization.
It is a well-known difficulty, in view of the negative results of various authors,3 that many hierarchies of computable functions indexed by recursive ordinals "collapse" at the first limit ordinal ω, so that no unique arithmetical information concerning the structure of the hierarchy can be measured beyond ω. Further, by defining a one-one map from notations into the recursive ordinals via the system O of ordinal notations developed by Kleene [13,14], a canonical difficulty can be immediately singled out. In particular, the arithmetically definable relation <_e which stands between indices of recursive ordinals cannot be reduced to the relation <_O which stands between the notations for these recursive ordinals. That is, if |α|, |β| ∈ O and α, β are recursive ordinals, then
(1.1) α <_e β =⇒ |α| <_O |β|,
but the converse does not necessarily hold, owing to the fact that <_O is a partially ordered Π^1_1-relation. Therefore, it is in this sense that these hierarchies "collapse": any recursive limit ordinal ≥ ω can receive up to 2^ω notations in O, and thus one cannot give a non-trivial classification of the computable functions beyond the ωth level by appealing to the order-type of the length of their termination proofs. Essentially, this failure of classification stalls any attempt at providing a unique, constructive meaning to the closure of the class of computable functions under diagonalization. From these facts, the problem of assigning unique notations to recursive ordinals can be recast as the problem of constructing an order-preserving relation which holds between distinct notations and does not suffer from the definability issues sketched above. By studying the properties of O, it becomes clear that any proposed system of notations that is constructed to overcome these issues cannot resemble the Π^1_1-complete structure of O in any outward way. Consequently, an immediate obstacle to defining an order-preserving relation which holds for all distinct notations is that one must jointly succeed in constructing a system O* of notations such that α <_e β ⇐⇒ |α| <_{O*} |β|, where O* is arithmetically definable and all properties which we intuitively believe to hold for a natural hierarchy of computable functions (such as linearity and being everywhere defined) can be formally characterized within the structure of O*.
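To make the source of the non-uniqueness concrete, the toy Python fragment below implements only the finite part of a Kleene-style notation system: 1 denotes 0 and 2^a denotes the successor of the ordinal denoted by a. Limit notations of the form 3 · 5^e, which require an index e for a computable fundamental sequence and are precisely where distinct notations for the same ordinal proliferate, are deliberately omitted; nothing here captures the Π^1_1 structure of the full system O.

```python
def denotes(a):
    """Finite ordinal denoted by notation a, or None if a is not
    a notation of this fragment (1 denotes 0; 2**b denotes |b| + 1)."""
    if a == 1:
        return 0
    b, e = a, 0
    while b % 2 == 0:        # extract a = 2**e
        b //= 2
        e += 1
    if b != 1:
        return None          # not a power of two, hence not a notation
    prev = denotes(e)        # the exponent must itself be a notation
    return None if prev is None else prev + 1

def notation(n):
    """The unique notation for the finite ordinal n in this fragment."""
    a = 1
    for _ in range(n):
        a = 2 ** a
    return a

print([notation(n) for n in range(4)])   # [1, 2, 4, 16]
print(denotes(65536))                    # 4
print(denotes(6))                        # None
```

Below ω each ordinal has exactly one notation in this fragment; uniqueness first fails at ω, where every choice of an index for a fundamental sequence yields a different notation, which is the collapse phenomenon described above.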
What is a Natural Well-Ordering? To begin to discuss the question of what it means to possess a "natural" or "canonical" well-ordering of the integers, it becomes expedient to survey the conceptual issues at hand which are directly encountered in the attempt to make these matters more tractable. For Turing [25], it is left as a matter of intuition to verify, on the basis of mechanical inferences, whether an arbitrary recursive relation defines a well-ordering. However, independently of the difficulties involved in carrying out an effective verification (i.e., independently of one's intuition), it is important to see that the constructive problems of supplying an effective verification find their origin within the lacuna of distinguishing between extensional and intensional measures of ordinal complexity for computable functions.
In particular, the standard definition of a recursive ordinal is given within a purely constructive context; that is, by effectively specifying a mechanical procedure which decides, in a computable number of steps, whether the recursive relation in question is a well-ordering, since any effectively enumerable set is computable if and only if its characteristic function is. Curiously, however, almost all constructive issues of this type appear not to depend on the property that any given recursive ordinal has a canonical representation. More concisely, the intensional nature of verifying that a recursive well-ordering is well-founded and the arithmetic statement4 of the terminating procedures used to specify them seem to have no external relationship to one another. As a result, to resolve the conceptual disagreement between these extensional and intensional concerns, one may attempt to separate out those effectively specified procedures which decide the totality of a recursive relation from those which enumerate the procedures according to their ordinal complexity. Consequently, we place our concern not on the extensional nature of these procedures, but rather on their purely intensional aspects. As a starting point, if we wish to clarify the conceptual issues involved, we intend to achieve some canonical description of the ordinal complexity of the effectively specifiable procedures involved in our enumeration, with the hope that such a description will mirror the complexity of verifying the totality of an arbitrarily given recursive relation. Unfortunately, an immediate stumbling-block in this direction is that the usual method of measuring the complexity of a computable function by means of constructively defined ordinals does not extend far enough to encompass a canonical classification of the class of everywhere defined decision procedures. As a consequence, what one requires of a canonical description of the ordinal complexity of these procedures is that the description should reflect the complexity of verification in the "largeness" of the order-type of the well-ordering that would be defined. Thus the question to be resolved can be stated as follows: If one were able to obtain a canonical description of the complexity of any effectively specifiable procedure, then is there a method of associating this description with the recursive ordinals in a way that naturally reflects the order-type of the well-orderings that are to be defined? In essence, since the constructive concern of verifying the totality of a recursive relation appears to be independent of the requirement of having a "natural" representation of the ordinal, can one attempt to clarify the issue of obtaining canonical notations in a way that is intensionally related to the ordinal complexity of verifying whether the decision procedure in question is everywhere defined? On closer inspection, it seems that the lack of agreement between our constructive concerns and the intensional ambiguity of our analysis of the ordinal complexity of an arbitrarily defined decision procedure leads to a positive direction in which these questions can be resolved. In particular, because one is capable of excluding the constructive need for an effective verification from the want of an optimal measure for the ordinal complexity of a computable function, one can appeal to a certain non-constructive intuition that is implicit in Turing's analysis of effective calculability.5
4 That is, the Π^0_2-condition which expresses that the procedure is everywhere defined does not depend on the Π^1_1-condition that the computation tree of the procedure is well-founded. Thus, this well-foundedness condition is not arithmetically definable even if one allows arbitrarily long recursions of length ≥ ω to define the procedure in question, implying that the problem of determining whether the computation tree is well-founded is not equivalent to having a witness to the statement that the procedure is everywhere defined at or before ω. Cf. [22].
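The two conditions contrasted in footnote 4 can be displayed side by side in their standard normal forms, with T the Kleene T-predicate (recalled in Section 3) and ≺_e the recursive relation in question:

```latex
\[
  e \in \mathrm{TOT} \;\iff\; \forall x\,\exists t\; T(e,x,t)
  \qquad (\Pi^0_2),
\]
\[
  \mathrm{WF}(\prec_e) \;\iff\; \forall f\colon\omega\to\omega\;\,
  \exists n\;\neg\bigl(f(n+1)\prec_e f(n)\bigr)
  \qquad (\Pi^1_1).
\]
```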
That is, this non-constructive intuition is simply the belief that, independently of constructively verifying the totality of a recursive relation, there is plausibility in the existence of a hierarchy of effectively specifiable mental procedures that are actualized in our experience of carrying out computations of varying degrees of difficulty. More directly, we have the evidence that, prior to judging the constructiveness of verifying that a (possibly non-terminating) procedure is total for deciding whether an arbitrary recursive relation defines a well-ordering, we are able to intuit that any method of "measuring" the complexity of this effective verification would be highly non-constructive, especially if one chooses to depend on the intuition that this hierarchy of decision procedures is everywhere defined. Consequently, we see that the origin of this hierarchical intuition can be interpreted in a purely non-constructive manner, and in this light, we are free to resolve any descriptive problem of analyzing the ordinal complexity of an arbitrary computable function without any dependence on the possible constructive nature of our evidence that it is everywhere defined. However, one may demand that if such an intuition is to suit our difficulties, then one must characterize it in a manner that not only leads to a canonical measure of the ordinal complexity for arbitrary sequences of effectively specifiable procedures, but is also capable of clarifying the logical definition of these procedures with respect to a certain formal theory. Thus one may recognize that, in order to develop this intuition for its possible use in a formal system, it is worthwhile to first cultivate its meaning in a way that is independent of a specified system of axioms. In this light, one may come to understand the development of this intuition not as a way of mechanically producing more evident theorems, but as a way of supplementing the concept of mechanical procedure and enriching the comprehensiveness of our non-constructive methods. On these few points it would seem that, to all appearances, the conscious application of such an intuition would inevitably constitute an appeal to evidence of a different and more decisive kind. Applications of Diagonally Non-Computable Majorizing Functions We shall rely heavily on the insightful texts of Jockusch and Soare [9] on diagonally non-recursive (DNR) functions and Sacks [23] on the theory of constructive ordinals and their applications. Let σ ∈ 2^{<ω} denote a primitive recursive sequence number (under a Gödel numbering) and refer to 2^{<ω} as the set of all finite binary strings. Additionally, let TOT := {e : φ_e is total}. According to [10], we say that a function F ∈ DNR_2 is a {0, 1}-valued diagonally non-computable function if and only if for all e ∈ Dom(φ), we have F(e) ≠ φ_e(e) whenever φ_e(e) ↓, where φ_e(e) is the value of the eth partial computable function on its eth input. Furthermore, it will be common practice throughout to refer to φ_e(e) as the diagonal function. Additionally, we will use the notation f ≃ g to stand between two computable functions f, g which mutually depend on each other's definability. Finally, it will be important to note that, when provided with a suitable Gödel numbering, we are able to represent DNR_2 as an infinite Π^0_1-class.
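The defining disagreement with the diagonal can be illustrated on a toy finite table; the table of diagonal values below is hypothetical (None standing for divergence), and of course no such table is computable in general.

```python
# Hypothetical finite table of diagonal values phi_e(e); None = divergent.
diagonal = {0: 1, 1: 0, 2: None, 3: 1}

def F(e):
    """A {0,1}-valued function with F(e) != phi_e(e) whenever phi_e(e)
    converges -- the defining property of a member of DNR_2."""
    v = diagonal.get(e)
    if v is None:                        # phi_e(e) diverges: any value works
        return 0
    return 1 - v if v in (0, 1) else 0   # always differs from v

print([F(e) for e in range(4)])   # [0, 1, 0, 0]
```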
Continuing in this direction, we aim to motivate this section by exploring some conceptual ideas related to diagonal functions and the class of arbitrary partial computable functions. As described by Feferman [6], there is a fundamental recursion-theoretic problem of finding a canonical classification for the non-constructively defined class of partial computable functions, and thereafter the concept of a majorizing function is introduced to supplement this idea, for which we have the following definition.
Definition 3.1. Let f, g : ω → ω. We say that f majorizes g, written g ≪ f, if g(n) ≤ f(n) for all but finitely many n ∈ ω.
A direction in which a solution of this classification problem can be attained begins by analyzing how one might determine "from above" whether every computable function ψ which maps ω into ω and defines an arbitrary well-ordering of ω is everywhere defined. Essentially, we intend to determine if ψ is everywhere defined by defining an increasing enumeration of a computable sequence {φ_ei}_{i∈ω} which "globally" reflects whether ψ is total computable, and we aim to use the global information obtained by executing this enumeration to decide if any member contained in the sequence defines an infinite descending sequence on ω for all minimal elements.6 Informally, we carry out this global enumeration by constructing a non-computable function F which majorizes all members defined in {φ_ei}_{i∈ω} under some arbitrary indexing of the total computable functions. That is, we construct F in a manner such that all computable functions defined on the entire domain of a binary computable relation can be uniformly determined "from above" to be everywhere defined with respect to a simultaneous enumeration of a particular class of functions defined on all well-founded initial segments. By using this enumeration to diagonalize out of this class of functions, one defines a certain "almost everywhere" diagonalization against all total functions if F disagrees with the partial map e −→ φ_e(e) ↓ up to a constant for all e ∈ ω, granted that the numbers in the domain of F act as instances of fixed-points for the program which computes the well-ordering in a well-defined sense. In what follows, we aim to characterize the idea of determining that an arbitrary computable sequence is everywhere defined by developing the notion of a "diagonally non-computable" majorizing function for a class of everywhere defined decision procedures. We begin with a brief overview of the elementary properties of {0, 1}-valued computable functions and the definition of a computation tree. Define the Kleene T-predicate T_n(e, x, t) such that, for every n ∈ ω, the relation T_n is used to define the triple (e, x, t), with e the index of a computable function φ_e(x), x = (x_1, . . . , x_n) a sequence of inputs, and t the Gödel number which codes the finite sequence σ_0, σ_1, . . . , σ_n of configurations of the computation yielding φ_e(x) = y by recursion on e.
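The intended behaviour of T and U can be mimicked by a toy step-bounded interpreter. Here programs are modelled as Python generators whose yields play the role of the configurations σ_0, σ_1, . . .; this is only an analogy for the official coding of machine configurations by the number t.

```python
def run(prog, x, t):
    """Return (halted, trace, value): halted is the analogue of T(prog, x, t),
    the recorded trace plays the role of t's coded configuration sequence,
    and value is what U extracts from a halting computation."""
    g, trace = prog(x), []
    try:
        for _ in range(t):
            trace.append(next(g))        # one configuration per step
    except StopIteration as stop:
        return True, trace, stop.value
    return False, trace, None

def double(x):                           # an "index" in this toy model
    acc = 0
    for _ in range(x):
        acc += 2
        yield acc
    return acc

T = lambda prog, x, t: run(prog, x, t)[0]
U = lambda prog, x, t: run(prog, x, t)[2]

print(T(double, 3, 10), U(double, 3, 10))   # True 6
print(T(double, 3, 2))                      # False: no halt within 2 steps
```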
Again, let t denote the computation tree of some computable function φ_e(x). Thus, if φ_e(x) is total, then it is total for some e ∈ ω by performing a recursion on the index such that φ_e(x) = y, where we let φ_e(x) ↓ mean that ∃y R_e(x, y) holds as a total recursive relation. By definition, e ∈ TOT ⇐⇒ ∀x∃y such that φ_e(x) = y ⇐⇒ ∀x∃t∃y such that T(e, x, t) holds and U(e, x, t) computes φ_e(x) for all x ∈ ω for which φ_e(x) ↓.
Theorem 3.2. (Normann [18]) For every n ∈ ω, we have:
(1) T_n is primitive recursive.
(2) There exists a primitive recursive function U such that, if t ∈ ω is the computation tree of φ_e(x), then U(t) returns the output of the corresponding computation (i.e., the terminal node of the halting computation yielding U(e, x, t) = y).
Note that the proof of clause (1) requires that, if t is the Gödel number which codes the finite sequence of total configuration states in the computation of φ_e(x), then the monotonicity of the Gödel numbering of t is given by a partial computable function which is uniformly computable in the index which enumerates φ_e(x) for all instances. This requirement helps one obtain the characteristic function of T_n via recursion on e as previously stated. Now, we are given the following theorem.
Theorem 3.3. (Kleene's Recursion Theorem) For every total computable function ψ : ω → ω, there exists an index e ∈ ω such that φ_e = φ_{ψ(e)}.
We will choose to keep the function ψ(e) an arbitrarily defined total computable function so as to formulate the following lemma, in which φ(e) denotes a partial function defined from the diagonal φ_e(e). Thus φ(e) is partial computable if φ_e(e) is defined. Assume φ(e) is computable. Since ψ(e) is total, there exists an index e ∈ ω such that ψ(e) = φ_e(e) ↓ holds. By definition, we see that the diagonal function φ_e(e) cannot be computable by values for all e ∈ Dom(φ) if we have φ(e) ↓ = φ_e(e). Therefore, we obtain the inequality φ(e) ↓ ≠ φ_e(e) ↓ = ψ(e), for if otherwise, then we may demonstrate that φ(e) is total (that is, it would define an enumeration of TOT for all e ∈ ω such that ψ(e) = φ_e(e)), which contradicts our assumption that φ(e) is computable. It will be important to note that the proof of Lemma 3.4 introduces the existence of a partial function φ(e) which, in a special sense, defines a diagonalization against the total computable functions for some input e ∈ ω on which a given total function ψ(e) depends.
Definition 3.5. Let A, B ⊆ ω be disjoint sets. We say that A, B are computably separable if there is some computable set S such that A ⊆ S and B ∩ S = ∅. If S is non-computable, we say that A, B are computably inseparable and S is a non-separating set for the pair (A, B).
Proposition 3.6. There exists a pair (A, B) ⊆ ω of computably enumerable, computably inseparable sets.
We are now in the position to formulate the concept of a diagonally non-computable majorizing function for an arbitrary class E of total computable functions.
Definition 3.7. Suppose ψ is an arbitrary computable function belonging to a class E of functions defined on all well-founded initial segments of Dom(R_e). Let {φ_ei}_{i∈ω} denote an increasing computable sequence on ω. We write ψ ≪_∞ F and say "F diagonally majorizes ψ" if there is some k ∈ ω for all e ∈ Dom(φ) such that:
(1) F(e) < min |t|, counted as the number k ≥ 2 of prime factors of each i ∈ ω listed in the computable sequence {φ_ei}_{i∈ω}, whenever e + 1 ≤ k holds.
(2) F computes an index e > 0 which separates Dom(φ) into a pair of finite, computably inseparable subsets of ω which are uniformly computable in φ_ei(e_i) ↓.
Again, we recall that min |t| is used to denote the minimum length of the Gödel number t ∈ ω which codes the finite sequence σ_0, σ_1, . . . , σ_n
of configuration states for our arbitrary computable function ψ when simulated by some universal Turing machine U, where t = 2^n · Π_{i=0}^{n−1} p_{i+1}^{σ_i}, the number p_i is the ith prime, and 0 ≤ i < n [18, pp. 61]. In what follows, we will choose to express any function belonging to our monotone enumeration of E as a uniformly computably enumerable set S ⊆ ω of (possibly unordered) pairs (e, i) ∈ 2^{<ω} × ω. Furthermore, we intend to make the convention that, for any i ∈ ω, the computable functions φ_ei : ω −→ Dom(R_e) are defined on all well-founded initial segments of R_e(x, y) if the characteristic function for this arbitrary well-ordering is indeed computable. By adopting this convention, one is able to think of the sequence {φ_ei}_{i∈ω} as a class of total functions which is everywhere defined with respect to the well-foundedness of the computable relation in question. However, in order to explicate this notion of an arbitrary computable function ψ which is everywhere defined with respect to a well-founded total computable predicate, we rely on the following theorem.
Theorem 3.8. ([19]) Let R_e(x, y) be a well-founded, total computable predicate. Take ψ(e) ∈ TOT, and suppose that for every i ∈ ω and for every x ∈ Dom(R_e) the conditions of Proposition 3.9 below hold. Then we have that there exists e ∈ ω such that φ_e = φ_{ψ(e)} and φ_{ψ(e)} is defined on all of Dom(R_e).
Proof. We follow the proof given in [19], by appealing to the following proposition.
Proposition 3.9. (Sacks [24]) Suppose R_e(x, y) is a well-founded, total computable predicate. If ψ(e) is a total function and e ∈ ω:
(1) φ_e(y) ≃ φ_{ψ(e)}(x) for all y < x ∈ Dom(R_e).
(2) φ_{ψ(e)} is defined on every minimal element in the domain of R_e(x, y).
Then we have that there is an index e_i ∈ ω such that φ_ei = φ_{ψ(ei)} and φ_ei is defined on all well-founded initial segments of Dom(R_e).
Suppose the function φ_{ψ(e)} were not defined on all of Dom(R_e); then it would be undefined at some minimal element y ∈ Dom(R_e). Apply Theorem 3.3 to obtain φ_ei ≃ φ_e. Then induction on e_i for the minimally defined function φ_ei(y) shows that φ_e(x) ↓, and thus φ_{ψ(e)} is defined on all of the domain. Now, provided with a recursive equation of the form
(3.10) ∀n ∈ ω, ψ_e(n) = Φ(ψ ↾ n),
we will refer to the index e ∈ ω as a recursive extension [23]. That is, we can acquire the index of a function φ_{ψ(e)}(x) which is defined on all of Dom(R_e) as an extension from φ_e(x) through another index e_i ∈ ω of a function φ_ei(y) which is defined on all well-founded initial segments of Dom(R_e). Additionally, we see that φ_{ψ(e)}(x) depends on φ_ei(y) to be defined with respect to the well-foundedness of the domain in question. For example, let ψ ↾ n denote ψ's restriction to the set {y : y < x} if we let x = n so that, given x, y ∈ Dom(R_e), the relation y <_e x defines an arbitrary well-ordering of ω when ψ_e(n) ≺ ψ_e(n) + 1 for all e, n ∈ ω and ≺ is linear. Now, if some e ∈ ω serves as an index for the effective composition Φ(ψ ↾ n), then by Theorem 3.3 we know that there exists a fixed-point e ∈ ω for such an index. Consequently, we define the map e −→ ψ_e(n) ↓ as the unique solution for each Φ, n in the recursive equation for an arbitrary well-ordering of ω. With these ideas in mind, if we can further suppose that Φ acts as an index for ψ ↾ n and ψ_e(n) is its fixed-point for each n ∈ ω, then we say that an arbitrary computable function ψ is defined by effective transfinite recursion if it constitutes a unique solution of (3.10).
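The recursion (3.10) can be sketched concretely for the ordinary well-ordering of ω. The operator Φ below is an illustrative effective operator; well-foundedness of the ordering is what guarantees that the recursion bottoms out.

```python
from functools import lru_cache

def Phi(restriction):
    """An illustrative effective operator: one more than the maximum
    of the earlier values psi(m), m < n."""
    return 1 + max(restriction.values(), default=0)

@lru_cache(maxsize=None)
def psi(n):
    """psi(n) = Phi(psi restricted to predecessors of n), as in (3.10)."""
    restriction = {m: psi(m) for m in range(n)}
    return Phi(restriction)

print([psi(n) for n in range(6)])   # [1, 2, 3, 4, 5, 6]
```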
there exists a function F ∈ DN R k such that F diagonally majorizes ψ in the sense of Definition 3.7, and F is unique up to degree-isomorphism with any member belonging to DN R 2 . Remark 3.12. Before we demonstrate the lemma, we shall briefly expound on the ideas involved in its formulation. As we have seen from Proposition 3.9, if we are given any effectively enumerable class of functions, there exists a fixed-point e i ∈ ω which can be applied to obtain a unique class of partial functions which are defined on all well-founded initial segments of the domain. Moreover, because it is not necessary that we restrict our attention to "standard" well-orderings of ω via the relation R e (x, y), we will make it a convention to allow R e (x, y) to define an arbitrary well-ordering of ω by setting n = x for all x > y belonging to the field of R e (x, y) if ψ e (n) is defined. Within this perspective, we aim to define a of the class of all functions defined by effective transfinite recursion with respect to the fact that, given any arbitrary class E of computable functions, it is possible to effectively enumerate E and obtain a computable function which majorizes every member belonging to E in the sense of Definition 3.1. In this way, we intuitively think of any unique, diagonally majorizing function as a "global", non-computable majorization of any computably bounded class of functions defined on all wellfounded initial segments of R e (x, y). That is, our goal is to construct a computable function ψ which is diagonally majorized by some unique F ∈ DN R k and defined on all Dom(R e ) in the sense of Theorem 3.8. To this end, we follow a construction of F which is analogous to the construction of a partial function which is distinct from all total functions under an arbitrary indexing for a finite, non-empty set of arguments as accomplished in Lemma 3.4. Define DN R := {F ∈ 2 ω : ∀e ∈ ω, F (e) = φ e (e) ↓= ψ(e)} be a non-empty class of total functions, and suppose we are given some F ∈ DN R such that F (e) < min |t| whenever min |t| = the number k ≥ 2 of prime factors of each i ∈ ω. Thus, for arbitrarily large i, the function F (e) is a k-bounded DN R function when the number of factors of i ∈ ω are counted with multiplicity. From the lemma, we let E denote the class of one-one computable functions ψ : ω −→ ω where R e (x, y) is a well-founded, total computable predicate. We say that E is computably bounded if there is a computable function φ e = φ ψ(e) which majorizes every member of E, and E satisfies the following conditions: (i) There is a y ∈ Dom(R e ) for every x ∈ ω such that, if y < e x induces an arbitrary well-ordering of ω, then there is some e ∈ ω where φ ei (y) ↓ =⇒ φ ψ(e) (x) ↓. (ii) For any i ∈ ω occuring in our enumeration, there is a computable sequence {φ ei } i∈ω which is uniformly computably enumerable in φ ei (e i ) ↓. Assume that E is computably bounded so that, for every x > y belonging to the field of R e (x, y), we have that φ ei (x) ≪ φ ψ(e) (x) from Definition 3.1. If we identify any computable set with the characteristic function defined on its elements, then we may set φ ψ(e) (x) ≃ ψ e (n) if we have n = x for any x > y, and so φ ei (n) ≪ ψ e (n). Fix some i ∈ ω such that 0 ≤ i ≤ n. From here on, we view the number i as an index which codes the computable function ψ that defines a computable linear-ordering ≺ of ω uniformly in some e ∈ ω such that y < e x = n implies that ψ e (n) ≺ ψ e (n) + 1 for every n ∈ ω. 
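Before the construction begins, a toy illustration may help fix intuitions about k-bounded diagonal avoidance. A genuine DNR_k function is of course non-computable, so the Python sketch below cheats: it is handed a finite table standing in for the converging diagonal values φ_e(e) (an invention of ours, purely for illustration) and returns a function F with range contained in {0, …, k−1} that differs from φ_e(e) wherever the table converges — the finite shadow of the requirement in Definition 3.7 and of the diagonalization in Lemma 3.4.

```python
# Toy sketch only: a real DNR_k function cannot be computed. We pretend we are
# handed a finite table diag[e] = phi_e(e) for those e on which the diagonal
# converges (None marks divergence), and build F with values < k such that
# F(e) != phi_e(e) whenever phi_e(e) converges.

from typing import Dict, Optional

def avoid_diagonal(diag: Dict[int, Optional[int]], k: int = 2) -> Dict[int, int]:
    if k < 2:
        raise ValueError("need at least two values to avoid the diagonal")
    F = {}
    for e, value in diag.items():
        if value is None:            # phi_e(e) diverges: any value below k will do
            F[e] = 0
        else:                        # least value below k differing from phi_e(e)
            F[e] = next(v for v in range(k) if v != value)
    return F

if __name__ == "__main__":
    # invented table: phi_0(0)=0, phi_1(1) diverges, phi_2(2)=1, phi_3(3)=5
    table = {0: 0, 1: None, 2: 1, 3: 5}
    F = avoid_diagonal(table, k=2)
    assert all(F[e] != v for e, v in table.items() if v is not None)
    print(F)
```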
Now, suppose we are given some ∈ DN R 2 which computes a number that seperates Dom(φ), and let F (e) ∈ DN R k be F (e) = (φ e (x, y)). We now distinguish two cases which we shall use to justify the claim that the finite initial segments of Dom( ) contain exactly the indices of a computable well-ordering of ω which is defined by R e (x, y) by fixing n = x for all x > y belonging to the field of R e (x, y). Case 1: (k = 2) If no ψ ∈ E defines an infinitely descending sequence {φ ei } i∈ω for some i ∈ ω such that i = φ e (x, y) ↓, then F (e) ∈ DN R k+1 and there is some ∈ DN R 2 such that F (e) = φ ei (e i ) ↓ and F (e) ≥ (i) for all e ∈ ω. Case 2: (k > 2) If no ψ ∈ E defines an infinitely descending sequence {φ ei } i∈ω for any i ∈ ω such that i = φ e (x, y) ↓ and i ∈ [0, n], then F (e) ∈ DN R k for all k ≥ 2 and there is some ∈ DN R 2 such that F (e) = φ ei (e i ) ↓ and F (e) ≥ (i) for all e, i ∈ ω with the exception of some fixed i ∈ ω such that φ(e i ) = φ ei (e i ) ↓= ψ(e i ) by Lemma 3.4. It is immediate from both cases that DN R k is upwards closed with respect to the degree of (i) if no ψ ∈ E defines an infinitely descending sequence on ω and E is computably bounded, so if Deg(F ) ≥ Deg( ), then Deg(F ) is contained in DN R k for any k ≥ 2. Now it remains to show that, if there is some F (e) ∈ DN R k which seperates Dom(φ) into a finite, non-empty subset S that contains the index for computing R e (x, y), then Dom( ) contains a number i ∈ ω which serves as the index of a computable well-ordering of ω. Assume that ψ ∈ E defines an increasing computable sequence {φ ei } i∈ω on ω in stages, and each stage is numbered by some n ∈ ω. Then, for any stage ≤ n, we fix the requirement that, given a (possibly infinite) computably enumerable sequence {S e } e∈ω , there are at least finitely many sets of computable inseperable pairs which are simultaneously listed in our increasing sequence, and e S e ∈ {φ ei } i∈ω for all e, i ∈ ω. Suppose we are given some F ′ (e) ∈ DN R k+1 for all k ≥ 2 such that F ′ (e) = (φ(e i )). Now, for any k > 2, because we have excluded some fixed index i ∈ ω such that φ(e i ) = ψ(e i ), then we cannot effectively seperate any i ∈ S e which is distinct from any number in {φ ei } i∈ω that is uniformly computable in φ ei (e i ) ↓. Set (i) = (φ e (x, y)) and fix φ e (x, y) ↓= ψ e (n) when 0 ≤ i ≤ n. Now, let (i) denote the characteristic function which seperates Dom(φ) into a finite subset S := {(e, i) : ∀i ∈ [0, n], φ(e i ) = φ ei (e i ) ↓} which is disjoint from {i ∈ ω : ψ(e i ) ↓} if (φ(e i )) = φ φ(ei) (φ(e i )) ↓ and Deg( (φ(e i ))) ≡ Deg( (i)). On the hypothesis that the class DN R k+1 is upwards closed for any k ≥ 2, then F ′ (e) acts as the characteristic function of Dom(φ) if Deg(F ′ (e)) ≥ Deg( (i)). Now fix F (e) = S e for all e ∈ Dom(φ) such that S e ⊇ S e+1 for e + 1 ≤ k. By induction on e when e + 1 is counted as the number of prime factors for any stage ≤ n, we see that F (e) ∈ DN R k+1 seperates Dom(φ) with respect to the degree of (i) if a nonseperating set S = e S e of indices i ∈ ω which code a computable well-ordering of ω coincides with the finite initial segments of Dom( ) infinitely often. Inductively, we have verified that Deg(F ′ (e)) ≡ Deg(F (e)) if F ′ (e) is bounded by a constant number k for any i ∈ ω listed in our computable sequence {φ ei } i∈ω , so F (e) ∈ DN R k+1 is unique up to degree for k = 2. 
On the hypothesis that DN R k is upwards closed for all k ≥ 2, we observe that the eth computably enumerable set S e in our sequence is seperated by F (e) infinitely often up to the degree of (i) for any stage ≤ n. Thus, for any i ∈ [0, n], we see i ∈ Dom( ) implies that i ∈ S e for all e ∈ Dom(φ) such that F (e) = φ ei (e i ) ↓, and our claim follows as desired. Finally, in order to demonstrate the uniqueness of F (e) ∈ DN R k for all k ≥ 2 with respect to some (i) ∈ DN R 2 , we rely on the following theorem: Theorem 3.13. (Jockusch and Soare [10 pg. 195]) For each k ≥ 2, the degrees of members belonging to DN R k coincide with the degrees of members belonging to DN R 2 up to degree-isomorphism. We can now claim that Deg(F ) ≡ Deg( ) for any k ≥ 2. Thus, we see that ψ(e) ≪ ∞ F (e) holds when Deg(F (e)) ≡ Deg( (ψ(e))) for all e ∈ Dom(φ). Therefore, F (e) ∈ DN R k is unique up to degree-isomorphism with (i) ∈ DN R 2 by induction on the pair of e and the length of i when 0 ≤ i ≤ n and n = x. This concludes the proof of Lemma 3.11. Π 0 1 -Classes of Diagonally Non-Computable Majorizing Functions We now turn to concept of a Π 0 1 -class. Recall that such classes are closed under initial substring when provided with an appropriate Gödel numbering [5]. Formally, we say that a set H ⊆ 2 ω is a Π 0 1 -class if we are able to put H in the following form where V is a computable relation. Following [9], we are able to extend our intuition about these classes in a way that allows us to think of a Π 0 1 -class as the set of infinite paths through a computable subtree of 2 <ω . To make this applicable to our purposes, we form a set of strings (under a suitable Gödel numbering) which include the diagonally majorizing functions that were introduced in Definition 3.7. By Lemma 3.11 there is a non-computable function F (e) ≥ (i) which is constructed so that the string (i) satisfies the properties of a diagonally majorizing function for a class E of computable functions defined on all well-founded initial segments of some arbitary computable well-ordering of ω. Owing to the uniqueness of F (e) ∈ DN R k up degree-isomorphism with some F ′ (e) ∈ DN R k+1 , if Deg(F ′ (e)) ≡ Deg(F (e)) and DN R k is upwards closed for all k ≥ 2, then (i) is of computably enumerable degree. Most importantly, however, is that fact that DN R 2 can be viewed as a nonempty, computably bounded Π 0 1 -class of functions which possess a computably enumerable degree. Additionally, given any Π 0 1 -class DN R 2 , one can construct out of DN R 2 a set C which is closed under initial substring and computably enumerable in a member of low degree 7 . Finally, if the binary relation V defined on C is computable, then C can be thought of as a computable tree containing the class of all infinite paths through DN R 2 . With these ideas in mind, we aim to construct a canonical system O N for notations of recursive ordinals with respect to the uniqueness of the diagonally majorizing functions. In particular, we will pay special attention to the fact that these majorizing functions are applied to a class of computable functions which are defined on all well-founded initial segments of some arbitrary well-ordering of ω and compute indices of computable well-orderings of R e (x, y) up to degree-isomorphism. 
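As a small aside before the construction of O_N, the "infinite paths through a computable subtree of 2^<ω" picture of a Π⁰₁-class can be mimicked at finite levels. The predicate V below is a stand-in of our own choosing (it forbids three consecutive 1s), not one arising from DNR_2; the sketch merely lists, level by level, the strings all of whose prefixes satisfy V — the finite approximations to the class H = {X ∈ 2^ω : ∀n, V(X ↾ n)}.

```python
# Illustrative only: a Pi^0_1 class is the set of infinite paths through the
# computable tree of binary strings all of whose prefixes satisfy a computable
# predicate V. The predicate used here is an arbitrary stand-in of our own.

from itertools import product
from typing import Tuple

def V(sigma: Tuple[int, ...]) -> bool:
    """Stand-in computable predicate: forbid three consecutive 1s."""
    return (1, 1, 1) not in [sigma[i:i + 3] for i in range(len(sigma) - 2)]

def on_tree(sigma: Tuple[int, ...]) -> bool:
    """sigma lies on the tree iff every prefix (including sigma) satisfies V."""
    return all(V(sigma[:n]) for n in range(len(sigma) + 1))

def level(n: int):
    """All strings of length n on the tree: the level-n approximations to the paths."""
    return [s for s in product((0, 1), repeat=n) if on_tree(s)]

if __name__ == "__main__":
    for n in range(6):
        print(n, len(level(n)))      # how many strings survive at each level
```

We now return to the construction of O_N.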
That is, by the fact that, for any ψ which constitutes a unique solution of (3.10) and belongs to the class E, there is a function F (e) ∈ DN R k such that F (e) diagonally majorizes ψ(e), then we can demonstrate that F (e) is unique up to degree-isomorphism with some string (i) ∈ DN R 2 . Now, with respect to the string (i), we construct O N as the set of infinite paths through DN R 2 which uniquely coincide with our diagonally majorizing function F ∈ DN R k up to degreeisomorphism. Accordingly, we provide a formal definition of O N as follows. Let ↾ e denote (i)'s restriction to the set of numbers which compute a recursive well-ordering R e (x, y) of ω with < e := R e . Then we define O N as follows: where L ⊆ 2 <ω × ω is a computable binary relation and y < e x defines an arbitrary recursive well-ordering of ω if x = n whenever ψ e (n) is defined from equation (3.10) as the characteristic function for the well-ordering in question. Furthermore, by reference to the fact that the finite initial segments of the domain of the string ∈ DN R 2 are precisely the numbers contained in the set W of indices of recursive ordinals, then we find that O N is a Π 0 1 (W)-class, in the sense that the relation L( ↾ e, n) is computable in W by the reducibility relation ≤ T , and we have that W is Π 1 1 -complete [3, pg. 4]. 7 For example, since we have that DN R 2 is computably bounded, then the computable function which majorizes each member belonging to DN R 2 will be computable in a member of low degree. Naturally, it is desirable to define the notion of what it means for a given recursive ordinal to possess a notation in O N which reflects the intuitive idea of an ordinal notation belonging to Kleene's O. Let |α| ∈ O N denote the notation for a recursive ordinal α < ω CK 1 . To build on our intuition regarding the ordinal notations in Kleene's O, we will make the convention to view any recursive ordinal α as a certain well-founded tree which is order-isomorphic to some recursive well-ordering defined by R e (x, y) of length α. Hence, in pursuit of the idea that we form O N as the class of infinite paths through DN R 2 with respect to the uniqueness of our diagonally majorizing function F (e) ∈ DN R k , we define the concept of a path P through O N as a subset of O N such that P is closed under initial substring and computable in a member ∈ DN R 2 of computably enumerable degree. Formally, P := { ∈ 2 ω : ∀e∀y < e x, L( ↾ e, y)} . Consequently, if we let y < e x with x = n be an arbitrary recursive well-ordering defined by R e (x, y) as before, then the path P ⊆ O N induces a well-ordering defined by L ⊆ 2 <ω × ω if L is uniformly computably enumerable in some finite subset of W. Moreover, for each finite initial segment of Dom( ), the well-ordering defined by R e (x, y) is coded by some fixed index i ∈ ↾ e. In particular, since the binary relation L( ↾ e, y) is computable in W, then we say that a path P represents a unique notation |α| ∈ O N for some recursive ordinal α < ω CK 1 if the linear-ordering of all intitial substrings in P possesses a standard well-ordered copy |α| which is definable over L and uniformly computably enumerable in some finite subset of W, and |α| is order-isomorphic with α via a one-one map 8 from the field of < e into L. (2) A binary computable relation < O N ⊆ 2 <ω × ω. 
Given two distinct recursive well-orderings α, β obeying the relation α < e β, we let the paths P α and P β denote distinct, linearly ordered subsets of O N such that |α| < O N |β| implies that |α| ∈ P α and |β| ∈ P β for any α < e β. From the distinctness of |α|, |β| it follows that any |α| ∈ O N uniquely corresponds with the recursive ordinal α < ω CK 1 under the ordering of < e . Let ↾ e be as above, where 's restriction to e satisfies the binary relation L( ↾ e, y) for all y ∈ Dom(< e ) such that y < e x implies that ψ e (x) ≺ ψ e (x) + 1 is a computable well-ordering of < e coded by some i ∈ ω. Define, respectively, the path representations for ∅ and α + 1: Now, for the case that we have a path P γ which represents a unique notation for a recursive limit ordinal γ < ω CK 1 , P γ := { ∈ 2 ω : ↾ 3 e ∈ P α ⇐⇒ i ∈ ↾ 3 e ∧ (∀i < e 3 e , L( ↾ 3 e , i))} Proof. Suppose γ is the least recursive ordinal that does not have a unique notation |γ| ∈ O N . By definition, a notation for γ is a uniformly computably enumerable well-ordering which is represented by the path P ⊆ O N of length γ. Throughout, we use γ as an arbitrary limit ordinal defined as lim n→∞ α n . Suppose we are given some path P γ := { ∈ 2 ω : ∀e∀i < e 3 e , L( ↾ 3 e , i)} which represents |γ| ∈ O N as a computable well-ordering of 2 <ω × ω. Assume R e (x, y) defines a well-ordering with order-type γ, the number 3 e ∈ W denotes the index for γ, and φ e (x, y) ↓= R e (x, y). Essentially, our argument adapts the proof of Corollary 5.5 in [24, pg. 20]. [23]) Let < e be a well-founded partial ordering of some set Dom(R e ) ⊆ N, and Q = 2 <ω × ω a binary relation. Assume there is a partial function φ such that, for every n = x ∈ Dom(R e ) and e ∈ ω: ∀y < e n, Q(y, ϕ e (y)) =⇒ Q(n, φ(e, n)). On the hypothesis that ϕ ∈ T OT for each y < e n belonging to the field of < e , fix n = x for all x ∈ Dom(< e ). Now let φ e (x, y) ↓= φ(e, n). If y < e n implies that ψ e (n) ≺ ψ e (n) + 1, then by well-founded induction on < e for all y < e n, Q(y, ϕ e (y)) =⇒ Q(n, φ e (x, y)). Let W denote the set of indices of recursive ordinals and let WF(R e ) denote {e ∈ ω : R e (x, y) is well-founded} when R e (x, y) defines a recursive well-ordering with order-type γ. Without loss in generality, we assume that W ≤ m WF(R e ) since WF(R e ) is Π 1 1 -complete. To derive our contradiction, we now rely on the notion of the height of a computably enumerable, well-founded relation [24, pg. 16]. Assume the relation L defines a well-ordering of ω and the height [R e ] of R e (x, y) is equal to γ. Suppose further that [L] ≤ [R e ] so that we may consider L as a uniformly computably enumerable ordering based on our well-foundedness assumptions. Let [L] = [Q] and have that [R e ] ≤ [3 e ]. By the fact that every wellordering ≤ [3 e ] is recursive, then we may claim that [L] is computable in a uniform manner, since the ordering induced by L is computable in W by definition. Now, let i ∈ ↾ 3 e if, for all e ∈ ω such that [L] ≤ [R e ], we can decide that ↾ 3 e ∈ P γ on the hypothesis that L is uniformly computable in each subset X ⊆ W. [24, pg. 20]) WF(R e ) / ∈ Σ 1 1 . Let γ(x) be an ordinal variable and let ϕ ∈ T OT be a computable function which many-one reduces X ⊆ W to WF(R e ). If γ is infinite and 3 e ∈ X, then we have the relation ∀γ∃y, R(3 e , y, γ(x)) holds with R computable and X the projection of R. Suppose ϕ(e) = ϕ e (y) ↓ if and only if ∀γ∃y, R(3 e , i, y, γ(x)), and let ϕ e (y) ∈ T OT for all y < e x = n. 
Now, we can deduce from the assumption that γ lacks a unique notation for all e ∈ W the fact that i ∈ ↾ 3 e if ϕ e (y) ↓ and ∀γ∀y, R(3 e , i, y, γ(n)) holds for some e ∈ ω such that [R e ] ≤ [3 e ]. That this implies our claim follows from the condition that L is computable in W, so ↾ 3 e ∈ P γ if 3 e ∈ X and hence [L] ≤ [3 e ] by transitivity of ≤. Therefore, ϕ e (y) ↓ if ϕ reduces ∀γ∃y, R(3 e , y, γ(n)) to WF(R e ) for some e ∈ W such that [L] ≤ [3 e ]. This demonstrates that ϕ(e) ∈ W F (R e ) if and only if WF(R e ) ∈ Σ 1 1 , on the assumption that γ is infinite and is not uniquely associated to some notation |γ| ∈ O N represented by P γ . Hierarchical Classification of the Computable Functions on Paths Through O N As shown, to make the claim that one is able to construct a natural system of notations for recursive ordinals is merely to exploit the intuitive belief that, relative to their complexity, certain decision procedures are of a higher difficulty than others. That is, our belief in the existence of any "natural" hierarchy of computable functions should be grounded in the ability to precisely distinguish, at any fixed level of the hierarchy, the ordinal complexity of any two distinct, arbitrarily given decision procedures with respect to some verifiably computable binary relation L ∈ Π 0 1 . By using canonical notations in O N for the recursive ordinals, we are able to preserve the intuitiveness of the idea that any hierarchy of computable functions is linear and everywhere defined if our indexing of the hierarchy define "natural" well-orderings of ω. Owing to this outline, one is able to claim that the notations in O N are precisely "natural" in the sense that they provide an optimal measure of the ordinal complexity of the computable functions with respect to verifying the totality of R e (x, y) as an arbitrary well-ordering of ω. In particular, we are now in the position to analyze the properties that a path P through O N should possess. We follow the outline [22, pg. 7]: (1) P is linearly ordered and closed under predecessors with respect to < O N . Thus we see that Q is Σ 0 1 -definable and monotone, so < ′ O N is inductively definable over Q for some |α| ∈ P and φ ∈ T OT . Owing to the fact that L is computable in O, then a fortiori, it should not be the case that there exists a Σ 1 1 -definition of P with respect to < ′ O N being inductively definable over Q, which is actually the case if one assumes that α is infinite by Theorem 4.2: Let L be as above, and suppose that L ′ represents the extension of L with respect to < O N . Now, if L, L ′ range over "natural" orderings of ω induced by Consequently, we observe that our intended classification of E ∞ on paths through O N is indeed a "natural" classification of E ∞ by generating recursions over the orderings induced by < O N to succesively define larger classes of unnested L-computable functions of increasing complexity. However, because one has that the relation < O N is arithmetical, one may suspect that there exists a certain "absolute" classification of E ∞ on paths through O N with respect to the property that E ∞ is closed under relative computability. To see this, we may suppose that Dom(< O N ) belongs to the class of sets which are inductively definable from a computable binary relation L and comptuable in Π 0 n for any n ≥ 1. 
By reflecting on the definition of < O N and taking the convention that any successive n-fold extension of the relation < O N is ∆ 0 n+1 -complete with respect to the binary relation L being Turing-reducible in Kleene's O, one can sketch how extensions of < O N corresponds with arithmetical definability at some stage. In particular, we may aim to associate any ∆ 0 n+1 -definable extension of < O N in a manner that reflects a successive n-fold jump iteration ∅ (n) of the computable sets ∅ (0) , which we define inductively as follows: ∅ (0) := ∅, ∅ (n+1) := (∅ (n) ) ′ Naturally, any n th -iteration of the jump is Σ 0 n -definable by a direct induction on n ∈ ω. Therefore, because we have made the convention to have < O N ∈ ∆ 0 n , then any extension < ′ O N is ∆ 0 n+1 -complete if and only if it is computable in Σ 0 n . Therefore, it appears worthwhile to investigate the various arithmetically definable closure properties that such an "absolute" classification of E ∞ would possess with respect to iterating the jump operator through the effective transfinite for each unique notation. Moreover, because it follows from clauses (A-C) that there exists a "natural" classification of the computable functions indexed by unique notations in O N , then one may suspect that there exists a level of collapse which is obtained by defining a transfinite iteration of the jump operator along paths through O N . In a sense, the existence of a collapsing level in the classification of the computable functions on paths through O N would mirror the fact that in Kleene's O, there exists a Π 1 1 -definable subset X of O which is linearly ordered by < O and of order-type ω CK 1 [23, pg. 10]. However, as is known, the system O is infinitely branching at recursive limits ≥ ω, and so such a classification with respect to X fails in any absolute manner. Now if we wish to iterate the jump operator into the effective transfinite for any α < ω CK 1 , our hope is to rely on the fact that every recursive ordinal possesses a unique notation in O N with respect to the relation < O N being inductively definable over a computable binary relation L, such that L is reducible in ∅ (α) . We now sketch our means of defining this transfinite iteration of the jump operator along paths through O N . Suppose λ < ω CK 1 is a recursive limit ordinal which possess a canonical notation |λ| ∈ O N , and this notation is represented by a binary relation L on a path P ⊆ O N which is computable in some X ∈ 2 ω such that < O N is inductively definable over L. Then we define ∅ (λ) as the λth jump of X by iterating the Turing jump at successor stages α + 1 and taking an effective limit at stage λ if there is some φ ∈ T OT such that φ = φ |λ| for some unique |λ| ∈ P. Note that this would follow from (III) if we have that φ is an unnested L-computable function. Furthermore, since we may appeal to the fact that the jump operator is degree invariant with respect to L being reducible in ∅ (λ) , then it is clear that the naturalness of iterating the jump operator along paths through O N is also preserved at limit stages in the order-theoretic sense, and therefore one can classify E ∞ on paths through O N independently of the "naturalness" of the recursions defined over the orderings induced < O N of length ≥ ω. Now, in order to provide a more formal setting for discussing the meaning of a collapsing result for an absolute classification of E ∞ , we provide the following definition: Definition 5.2. 
Let P ⊆ O N be a path through O N which is closed under initial substring. We say P is maximal with respect to < O N if every initial segment of O N is contained in P under the ordering of < O N with respect to predecessors. By proceeding inductively through the class of constructive ordinals, one might conjecture that this collapsing result would occur at the degree ∅ (ω CK 1 ) of Kleene's O, in the sense that there exists a maximal path P ⊆ O N such that, for any φ ∈ T OT , we have that φ = (φ |λ| ) |λ|∈P for some limit λ, and the binary computable relation L on P is reducible to ∅ (ω CK Concluding Remarks. I would like to thank Professor Solomon Feferman for reviewing an earlier draft of this manuscript and providing encouragement in the process of its completion. Additionally, I would like to thank Professor Jeffry Hirst for looking over the typesetting and exposition.
2017-03-16T16:49:57.000Z
2017-02-16T00:00:00.000
{ "year": 2017, "sha1": "c10bc8da9a1cd28ea834c3221d57fdd412471bf9", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "c10bc8da9a1cd28ea834c3221d57fdd412471bf9", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
88506620
pes2o/s2orc
v3-fos-license
Analysis of Yellow Water in Liquor Fermentation with Sensor Array Yellow water is a by-product of liquor in the solid state fermentation process, and contains a large amount of nutrients, such as acids, esters, alcohols and aldehydes produced by fermentation. The components in the yellow water reflect the fermentation information to a certain extent, so the fermentation process can be monitored by detecting the yellow water component online. A sensor array detection device is designed for detecting yellow water. In addition, chemical titration is used to obtain data such as acidity, reducing sugar and starch of yellow water. Principal component analysis and discriminant function analysis were performed on the data; and a multivariate linear regression was used to establish a prediction model for the data. The results showed that the prediction bias for acidity and alcohol was small, 0.39 and 0.43, respectively. Introduction Yellow water is a by-product of the brewing process of Luzhou-flavor liquor.It is also known as yellow pulp water.Its characteristics are: brownish yellow viscous liquid, and rich in beneficial microorganisms, such as alcohols, aldehydes, acids, esters etc. [1] [2].At present, the main research and utilization of yellow water in the wine industry is as follows: 1) Judging the quality of fermentation by using the content of yellow water; 2) Esterifying liquid for fermentation; 3) for mixing new earthworms after mixing with mud and mother grains; 4) direct distillation to obtain base wine [3] [4] [5]. The content of yellow water not only reflects the information of fermentation; in addition, it also contains extremely rich organic matter, which has a very high utilization value.The detection and analysis of the yellow water component is beneficial to more comprehensive monitoring of the fermentation process and to promote the full utilization of yellow water. At present, for the yellow water detection, chemical titration and liquid chromatography [6], gas chromatography and mass spectrometry are mainly used [7].Their disadvantages are that other solvents need to be configured, the detection time is long, and the sample pretreatment is cumbersome [8]. The sensor detects information about a specific object, such as temperature, light, pressure, etc., and reacts them into electrical signals according to certain rules.When the detected object changes, it will cause a change in the resistance value of the sensor, and the output voltage signal will also be different.In addition, the response signals of different sensors are also different, so it is possible to construct a sensor array, such as an electronic tongue [9] [10].The sensor's response signal is conditioned and output, and the relationship between the sample object and the response electrical signal is analyzed through signal processing and pattern recognition, thereby quickly detecting the sample.Its characteristics are: the sample does not need pre-treatment, the sample consumption is small, the operation is simple, and the measurement is convenient and fast.It can comprehensively evaluate the taste information of the sample to identify the sample, and also quantitatively analyze some components. The main research progress in the field of sensor arrays is as follows.Tian et al. 
used a multi-frequency pulse Sensor array to detect and analyze wines of dif- ferent ages to predict the age of wine, using principal component analysis (PCA) and partial least squares-discrimination analysis (PLS-DA) for pattern recognition to distinguish between different age samples [11].Rudnitskaya et al. used electronic tongue combined with high performance liquid chromatography (HPLC) to predict and analyze the age of wine and the acids and esters contained in it.PCA regression model was used to show that the electronic tongue can be compared.Goodly detect its concentration [12].Winquist et al. established an online detection system for milk using an electronic tongue, using PCA to distinguish milk of different quality and origin [13].Du Hongfu et al. used the electronic tongue combined with HPLC to analyze the fermentation process of vinegar, and used BP neural network to establish a prediction model for acetic acid and lactic acid, the main components of vinegar [14].At present, the electronic tongue has been widely used in the food industry, such as Chinese vinegar, red wines, and organic acids [15] [16] [17], but there are few studies on the parameter detection and analysis of yellow water. Instruments and Equipment During the fermentation of liquor, the dissolved oxygen is gradually reduced, and the starch in the wine cell is converted into glucose, yeast, lactic acid bacteria and the like into alcohol and acid by the mold, and the ionization degree of these products is different.In this study, based on the change of dissolved oxygen in the fermentation process and the change of the conductivity of yellow water caused by the product, a sensor array composed of an oxygen electrode and a conductivity electrode was designed.Figure 1 shows the PCB diagram of the device. Materials and Reagents The experimental sample is a sample of yellow water collected from a winery in Yibin, Sichuan.The main experimental reagents are sodium hydroxide, copper sulfate, and phenolphthalein. Sensor Array Measurement The sensor array was preheated in a 3.5 mol/L KCl solution prior to detection. The pretreated sample was then tested and the electrode was washed with 0.01 mol/L KCl solution before each measurement.The sensor array detection parameter setting is set to start from +1 V, the step-down voltage is 0.2 V until −1 V; the precision is set to 10 −3 ; and the parallel measurement is performed at 3 frequencies of 1 Hz, 10 Hz, 100 Hz, repeated three times, recorded and saved data.2) Find the covariance matrix C. Analysis Method 3) Find the eigenvalue of the covariance matrix C, λ. Combined with the maximum variance theory, the eigenvector corresponding to the largest eigenvalues is the projection direction containing the most signals. The largest pre-k bits are selected, the eigenvectors are calculated and normalized, and k eigenvectors are obtained as eigenvector matrices composed of column vectors Eigencector (n • k). 4) Mapping the original data to obtain dimensionally reduced data. 
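The principal component analysis steps listed above (standardize, form the covariance matrix, keep the leading eigenvectors, project) can be condensed into a short numpy sketch. This is a generic illustration written by us, not the authors' SPSS workflow; the array shape (each yellow-water measurement flattened to 6 sensors × 3 frequencies × 40 points) is an assumption based on the description in the text.

```python
# Minimal PCA sketch following the steps described in the text.
# Shapes are assumed: m samples, each flattened to n = 6*3*40 features.
import numpy as np

def pca(data: np.ndarray, k: int):
    """Return the k-dimensional projection and the explained-variance ratios."""
    # 1) standardize each feature (zero mean, unit standard deviation)
    adjusted = (data - data.mean(axis=0)) / data.std(axis=0, ddof=1)
    # 2) covariance matrix of the standardized data
    cov = np.cov(adjusted, rowvar=False)
    # 3) eigen-decomposition; keep the k eigenvectors of largest eigenvalue
    eigval, eigvec = np.linalg.eigh(cov)           # ascending order for symmetric cov
    order = np.argsort(eigval)[::-1][:k]
    components = eigvec[:, order]
    explained = eigval[order] / eigval.sum()
    # 4) map the standardized data onto the retained components
    return adjusted @ components, explained

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake = rng.normal(size=(48, 6 * 3 * 40))       # stand-in for the 48 training spectra
    scores, ratio = pca(fake, k=11)                # 11 components, as in the study
    print(scores.shape, ratio.cumsum()[:4])        # cumulative variance of first 4 PCs
```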
Discriminant Function Analysis Discriminant function analysis (DFA) is a statistical method for discriminating the type of sample.The data obtained by the sensor detection samples are recombined to maximize the difference between the components while keeping the difference within the group small, so that the distance between the centers of the groups is maximized, thereby establishing a discriminant function for discriminating and distinguishing the samples.DFA classification is good and easy to implement.It is one of the commonly used pattern recognition methods for electronic nose and sensor arrays. Commonly used discriminating methods are: distance discriminant method, Fisher discriminant method, Bayes discriminant method, stepwise discriminant analysis, and the like.The discriminant analysis method of distance has no spe-B.Chen et al. cific requirements for various types of distribution, and is judged according to various center of gravity (average of each group).For a given observation, if it is closest to the center of the i-th class, it is determined to be from the i-th class. The discriminant function analysis based on the distance from the sample to each parent has the advantages of less discriminant function and simple calculation.And there is no special requirement for the data as a whole, so this paper chooses the distance discriminant function for sample analysis. Test Results Samples with numbers 1 to 12 were trained, and samples 13 to 16 were used as prediction sets.Physical and chemical measurements are obtained as shown in Table 1. Principal Component Analysis Data collection was performed on 16 samples, and after the bad points were removed, the original data was obtained.The data of each sample was composed of 6 sensors × 3 frequencies × 40 samples per frequency.The 48 sets of data obtained by repeating the first 12 samples each were used as a training set, and the remaining 4 sets were repeated 4 times to obtain 16 sets of data as a test set.In the SPSS, the principal component analysis is used to reduce the dimension to obtain 11-dimensional data.The KMO (Kaiser-Meyer-Olkin) and Bartlett tests show that the KMO value is greater than 0.6 and the sig value is less than 0.05, indicating that the results obtained by principal component analysis meet the requirements.The cumulative variance contribution rate of the first 4 dimensions is 86%, and the front three-dimensional principal component analysis chart is shown in Figure 2. It can be seen from the figure that the principal component obtained by dimensionality reduction is significantly representative of the original sample data. Discriminant Function Analysis In the SPSS, the linear discrimination analysis (LDA) is performed on the first 12 sample data, and the discriminant function analysis graph is obtained by using the stepwise discriminant method, as shown in Figure 3. 
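The distance-based discriminant rule chosen above — assign each observation to the class whose centre of gravity it is nearest to — is easy to sketch. The code below is our own minimal nearest-centroid illustration on PCA scores, not a reproduction of the SPSS stepwise LDA; the variable names and shapes (12 yellow-water classes, 4 repeats, 11 scores) are assumptions taken from the description in the text.

```python
# Minimal distance-discriminant sketch: classify each observation to the class
# whose centroid (group mean in PCA-score space) is closest.
import numpy as np

def fit_centroids(scores: np.ndarray, labels: np.ndarray) -> dict:
    """Group mean of the training scores for every class label."""
    return {c: scores[labels == c].mean(axis=0) for c in np.unique(labels)}

def predict(centroids: dict, x: np.ndarray):
    """Nearest-centroid decision for a single score vector x."""
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    labels = np.repeat(np.arange(12), 4)                       # 12 samples x 4 repeats
    scores = rng.normal(size=(48, 11)) + 2.0 * labels[:, None] # separable toy clusters
    cents = fit_centroids(scores, labels)
    preds = np.array([predict(cents, x) for x in scores])
    print("training agreement:", (preds == labels).mean())
```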
Multiple Linear Regression Analysis Linear multivariate regression analysis was performed in SPSS.The obtained principal component factors were used as independent variables, and total sugar, total acid, starch and alcohol were sequentially used as dependent variables, and stepwise regression was selected.The regression obtained shows that the adjusted R2 reaches 70%, the linear equation reflects 70% of the actual data, and the DW (Durbin-Watson) statistic is 1.69 close to 2, indicating that there is no sequence correlation in the obtained data; the sig data are less than 0.05, indicating that the significant influence of the independent variable on the dependent variable meets the requirements; VIF (Variance inflation factor) is less than 10, indicating that there is no collinearity between the variables; in addition, the residual diagnosis is basically consistent with the positive distribution. The fit of the model and the simulated sample is as follows: Figure 4 is the acidity fitting pattern. Figure 5 is the starch fitting pattern. Figure 6 is the sugar fitting pattern. Figure 7 is the alcohol fitting pattern. As can be seen from the figure, the fit of each model data is sufficient, and the model can accurately represent the sample data. Sample Prediction The prediction result unknown to the model is evaluated by calculating the prediction standard deviation. Conclusions and Prospects From the experimental results of PCA and DFA, the sensor array can distinguish different samples of yellow water.The correction decision coefficient of the model has a more complete explanation.The final prediction results show that in addition to the slightly larger prediction error of starch, the sensor array's prediction of alcohol, acidity and reducing sugar basically meets the needs of actual production testing.This shows that the sensor array can be applied to the yellow water detection, especially the online detection in the fermentation process, and the important parameters of the fermentation can be obtained in real time. 2. 3 . 1 . Determination of Physical and Chemical IndicatorsThe total acid content was determined by potentiometer; the ethanol content was obtained by alcohol; the reducing sugar content was determined by Ferien reagent titration; the starch was first hydrolyzed to reducing sugar, and then the Ferien reagent titration method was used (DB34T 1728-2012, GBT 10345-2007)[15] [16], the final calculation of starch content. Figure 1 . Figure 1.PCB schematic of the sensor array. 2. 4 . 1 . Principal Component Analysis Principal component analysis (PCA) is the linear transformation of multiple variables into a small number of integrated variables to represent the raw data.The advantage is to eliminate the impact of correlation and fully extract effective information and reduce workload.The disadvantage is that the accuracy of the expression is reduced, and only the main component dimension is significantly reduced and a large amount of original information is retained to reflect the advantages of PCA.Principles and steps of PCA: Assume that the number of samples is m, each with n features, DATA (m • n). 1) Standardization: Find the mean and standard deviation of each feature separately.The normalized matrix DATA adjust (m • n) is obtained by subtracting the average value of each data in DATA and dividing by each standard deviation, and the origin is located at the center of each sample point. 
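A bare-bones version of the regression step described above — principal-component scores as predictors, a measured quantity such as acidity as the response, fitted by ordinary least squares and evaluated by the prediction standard deviation — is sketched below. It is an illustration under assumed shapes and invented coefficients, not the SPSS stepwise procedure (no variable selection, Durbin-Watson, or VIF diagnostics are computed).

```python
# Minimal least-squares sketch of the prediction step: PCA scores -> acidity.
# Data, shapes, and coefficients are stand-ins of our own.
import numpy as np

def fit_ols(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Ordinary least squares with an intercept column; returns coefficients."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta

def predict(beta: np.ndarray, X: np.ndarray) -> np.ndarray:
    return np.column_stack([np.ones(len(X)), X]) @ beta

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    X_train, X_test = rng.normal(size=(48, 4)), rng.normal(size=(16, 4))
    true_beta = np.array([5.0, 0.8, -0.3, 0.5, 0.1])           # invented coefficients
    y_train = predict(true_beta, X_train) + rng.normal(scale=0.3, size=48)
    y_test = predict(true_beta, X_test) + rng.normal(scale=0.3, size=16)

    beta = fit_ols(X_train, y_train)
    resid = y_test - predict(beta, X_test)
    pred_sd = float(np.sqrt(np.mean(resid ** 2)))   # prediction standard deviation
    print("prediction standard deviation:", round(pred_sd, 2))
```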
The discrimination index (DI) in the figure reached 99.9%. It can be seen that the clustering trend of the samples on the discriminant classification map is obvious, and the distance between the different sample groups is large, indicating that the discriminant function can effectively classify the yellow pulp water samples from different fermentation conditions according to their different components. Moreover, the discrimination results for the 12 different samples were consistent with the actual classes, indicating that no misclassification occurred. Table 1. Content of substances in yellow water.
2019-02-12T08:34:28.767Z
2019-01-07T00:00:00.000
{ "year": 2019, "sha1": "96359210e4edc8df48972b19c68a9f406a64a46b", "oa_license": "CCBY", "oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=89719", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "96359210e4edc8df48972b19c68a9f406a64a46b", "s2fieldsofstudy": [ "Agricultural And Food Sciences", "Chemistry" ], "extfieldsofstudy": [ "Chemistry" ] }
247851101
pes2o/s2orc
v3-fos-license
Sarcopenic Obesity: An Emerging Public Health Problem Population aging and the obesity epidemic are important global public health problems that pose an unprecedented threat to the physical and mental health of the elderly and health systems worldwide. Sarcopenic obesity (SO) is a new category of obesity and a high-risk geriatric syndrome in the elderly. SO is associated with many adverse health consequences such as frailty, falls, disability, and increased morbidity and mortality. The core mechanism of SO is the vicious circle between myocytes and adipocytes. In order to implement effective prevention and treatment strategies and reduce adverse clinical outcomes, it is essential to further our understanding of SO in the elderly. Herein, we reviewed the definition, diagnosis, epidemiology, pathogenesis, and treatment of SO in older adults. . The potential etiology and pathogenesis of sarcopenic obesity (SO). In the process of aging, an unhealthy diet, sedentary habits, changes in body composition, hormone changes and a variety of chronic diseases in elderly cause chronic lowgrade inflammation, oxidative stress, and insulin resistance, resulting in SO. The core mechanism of SO is the vicious circle between myocytes and adipocytes. SKM, skeletal muscle; MPS, skeletal muscle protein synthesis; MPB, muscle protein breakdown; GLUT4, glucose transporter type 4; IGF-1, insulin-like growth factor 1; IL, interleukin; MCP-1, monocyte chemoattractant protein 1. Definition and measurement Baumgartner first proposed the concept of "Sarcopenic Obesity" in 2000 and defined it as a phenotype of copresence of sarcopenia and obesity [18]. This definition was supported by a more recent critical appraisal of the definition and diagnostic criteria of SO based on a systematic review which noted that most existing studies defined SO based on the co-existence of obesity and sarcopenia [19] (Table 1). SO is a complex geriatric syndrome characterized by an aged-associated reduced muscle mass and dysfunction and excess adiposity [11,20]. Therefore, individuals with SO have a double burden of malnutrition and are at an increased risk of frailty, disability, morbidity, and mortality. Due to the lack of consensus on the definition and diagnostic criteria for SO, accurate diagnostic assessment of SO is extremely challenging. Further, previous studies were characterized by significant differences in the measurement methods used to define sarcopenia and obesity ( Table 2). A systematic review reported that there are 19 methods to evaluate sarcopenia and 10 methods to evaluate obesity. The most used methods to define sarcopenia and obesity are appendicular skeletal muscle (ASM) divided by weight (ASM/wt) or adjusted by height in meters squared (ASM/h 2 ) and body mass index (BMI) or percentage of body fat (PBF), respectively [19]. Rough indicators such as weight and BMI are not recommended for the assessment of body composition in the elderly as these cannot distinguish between fat and muscle mass. Dual-energy X-ray absorptiometry (DXA) is a reliable technique for body composition analysis owing to its safety, repeatability, and accuracy; however, it is associated with a risk of radiation exposure [31]. Bioimpedance analysis (BIA) is a quick and portable technique for measuring body composition [32], which is suitable for large-scale epidemiological investigations and can replace DXA. 
Although computed tomography and magnetic resonance imaging are more accurate body composition analysis methods, they have limited clinical applications because of their high cost, radiation exposure, and need for qualified personnel [33]. Definition of sarcopenia Sarcopenia was first defined by Rosenberg in 1989 [34]. Sarcopenia refers to a group of age-related syndromes of decreased skeletal muscle content, decreased muscle strength, and muscle dysfunction, which can cause weakness, disability, and falls [35]. The ICD-10 code for sarcopenia was introduced in 2016 (M62.84), facilitating the assessment, diagnosis, and treatment of sarcopenia [36]. Diagnosis of sarcopenia Sarcopenia is defined by a variety of variables such as skeletal muscle mass (SMM), muscle strength, and physical performance ( Table 2). SMM can be calculated using the following measurements: 1) ASM / ht 2 [24]; 2) ASM/wt [37]; 3) based on residual height correction and total fat muscle mass [38]; 4) ASM adjusted by BMI [26]; 5) unadjusted or absolute appendicular lean muscle [26]; and 6) unadjusted or adjusted body mass, height, or BMI [24]. Studies have shown that SMM is not linearly related to muscle strength. Muscle strength decays faster than SMM and is a more valuable indicator of the overall health of the elderly [39]. Muscle strength can be evaluated by measuring grip strength using a hand dynamometer, or by measuring knee extension strength [40]. The assessment measures of physical performance include gait speed, short physical performance battery, and timed up-and-go [5,19]. Diagnostic criteria for sarcopenia have been proposed by different international working groups. The International Sarcopenia Working Group defined sarcopenia as a decrease in lean tissue and physical function of the whole body or limbs (walking speed ≤ 1/s) [25]. The European Working Group for the study of Sarcopenia (EWGSOP) defined sarcopenia based on the combination of SMM, assessed using DXA or BIA, and muscle function, indicated by muscle strength or performance [24]. In the clinical setting, EWGSOP recommends the assessment of walking speed to evaluate frailty, with a threshold of < 0.8 m/s; patients with sarcopenia and impaired physical performance (gait speed ≤ 0.8 m/s) are considered to have severe sarcopenia. Regarding grip strength, the EWGSOP has proposed different cut-offs based on an individual's BMI. The 2019 updated version of the EWGSOP2 consensus recommends that muscle strength should be measured before SMM, and that sarcopenia should be suspected in patients with reduced muscle strength [29]. The Foundation for the National Institutes of Health (FNIH) Sarcopenia Project proposed to define sarcopenia as low SMM, low muscle strength, and physical decline. The FHIH recommends the use of DXA to measure SMM, with corrections for BMI [26]. The Asian Working Group for Sarcopenia (AWGS) [27] proposed diagnostic criteria for sarcopenia applicable to Asian populations, using grip strength and physical function for preliminary screening. The diagnostic process for sarcopenia proposed by the AWGS includes a detailed protocol that involves self-assessment, preliminary screening, diagnosis, and severity assessment. In the primary care setting and hospital setting, preliminary screening of sarcopenia can be based on a measurement of calf circumference (<34 cm in men, < 33 cm in women), the SARC-F scale (≥4), or SARC-Calf scale (≥11). DXA or BIA can be used to improve SMM measurements during hospitalization. 
The AWGS defines sarcopenia as a decrease in SMM and muscle strength or physical activity. The patients with reduced SMM and decreased muscle strength and reduced physical activity are considered to have severe sarcopenia [30]. Diagnostic criteria for obesity In the SO study, obesity was defined as BMI ≥ 30 kg/m 2 [4], increased PBF (men ≥ 27% or 28%, women ≥ 35%, 38%, or 40%, depending on specific study criteria) [5,6], and waist circumference higher than the populationspecific quartile [41] or higher than the World Health Organization (WHO)-recommended waist circumference (male ≥ 102 cm, female ≥ 88 cm) [42]. The American Association of Clinical Endocrinologists (AACE) proposed the use of PBF to define obesity, where PBF > 25% and > 35% represent obesity in males and females, respectively [25]. So far, there are no cut-off points for BMI, PBF, and waist circumference for obesity in the elderly. Adipose tissue can be separated into subcutaneous adipose tissue (SAT) and visceral adipose tissue (VAT). Currently, there is a lack of relevant diagnostic guidelines to define obesity based on SAT and VAT. Indeed, some scholars have proposed that future studies should focus on distinguishing between sarcopenic subcutaneous obesity and sarcopenic visceral obesity and use the standardized VAT/SAT ratio to diagnose SO [43]. Prevalence Due to the heterogeneity of the definition of SO, the reported prevalence of SO is variable and ranges from 2.75% to 20% or more [19]. Further, the prevalence of SO differs according to gender, race, and age because of the different standards adopted by different countries. A systematic review reported that the global prevalence of SO in the elderly was 11% [3]. It also showed that the overall morbidity rate of SO in the elderly aged 75 and older was 23%, indicating that the prevalence of SO increases with age. The potential causes include the changes in hormones and body composition (muscle atrophy and adipose tissue accumulation) caused by aging. There was no sex difference in the prevalence of SO among the elderly, suggesting that both women and men are at a high risk [5,6]. The prevalence of SO was higher in South America and North America, and the pooled prevalence of SO was higher in inpatients than in community residents, indicating that malnutrition and immobility are linked to the development of SO in the elderly in the hospital. Etiology and Pathogenesis The etiology and pathogenesis of SO are intertwined and intricate. The core biological factors leading to SO are changes in body composition related to aging, hormonal changes, the interplay between metabolism and inflammation, environmental factors (unhealthy diet and lack of exercise), and chronic diseases [5,6,11,44]. Aging and obesity cause atrophy of fast type II muscle fibers and a switch to slow type I muscle fibers and neurodegeneration, leading to loss of muscle neurotrophic effects and promotion of intramyocellular lipid (IMCL) deposition. A prominent manifestation of SO is anabolic resistance (AR), which is characterized by reduced skeletal muscle protein synthesis rates and increased muscle protein degradation rates [11,45]. The key pathophysiological mechanism of SO is a vicious cycle between myocytes and adipocytes [6]. Obesity is characterized by the expansion of adipose tissue, which leads to adipose tissue inflammation and dysfunction, leading to the over-production of fatty acids. 
When the number of fatty acids exceeds the oxidation capacity of skeletal muscle, IMCL [46] is formed, and this affects the function of the GLUT4 transporter. This subsequently leads to reduced glucose utilization and increased fatty acid oxidation in mitochondria, which leads to impaired insulin sensitivity of skeletal muscle, inhibition of mitochondrial respiration, reactive oxygen species formation, muscle cytotoxicity, catabolism, and inflammation. Muscle intercellular adipose tissue and IMCL are characterized by dysregulation of adipokines and cytokines (↑TNF-α, ↑IL-6, ↑leptin, ↑IL-1β, ↑MCP-1, ↓adiponectin), which induce IR and lipotoxicity, and eventually lead to sarcopenia [47][48][49][50]. At the same time, adipose tissue enhances the secretion of pro-inflammatory actin in muscle tissue. On the other hand, myocytokines (↓IL-15, ↓irisin, ↓IGF-1, ↑myostatin, impaired IL-6 secretion) may lead to muscle atrophy and dysfunction, they may play an endocrine role to aggravate fatty tissue inflammation and propagate a pro-inflammatory state between myocytes and adipocytes [5,[49][50][51][52]. A summary of the possible mechanisms is shown in Figure 1. Age-related changes in body composition Under the influence of lifestyle factors and hormone levels, body composition changes significantly with age. The main changes are an increase in total fat mass, which peaks between 60 and 75 years old, and a decrease in peripheral subcutaneous fat, preferential accumulation of visceral fat, and ectopic fat infiltration in various organs. By comparison, SMM and muscle strength start to decrease from approximately 30 years of age, and the rate of decline of muscle mass accelerates significantly in adults over 60 years old [53]. Therefore, the body weight of older people is mainly composed of adipose tissue rather than lean tissue [5,6]. Hormonal changes Hormonal changes related to aging include insulin resistance, decreased thyroid hormone level, and increased levels of cortisol, growth hormones, insulin-like growth factor (IGF-1), sex hormones, and dehydroisoandrosterone sulfate, which all contribute to SO. In postmenopausal women, body composition changes result in increased adipose tissue, visceral fat infiltration, and decreased SMM [5]. In men, the decline in testosterone levels with aging has an adverse effect on the distribution of muscle and adipose tissue [5]. Inflammation and metabolism SO is considered to represent a sub-acute, chronic proinflammatory state, which hinders metabolic processes (oxidative stress and insulin resistance), destroys the function of adipose and muscle, and increases the risk of chronic disease [11,53]. Recent studies have shown that there is a key crosstalk between metabolism and inflammation, which has led to increased focus on the concept of metabolic inflammation [11]. In SO, adipocytes accumulate in muscle tissue and other organs (heart, liver and pancreas and so on) and secrete proinflammatory cytokines (TNF-α, IL-6, IL-1 and leptin), thus leading to the infiltration of inflammatory cells and inducing insulin resistance and lipotoxicity, which directly affects skeletal muscle and accelerates muscle protein degradation and apoptosis, and promotes muscle tissue reduction and adipose tissue accumulation through inflammation and oxidative stress [5,6,54,55]. The levels of IL-6 and TNF are increased by leptin, thereby reducing the anabolism of IGF-1 [56]. The decrease in IGF-1 and age-related testosterone levels increases the incidence of frailty [57]. 
Adiponectin is inversely correlated with age and obesity and counteracts the effect of leptin. The increase in TNF can directly inhibit the effect of adiponectin and inhibit the synthesis of muscle proteins and mitochondrial function. Obesity can also cause leptin resistance, resulting in reduced breakdown of lipid oxidative products and ectopic fat deposition [57]. Myocyte mechanism Numerous molecules (TNF-α, IL-6, IL-1, adiponectin, leptin, muscle somatostatin, sex hormones (testosterone and estrogen), growth hormone, insulin and glucocorticoid, and irisin) have been implicated in the pathogenesis of SO [50]. Aging stimulates fat to infiltrate muscles, and obesity promotes fatty infiltration of other organs such as the liver, pancreas, and heart. Lipid deposition in muscle cells promotes lipotoxicity and inflammation and induces the de-differentiation of mesenchymal progenitor cells expressing adipose tissue genes. Impaired muscle regeneration capacity may lead to fibrosis of muscle tissue, impaired mitochondrial function, increased production of reactive oxygen species, upregulation of myostatin expression, impaired fatty acid oxidation, and reduced lipolysis, thereby promoting insulin resistance and impairing muscle function [5]. Influence of environmental and chronic diseases The onset of SO is influenced by several lifestyle factors, of which the most important are dietary changes and lack of physical activity. Aging itself leads to obesity and reduced physical activity. Further, the dietary pattern of elderly people, which is often characterized by insufficient protein intake combined with excess dietary calorie intake that is rich in saturated fatty acids, coupled with a sedentary lifestyle promotes the occurrence of sarcopenic obesity [11]. SO shares a common pathogenic mechanism with a variety of chronic diseases such as diabetes, cardiovascular disease, and cancer, among others [7,8,49]. SO can lead to a variety of pathophysiological changes, such as excessive secretion of pro-inflammatory cytokines by adipose tissue, changes in the expression of adipocytokines by adipocytes, and fat accumulation in muscle [6,12]. Skeletal muscle cell atrophy reduces the expression of GLUT4 in muscle tissue and decreases the demand for insulin-dependent glucose uptake [58]. The pro-inflammatory state and lipid accumulation in muscle fibers induce phosphorylation and deactivate insulin receptors and their substrates [19], resulting in insulin resistance and AR. Insulin resistance is the core mechanism of SO associated with cardiovascular metabolic diseases and cancer [49,50]. Preventive and therapeutic strategies At present, the optimal treatment of SO has not been established. Nutritional interventions, such as a hypocaloric diet, and exercise training or physical therapy are the mainstay of SO prevention and treatment to achieve changes in body composition (muscle gain and fat reduction) and improve the functional status and quality of life of elderly patients. However, solely focusing on weight loss per se is not desirable for the elderly because weight loss may actually pose health risks such as loss of muscle and bone mass. Diet intervention strategies The nutritional strategies for the prevention of SO in the elderly include hypocaloric diets and high protein and micronutrients supplementation [5,6]. 
Extremely lowcalorie diets and rapid calorie restriction for the management of sarcopenic obese older adults are strongly discouraged, because they can have harmful effects on SMM, bone mineral density, and the micronutrient status, and increase the risk of hypovolemia and electrolyte disorders. [5,6]. Instead, the optimal and safe range of calorie restriction is about 200-750 kcal per day [59]. It is recommended that elderly people should consume larger amounts of high-quality protein (aiming for 1-1.2 g/kg/d) [5], with an even higher intake (1.2-1.5 g/kg/d) [60] recommended for elderly patients with sarcopenia or other chronic diseases; however, patients with renal insufficiency should monitor their protein intake. Intake of dietary essential amino acids (EAAs), and especially high leucine content, promotes muscle protein synthesis [5]. Ensuring sufficient intake of trace elements could improve several sarcopenic parameters and physical frailty, with most guidelines recommending supplementation with 1200 mg of calcium and vitamin D (800 to 1000 IU daily) [61]. The American Academy of Geriatrics recommends a daily intake of vitamin D3 (1,000 IU) and calcium in the elderly non-hospitalized population older than 65 years to maintain serum vitamin D levels ≥ 30 ng/ml [62]. A recent review proposed a new approach to dietary recommendations based on the gut microbiota profile in patients with SO based on the finding that a high-protein diet with an elevated concentration of EAA and increased dietary fiber intake may promote the eugenics of the intestinal microbiota [63]. Exercise interventions Physical activity (aerobic exercise, resistance exercise, and combination training) is a powerful treatment strategy to counteract one or more of the biological effects of SO and has been demonstrated to promote insulin sensitivity [7,64], reduce oxidative stress [65], induce mitochondrial biosynthesis, ameliorate inflammation, and eliminate muscle cell apoptosis, among other positive effects [5,58,[63][64][65][66]. Aerobic exercise can ameliorate cardiopulmonary function and reduce mortality [5,67], and resistance training is effective in enhancing muscle function and strength in the elderly [68]. Because the elderly often suffer from various chronic diseases, a tailored exercise program that considers these comorbidities and associated physical limitations is recommended by most guidelines. Aerobic exercise should aim for a peak heart rate of 65% with a target heart rate zone of 70%-85% of the peak. On the other hand, resistance training should focus on 1-2 muscle groups and include 8-12 repetitions, with an initial intensity of 65% of 1 repetition maximum (1RM), aiming to reach 2-3 repetitions with an intensity of 75% 1RM. Resistance training should aim to achieve fatigue rather than exhaustion to prevent musculoskeletal injury [60,69]. Emerging treatments for SO Although many emerging pharmacological interventions have been studied, such as testosterone supplementation [13], selective androgen receptor modulators [14], myostatin inhibitors [15], and anti-obesity drugs [5,6], there are no approved drugs for the treatment of SO in the elderly. A recent systematic review of treatment strategies for SO showed that electrical acupuncture and whole-body electromyostimulation associated with nutritional supplementation are new and effective strategies to induce changes in body composition [16]. 
Whole-body vibration therapy has become a safe and convenient technique to induce neuromuscular activation and stimulate skeletal muscle contraction [70]. A randomized controlled clinical trial of 90 elderly men found that whole-body vibration therapy significantly increased muscle strength and physical function in older people [71]. This therapy has the potential to replace traditional exercise for the treatment of SO among elderly people, as it is better tolerated and can reduce fat mass (FM) and increase muscle strength. Nevertheless, this therapy is still in the research stage, and further clinical studies are needed to verify its efficacy in routine clinical practice. A recent study demonstrated that the adenosine A2B receptor (A2B) is highly expressed in muscle tissue and brown adipose tissue (BAT) and may be a target for SO [17]. The adenosine/A2B signaling pathway plays a central role in maintaining the quality and function of skeletal muscle. Stimulation of A2B could exert anti-aging and anti-obesity effects and restore skeletal muscle function and quality toward youthful levels. At the same time, A2B activation also ameliorated the age- and obesity-associated impairment of BAT function and induced browning of white adipose tissue. However, the current evidence regarding the role of adenosine/A2B signaling is limited to animal studies, and more research is needed to verify the role of this signaling pathway in humans. Conclusions and perspectives The prevalence of SO increases with age, and it is estimated that more than one-tenth of the elderly population suffers from SO globally. This has important public health consequences, as SO is associated with frailty, falls, disability, and increased morbidity and mortality, and places a heavy burden on individuals, society, and the medical system. To further our understanding of SO, it is essential that clinicians and researchers establish a universal consensus on the definition and diagnosis of SO and focus on SO screening to identify susceptible individuals early. Additional studies are warranted to clarify the pathogenesis of SO and to formulate the best diet and exercise intervention schemes to provide tailored treatments and promote healthy aging. In conclusion, establishing an accurate definition and diagnostic criteria for SO and introducing effective preventive and treatment options have become urgent tasks for researchers and clinicians.
2022-04-02T05:17:44.777Z
2022-04-01T00:00:00.000
{ "year": 2022, "sha1": "77f2ca47bb9e846b47a1b875d80a492535c6228e", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "77f2ca47bb9e846b47a1b875d80a492535c6228e", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
42995510
pes2o/s2orc
v3-fos-license
Does being born big confer advantages? Health disparities are evident in Canada when comparing Aboriginal communities to the majority population, as Wassimi and colleagues point out in their study of perinatal and postneonatal mortality by birthweight [1]. In a step toward understanding the higher infant mortality among First Nations communities, the authors tackled the interesting question of whether there are survival advantages to large-for-gestational-age birthweight. In other words, are big babies better survivors? The study focused on First Nations people living in Quebec. Only those who reported speaking their First Nations language were included because they were identified in this manner. This population was compared with infants of Quebecers whose mother tongue is French. The Cree of northern Quebec have very large infants owing to high pregravid body weights and high incidence of gestational diabetes [2]. The birthweight curve is shifted to the right compared with that for non-First Nations women. Whether this is a recent phenomenon is not clear. More importantly, the consequences for infant health in the first year of life are not well documented. In their follow-up analysis linking data on births and deaths up to one year of age, Wassimi and colleagues report very large differences in birthweight distribution between groups. Small-for-gestational-age births made up 4% in the First Nations group and 11% in the French-language group, and large-for-gestational-age births made up 28% in the First Nations group and 8% in the French-language group. On close examination of the relative death rates in the two populations, one sees that, among appropriate-for-gestational-age infants, the relative risks of mortality are much higher in the First Nations group for both perinatal death (RR 1.77) and postneonatal death (RR 4.28). The First Nations group is clearly disadvantaged, with higher death rates in the small-for-gestational-age group as well. Interestingly, however, there is no such disadvantage when perinatal mortality of large-for-gestational-age infants is compared between the First Nations group and French-language control group. With only seven deaths in 10 years in the First Nations group, the effect is nonsignificant, and there is certainly no excess mortality. In fact, the lowest perinatal mortality within the First Nations group appears to be in the heaviest group. In contrast, large-for-gestational-age birthweight is a clear risk factor for perinatal mortality among infants of mothers whose mother tongue is French. The findings for postneonatal mortality are complex and particularly intriguing. In the French-language group, there is a clear protective effect of large-for-gestational-age birthweight against postneonatal mortality, and this effect is even more evident when analysis is restricted to deaths caused by sudden infant death syndrome (SIDS). This advantage of large-for-gestational-age birthweight in regard to SIDS has also been observed in the United States, as reported by a US study linking infant birth and death certificates [3]. Among the First Nations infants, no such risk reduction for postneonatal mortality was seen.
When one compares the First Nations outcomes by birthweight (within their own group), there is a very modest increase in postneonatal mortality and SIDS in the large-for-gestational-age group. These two trends going in opposite directions would lead us to believe that First Nations infants who are born large for gestational age have a serious risk of postneonatal death. Certainly, First Nations infants are at higher risk of postneonatal mortality, but being heavy does not greatly exacerbate their high rates as one might conclude when comparing them to the large-for-gestational-age infants in the French-language group who are at reduced risk. How do we wish to make comparisons? First Nations groups might be interested in comparing mortality by birthweight within their own communities, and one would then have to argue that large-for-gestational-age babies are at minimal extra risk. By contrast, among mothers whose mother tongue is French, large-for-gestational-age birthweight confers a risk for perinatal death but is protective for postneonatal mortality and specifically SIDS. There is no doubt that First Nations infants are at higher risk for infant death, but the mortality patterns vary by differences in birthweight across populations in ways that we do not yet understand. Despite the intriguing results concerning possible protective effects of large-for-gestational-age birthweight on infant mortality, the factors leading women to have large-for-gestational-age infants (pregravid obesity, weight gain during pregnancy and gestational diabetes) are serious for their health. For infants, high birthweight poses its own risks in relation to obesity. Good nutrition and healthy weight gains in pregnancy, food security, encouragement of breastfeeding and many other conditions must be met to close the health disparities between infants of First Nations families and those of other Canadians. Key points: • Infants of First Nations mothers in Quebec have higher perinatal and postneonatal death rates compared to those of women whose mother tongue is French. • Postneonatal death rates are particularly high among First Nations infants. • Different patterns of risk for perinatal and postneonatal mortality are evident among large-for-gestational-age infants when comparing births to First Nations women versus women whose mother tongue is French.
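The relative risks quoted above (RR 1.77 for perinatal and RR 4.28 for postneonatal death among appropriate-for-gestational-age infants) are ratios of group-specific death rates. As a hedged illustration of how such a figure is computed, the sketch below calculates a relative risk with an approximate 95% confidence interval from entirely hypothetical counts; the actual numerators and denominators of the study are not reproduced in this commentary.

```python
import math

def relative_risk(events_a, total_a, events_b, total_b):
    """Relative risk of group A vs group B with an approximate 95% Wald CI.

    The counts passed in below are hypothetical; the commentary does not
    report the underlying numerators and denominators.
    """
    risk_a = events_a / total_a
    risk_b = events_b / total_b
    rr = risk_a / risk_b
    # Standard error of log(RR) via the usual Wald approximation.
    se_log_rr = math.sqrt(1 / events_a - 1 / total_a + 1 / events_b - 1 / total_b)
    lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
    hi = math.exp(math.log(rr) + 1.96 * se_log_rr)
    return rr, (lo, hi)

# Hypothetical example: 30 deaths among 5,000 births vs 60 deaths among 17,700 births.
rr, ci = relative_risk(30, 5000, 60, 17700)
print(f"RR = {rr:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}")
```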
2017-10-01T08:18:00.797Z
2011-02-22T00:00:00.000
{ "year": 2011, "sha1": "3c10c359ae58611270e2a014fd9cb2bfcd8ebb7c", "oa_license": null, "oa_url": "http://www.cmaj.ca/content/183/3/295.full.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "3c10c359ae58611270e2a014fd9cb2bfcd8ebb7c", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
269309889
pes2o/s2orc
v3-fos-license
A comprehensive study of modified three-month pediatrics training curriculum at Shahid Beheshti University of Medical Sciences and its impact on student satisfaction Objectives Continuous curriculum improvements reveal the dedication of policy-makers to raising the quality of education and student learning. This study aims to report the impact of curriculum changes to the three-month pediatric course curriculum at Shahid Beheshti University of Medical Sciences (SBMU) on the satisfaction levels of medical students. Methods One hundred and eighteen 4th-5th year medical students, who had completed their pediatric clinical rotation in SBMU-affiliated teaching hospitals including Mofid Children's Hospital, Loghman Hakim Hospital, Shohada-e-Tajrish Hospital, and Imam Hossein Hospital from January to December 2022, were included in this cross-sectional study. After obtaining informed consent, a questionnaire was sent out to all participants; it included 27 statements about the impact of the modified curriculum on their satisfaction with their learning and performance. SPSS version 22 was used to analyze the data. Results The level of satisfaction of trainees with attending clinics ranged from 56% to 82%, satisfaction with the prior introduction to the course was about 82%, and satisfaction with attending general hospitals (all hospitals except Mofid Children's Hospital, which is the only children's hospital affiliated with SBMU) ranged from 82% to 97%. The quality of patient-based learning was reported in terms of attendance at morning report sessions, which was 92.3%, attendance at ward rounds, which was 71.8%, and attendance at clinics, which was 62.4%. The satisfaction rate with the senior attending mentor was 96.5%. The satisfaction rate with the pathology course was 67.2%, and with the radiology course it was 82.4%. The satisfaction level of medical students with the infectious disease department was 70% and with the gastroenterology department it was 83.8%. The level of satisfaction with the implementation of the twelve-week program was 68.7%, with the expressiveness and usability of the presentation of materials it was 53.9%, with the compatibility of the exams with the presented materials it was 92%, and with the holding of weekly exams it was 86.8%. The satisfaction rate with the use of the materials presented in the final exam in the gastroenterology department and the infectious disease department was 85% and 68%, respectively. The overall satisfaction rate for the training course was 76.66%. Conclusion The results provide vital insights for improving medical education. According to this study, medical student satisfaction with the pediatric curriculum after its recent revisions was in a satisfactory range. Attendance at clinics, information sharing, patient-based learning, practical training, attending mentorship, curriculum clarity, and alignment with student expectations all contributed to participants' high levels of satisfaction. Supplementary Information The online version contains supplementary material available at 10.1186/s12909-024-05408-z.
Introduction Every educational system exists to achieve certain educational aims [1,2].It will be hard to move and activate precisely and eventually obtain that system's educational aims if the targeted goals are not appropriately settled and evaluated, and the priorities are not described and clarified vividly.Thus, based on the educational goals, curriculum design, evaluation, and intersystem educational activities are being planned [1].Following the establishment of the Ministry of Health and Medical Education, the authorities and officials responsible for medical education have recognized the improvement of educational quality as a major priority, particularly in the context of not compromising patients' safety at educational hospitals [3]. Numerous measures have been taken by the individual universities to adjust the curricula, particularly in medical schools, to be consistent with those of worldwide standards.These changes include the implementation of an improved teaching and evaluation system.To ascertain whether the new methods are gaining superior results, a continuous evaluation system is necessary [4,5].The interconnection of the administrative structures of the medical school and the teaching hospitals, the increased responsibilities of instructors and administrators, and the complexity of the curriculum as a system of interconnected components result in significant impacts from every new alteration [5]. The clinical phases of continuing medical education (in Iran the clinical phase of medical education is presented in two phases, 2 years as medical students or externs and two years as medical interns) might be regarded as the most crucial ones since they allow students to translate their academic knowledge into a variety of clinical skills.However, unfortunately, medical students mostly highlight a higher level of dissatisfaction in these years, when compared to the preclinical stages [3].Students' educational experiences and opinions about the course subject, organization, structure, and overall quality are unquestionably significant in determining the effectiveness of the curriculum.As a result, they can be considered a valuable resource in the process of curriculum formation and evaluation.In this regard, evaluating the opinions of the students regarding the acquisition of clinical skills can be one of the learning-facilitating activities in the clinical setting [6].Thus, taking into account their perspectives is one of the techniques to evaluate educational methods, and clinical training systems, and subsequently improve educational quality [3]. The needs to regularly revise and update the materials and training content of pediatric clinical rotation is particularly important since pediatrics is one of the most major and important clinical rotations.The purpose of the current study was to determine how satisfied medical students were with the revisions made to the pediatric curriculum. Study design and ethical approval This is a cross-sectional study approved by the Shahid Beheshti University of Medical Sciences Ethics Committee with the following approval code: SBMU.MSP.REC.1399.455.Participants provided their consent as necessary and were informed of the confidentiality of their information before data collection. 
Participant selection The study population was made up of 4th-5th year medical students who had completed their three-month pediatric rotation in 2022 at one of the affiliated children's hospitals of the Shahid Beheshti University of Medical Sciences, including Mofid Children's Hospital, Loghman Hakim Hospital, Imam Hussein Hospital, Shohada-e-Tajrish Hospital, and Masih Daneshvari Hospital. All students who were not available at the time of the survey or who declined to participate were excluded (zero participants were excluded). Participants' characteristics The study was conducted on 118 medical students aged 22 to 24 years who had finished their three-month pediatric rotation at Mofid Children's Hospital, Loghman Hakim Hospital, Imam Hussein Hospital, and Shohada-e-Tajrish Hospital. This section explores the findings of our study, which attempted to determine how satisfied medical students were with their pediatric rotation. We investigated several aspects of their experiences, including communication efficacy, engagement with clinical settings, satisfaction with mentors, alignment of covered materials with the curriculum, and more, using thorough surveys and evaluations. Designing the questionnaire We designed a questionnaire with multiple sections to evaluate the students' satisfaction in several respects. The questionnaire consisted of 3 sections: Satisfaction with Program Implementation, the Quality of Patient-Based Learning, and Satisfaction with Attendings and Mentors. In total, 27 questions were asked in this questionnaire (see Supplementary file). Data collection and analysis The researchers developed checklists that covered components crucial to the study's objectives. These checklists were delivered in person to the participating medical students. Student satisfaction with program introduction, class scheduling, materials, study tactics, study tools, exams and evaluations, and the role of preceptors were only a few of the program execution-related areas covered by the collected data. Information was also gathered through phone calls from medical students who could not be reached at the time of the in-person survey. Implementation phases Assessment and problem identification The difficulties faced throughout the program were discussed in depth during several meetings with medical students. Students expressed concerns about a range of issues, including how poorly teaching subjects corresponded with the curriculum, how much information was covered on final examinations, and how little they were exposed to certain clinical departments. Pediatric attendings and mentors were also asked for their opinions on the program, as well as their experiences and observations. To classify and handle the identified difficulties, a group of professionals got together. They included department chiefs, educational deputies, and pediatric attendings. The noted issues were categorized as follows: i. Theoretical subjects that are mentioned in the curriculum are not covered during the rotations. ii. Students only get a small amount of exposure to certain departments. iii. The presence of students in the pediatric emergency wards and general pediatrics departments was scant. iv. The materials for final exams were inappropriately covered. v. Patient selection for the morning report.
vi.The size of the students groups for each clinical rotation and the scarcity of attendings and mentors.vii.The lack of digital resources and restricted access of students to these materials.viii.Theoretical class sessions are scheduled at inconvenient times, requiring students to abandon clinical departments to attend afternoon classes. Intervention and process enhancement Interventions were created to address the issues and improve the educational program based on the problem classification and expert panel talks.The next actions were taken: i. Curriculum Alignment: The curriculum was updated to include all of the Ministry of Health's essential subjects, as well as new subjects pertinent to clinical practice.ii.Faculty Involvement: It was coordinated with faculty members to make sure that teaching subjects were thoroughly covered and that the content of the lessons and the sources used for the educational course were in line.iii.Didactic Sessions: The didactic sessions were structured based on head theme panels (for example, gastrointestinal, and respiratory), scheduled to occur concurrently with departmental meetings, and planned at the beginning of the 12-week course.iv.Clinical Rotations: A patient load of each ward was taken into consideration when allocating clinical rotations.General pediatrics would receive 3 weeks, gastrointestinal and nephrology would receive 2, and others would receive 1. v. Exam format: New sections with thorough coverage of distinct thematic subjects were added to the test format.The examination room at the university used a computer-based system for exams.vi.Morning Reports: The Mofid Children's Hospital first implemented student-based Morning Report Sessions.These meetings took place concurrently with the hospital's morning report, and they were led by the attendings.vii.Student Group Division: To encourage group discussions, students were divided into smaller groups.viii.Resource Accessibility: For unrestricted student access and download, audio lecture slides have been recorded and posted to the pediatric department's website. Adopting the revisions to the curriculum The shortcomings of the educational program were addressed and modifications were adopted to improve the entire educational experience for medical students through systematic assessment, identification of difficulties, and targeted interventions.These actions intended to establish a more thorough and balanced learning environment, harmonize the curriculum, and expand exposure to other clinical areas. Variability in student satisfaction across different clinics During their clinical training phase, student satisfaction was examined across various specialist clinics, revealing a wide range of experiences.The evaluation covered a variety of clinics, each linked to a particular medical specialty.Intriguing differences and new information about the student experience in several clinical settings were revealed by the resulting satisfaction rates (Fig. 1, Table 1): • Rheumatology Clinic: Students expressed a satisfaction rate of 56%, indicating a moderate level of contentment with their experiences in this clinic.• Newborn Care Clinic: Students' satisfaction levels reached 81% within the Newborn Care Clinic, portraying a robust sense of contentment.• Nephrology Clinic: The Nephrology Clinic emerged with the highest reported satisfaction rate of 82%, underscoring an exceptional level of student satisfaction in this clinical domain. 
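The clinic-level percentages reported above, and the satisfaction rates quoted throughout the Results, are proportions of respondents rating an item as satisfactory. A minimal sketch of how such figures could be derived from the raw questionnaire responses is given below; it assumes a 5-point response scale with the top two categories counted as "satisfied", which is an assumption for illustration since the paper does not state the scale, and Python is used here although the study reports analyzing the data in SPSS.

```python
from collections import defaultdict

def satisfaction_rates(responses, satisfied_levels=(4, 5)):
    """Percentage of 'satisfied' answers per questionnaire item.

    `responses` is a list of dicts mapping item name -> response on a 1-5 scale.
    Both the scale and the cutoff are assumptions for illustration; the study
    reports only the resulting percentages.
    """
    counts = defaultdict(lambda: [0, 0])  # item -> [satisfied, answered]
    for answer_sheet in responses:
        for item, score in answer_sheet.items():
            counts[item][1] += 1
            if score in satisfied_levels:
                counts[item][0] += 1
    return {item: round(100 * sat / n, 1) for item, (sat, n) in counts.items()}

# Hypothetical mini-example with three respondents and two items.
demo = [
    {"rheumatology_clinic": 3, "nephrology_clinic": 5},
    {"rheumatology_clinic": 4, "nephrology_clinic": 4},
    {"rheumatology_clinic": 2, "nephrology_clinic": 5},
]
print(satisfaction_rates(demo))  # {'rheumatology_clinic': 33.3, 'nephrology_clinic': 100.0}
```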
Course introduction and curriculum satisfaction Laying the foundation: The degree of participant satisfaction was noticeably high at the start of the program.Their satisfaction with the initial program introduction and communication was clear.94% of respondents said they were satisfied with how well the course contents, study techniques, and reference resources were explained.Furthermore, a commendable 82.1% of people were satisfied with how exam specifics were delivered (Table 2). Comparison of student satisfaction in different hospitals The findings show various tendencies among the hospitals.Loghman Hakim Hospital demonstrated strong faculty involvement in student instruction (97.2%) and awareness (96.3%).Students expressed high levels of satisfaction (95.9%) with contacts with staff, and the hospital showed a high level of preparation for student integration (97%).Shohada-e-Tajrish Hospital showed comparable encouraging trends, with considerable faculty involvement in student instruction (96.06%) and awareness (97.19%).The hospital's readiness for student integration was slightly lower (92.4%)even though student satisfaction with staff interactions remained high (93.03%).Imam Hossein Hospital demonstrated good faculty involvement in student education (90.29%) and faculty awareness (94.86%).The readiness for student integration (83.4%) and student satisfaction with staff interactions (82%) both have space for development (Table 2). Evaluation of patient interaction during the training period • Student engagement in morning report sessions: A noteworthy number of students (92.3%) reported attending more than 10 morning report sessions, demonstrating a strong dedication to the educational process.A small minority (1.7%) reported participation in fewer than 5 sessions, whereas a substantial amount (6%) highlighted their presence in 5 to 10 meetings.This information highlights the importance of students' active participation in the learning environment of morning report talks (Table 2).• Student presence in ward rounds: Examining the presence of students during ward rounds revealed some interesting patterns.The vast majority of stu- dents (71.8%) participated in more than seven ward rounds, demonstrating their commitment to practical clinical experience.A sizable amount (24.8%) attended 4 to 7 rounds, whereas a far lower percentage (3.4%)joined fewer than 4.These findings show the range of clinical encounters that the students had as well as how they struck a balance between active participation and the demands of their training schedule (Table 2).• Student presence in outpatient clinics: In more than 20 outpatient clinic visits, a sizable percentage (62.4%) of students dealt with patients, demonstrating a remarkable exposure to a variety of medical conditions.Furthermore, a sizable portion (36.8%) attended ten to twenty clinics, whereas a negligible portion (0.9%) attended less than ten.These results highlight the extent of the student's involvement with outpatient settings and their growing competency in dealing with patients (Table 2). 
Evaluation of education and patient interaction in clinical settings • Gastroenterology department: According to the survey, 76.5% of students used patient cases for pres-entations in the gastroenterology wards, illustrating the close relationship between academic learning and practical application.Additionally, an optimistic 82% of respondents indicated that the presentation titles and the department's patient group were in line, highlighting the applicability of education to realworld situations (Fig. 2).• Infectious diseases department: Similar results were shown in the Department of Infectious Diseases, where 70.8% of students included patient cases in their presentations, demonstrating how real-world cases are incorporated into the classroom.The efforts to link curricula with clinical experiences are highlighted by the 68.2% alignment of presentation titles with the patient context (Fig. 1).• Student satisfaction with preceptors: Mentors were highly rated by students for their professionalism (95.2%), presence during clinical interactions (94.6%), and capacity to handle problems (94.9%).These results highlight how beneficial preceptors and mentors are for the educational process (Table 2).• Student satisfaction with senior preceptors: Similar levels of satisfaction were reported with senior preceptors, with 96.5% expressing satisfaction with their professional behavior, 92.7% with their involvement, and 94.6% with their ability to resolve problems.This emphasizes the critical function of knowledgeable mentors in creating a supportive learning environment (Table 2).• Pathology course satisfaction: Student satisfaction in the course of pathology varied.While 67.2% of respondents were satisfied with how courses were related to clinical programs, 74.2% were satisfied with faculty presence, and 68.1% were satisfied with assessment strategies, only 55.4% of respondents said they were satisfied with how reports were used, which suggests room for improvement (Table 2).• Radiology course satisfaction: Satisfaction ratings were comparatively greater in radiology courses.Material alignment was rated as satisfactory by 82.4% Fig. 
2 The of medical students gastroenterology and infectious disease wards of respondents, faculty attendance by 88.4%, evaluation techniques by 80.3%, and report utilization by 81.9% of respondents.These findings show that the radiology course has been successfully incorporated into the curriculum (Table 2).• Infectious diseases department satisfaction: In the department of infectious diseases, satisfaction with program introduction was at 70%, complete weekly program presentations were at 62.8%, staff readiness for student integration was at 65.7%, staff interactions were at 70.6%, and the role of clinical.fellowships in student education were at 64.5%, and instructors' awareness of student presence and programs was at 72.2%.These results highlight many facets of departmental satisfaction (Table 2).• Program execution and satisfaction: The outcomes show a high overall degree of satisfaction with the twelve-week course.Only 1.7% of respondents reported low satisfaction, a small minority (29.6%) moderate, and a sizable majority (68.7%) high satisfaction.This shows that the execution and content of the curriculum have been well received (Table 2).• Instructor alignment and awareness: When it came to alignment with specific instructors, the majority (78.8%) rated strong alignment, followed by moderate alignment (17.6%), and low alignment (2.7%).77.4% of respondents said they were well aware of program changes, 18.3% said they were somewhat aware, and 4.3% said they were not at all informed.These results underline the need for instructor transparency and communication about curriculum changes (Table 2).• Effectiveness of presentation materials: Analyzing the present materials revealed several viewpoints.While 53.9% expressed a high level of satisfaction with the materials' applicability and relevance, 40.9% reported a moderate level of satisfaction, and 5.2% a lower level.This shows that there is space to increase the thoroughness and impact of educational materials (Table 2).• Teaching schedule congruence: 71.1% of teachers reported good schedule alignment with the allocated hours, while 28.9% reported moderate alignment.Notably, no participants indicated low alignment, suggesting that teaching schedules and program requirements are frequently well-matched (Table 2).• Examination satisfaction: Exam participant satisfaction also revealed good outcomes.An amazing 92% of respondents expressed satisfaction with how well tests were aligned with the presented materials.Additionally, 86.8% of respondents said they were satisfied with how the weekly exams were administered, indicating that the assessment strategies are usually well-liked (Table 2). • End-of-section examination satisfaction: The Gastroenterology department scored an 85% satisfaction rating for end-of-section exams, whereas the Infectious Diseases section received a 68% rating.The different effects of evaluation procedures across various sections are highlighted here (Table 2).• Overall satisfaction: The study's findings show a 76.66% total satisfaction rate.Despite differences in certain aspects, the participants' overall satisfaction with the educational experience is highlighted by this cumulative measure (Table 2). 
Discussion The study's findings provided insight into how medical students and trainees saw and felt about participating in our revised twelve-week clinical rotation in the pediatric wards among educational hospitals affiliated to SBMU.The majority of participants expressed a favorable propensity for taking part in this clinical rotation, showing a high degree of satisfaction.This satisfaction was extended to their encounters with patients inside a professional environment as well as the caliber of training they had received.These findings have important careerrelated ramifications for medical professionals, affecting both their working environments and their contacts with other healthcare workers.Creating a good work atmosphere while in training can help practitioners become well-rounded professionals who not only have the medical knowledge needed to provide quality patient care but also the requisite interpersonal skills.The study's attention to the patient group, which primarily seeks treatment in outpatient clinics, is significant.Students and trainees are exposed to a variety of health disorders due to the prevalence and range of diseases seen in these clinics, which enables them to connect their academic learning with the community's predominant health challenges.It should be noted, though, that some rare cases, frequently chronic and uncommon, remain outside the purview of these clinics, which could result in a bias in the types of cases seen. The result is consistent with other studies, especially in pediatric departments, where outpatient clinics act as active learning environments because of their duration and the built-in patient-physician connections [7,8].In these clinics, the benefit of face-to-face patient engagement makes clinical examinations, patient conversations, and the sharing of ideas easier. The study's findings also demonstrate how crucial it is to plan for and prioritize medical education in a variety of clinical settings, especially outpatient clinics.By considering a larger spectrum of the community's prevalent health challenges, such planning should encourage a well-rounded medical education.Encouraging trainees to take part in these activities in collaboration with other healthcare units could lead to a deeper understanding of patient assessment, case management, complaint reporting, and follow-up.It is important to be aware of the limitations of this study, including potential biases in selfreported data and the focus on a specific healthcare environment.The conclusions drawn from the data are also dependent on the writers' perceptions and experiences, which can fluctuate based on the situation.The study does, however, emphasize the significance of practical experience in assessing the readiness and competency of prospective medical professionals. 
Because every type of planning to improve clinical training quality depends on the identification of problems, inadequacies, and deficiencies existing in the educational system based on the target group's perspective, the current study sought to ascertain medical students' satisfaction with clinical training in the teaching hospitals in Tehran. The results showed that most participants were satisfied with clinical training techniques, the clinical competence levels of faculties, and clinical training quality. They also voiced their displeasure with the clinical training goals, clinical evaluation procedures, clinical training tools and resources, and students' clinical competence levels. In this regard, Masic's research in Bosnia suggested that the students were dissatisfied with the issues that were present in their clinical skills training [9]. Only 28.4% of the students in the study by Jalili et al. were overall happy with the instruction they had received [10]. However, Guarino et al. also showed that even if there is room for improvement, overall student satisfaction with teaching is excellent [11]. Conclusion The study underlines the importance of improving the medical education curriculum. In addition to offering opportunities for skill development, these environments promote awareness of the demands for cooperative healthcare efforts and community health needs. Medical education planners and policymakers should take note of the lessons acquired from this research to help create a more efficient and flexible healthcare workforce that is better prepared to address the issues posed by diverse patient populations and changing healthcare systems. Fig. 1 The satisfaction of medical students in clinics. Table 1 Student satisfaction across specialized clinics. Table 2 Evaluation of education and patient interaction in clinical settings.
2024-04-24T15:26:39.513Z
2024-04-22T00:00:00.000
{ "year": 2024, "sha1": "6f86a937edc676de8d13ee85a9629217389a91d3", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "8de2780f1ac3d6065fb2bb303ea4d63ef86ae274", "s2fieldsofstudy": [ "Medicine", "Education" ], "extfieldsofstudy": [ "Medicine" ] }
18910559
pes2o/s2orc
v3-fos-license
Critical Behavior of a Chiral Condensate with a Meron Cluster Algorithm A new meron cluster algorithm is constructed to study the finite temperature critical behavior of the chiral condensate in a $(3+1)$ dimensional model of interacting staggered fermions. Using finite size scaling analysis the infinite volume condensate is shown to be consistent with the behavior of the form $(T_c-T)^{0.314(7)}$ for temperatures less than the critical temperature and $m^{1/4.87(10)}$ at the critical temperature confirming that the critical behavior belongs to the 3-d Ising universality class within one to two sigma deviation. The new method, along with improvements in the implementation of the algorithm, allows the determination of the critical temperature $T_c$ more accurately than was possible in a previous study. Motivation The construction of Monte Carlo algorithms to solve problems in many body quantum mechanics involving fermions is notoriously difficult. This difficulty is reflected in our inability to perform precise quantitative calculations in strongly interacting fermionic models which are necessary to understand a variety of phenomena including high temperature superconductivity and the physics of strongly interacting matter. The essential problem arises due to the Pauli principle which can produce negative Boltzmann weights when the quantum partition function is rewritten as a path integral in a convenient basis. As a result, the probability distribution that should be used for importance sampling is unclear. The conventional approach to problems involving fermions is to integrate them out in favor of a fermion determinant. In cases where this determinant is positive it is often possible to use known sampling methods for a bosonic problem to devise an algorithm [1][2][3][4]. Some of these methods are inexact since they involve discretization of a differential equation and require some care and study before they can be applied to a new problem. Others can suffer from truncation errors that approximate the original partition function. Worst of all, these algorithms suffer from the usual problems of critical slowing which makes it difficult to study phase transitions using them. The study of phase transitions, especially in the context of fermionic models, is of interest in a variety of fields. In condensed matter physics strong correlations between electrons can lead to many interesting critical effects like high T c superconductivity [5] and quantum phase transitions [6]. In high energy physics, existence of new phases and fixed points have been predicted [7][8][9] which may lead to novel formulations of quantum field theories beyond perturbation theory. Additionally, exotic phases arise in dense nuclear matter due to strong interactions among quarks [10]. Since fermions acquire a screening mass on the order of the temperature T , one expects that the finite temperature critical behavior close to a second order phase transition in a (d + 1) dimensional theory is governed by a d dimensional low energy effective theory that is purely bosonic [11]. A few years ago, this conventional wisdom was questioned based on a large N calculation [12] in a Gross-Neveu model. It was shown that the finite temperature phase transition in the (2+1) dimensional model reproduced mean field exponents instead of the expected 2-d Ising exponents. This claim was later backed by numerical evidence [13] using the hybrid Monte Carlo algorithm. 
Although the reason for the unexpected critical behavior was later attributed to the narrowness of the Ginzburg region in the large N limit [14], the numerical evidence provided to substantiate the earlier claims remains disturbing and perhaps shows the inadequacy of the numerical methods used. Recently, the chiral transition in two flavor QCD with an additional four-fermion interaction was studied using the hybrid molecular dynamics algorithm, which showed evidence for non-mean-field critical exponents [15]. However, again the expectations based on simple dimensional reduction were not observed; instead the data appeared to be consistent with tricritical behavior. The inability to provide conclusive answers to questions related to the critical behavior in fermionic theories is closely related to the lack of efficient fermion algorithms. The fermion cluster algorithms, recently proposed in [16,17], provide a novel approach to the problem. The new method, referred to as the meron cluster algorithm, uses well known quantum cluster algorithm techniques [18] to solve the fermion sign problem completely. The first applications of the meron algorithm emerged last year when it was used to study the critical behavior in a relativistic system of interacting staggered fermions with a discrete chiral symmetry [19,20]. The results indicated that the symmetry is broken by the ground state but is restored by thermal fluctuations at high temperatures. However, in order to avoid the complications that arise in the algorithm due to the addition of a mass term, the previous study focused on massless fermions. Since the chiral condensate vanishes in this case, the scaling of the chiral susceptibility with the volume was used to find the critical temperature and the critical exponents ν and γ. In this article a new meron algorithm is proposed and applied to study the staggered fermion model in the presence of the mass term (preliminary results of this work were presented in [22]). This makes it possible to study the critical behavior of the chiral condensate. The main result of the article is that in (3 + 1) dimensions the chiral condensate behaves like ψ̄ψ = A(T_c - T)^β just below T_c, with β = 0.314(7), and vanishes for higher temperatures. The corresponding exponent in the 3-d Ising model is known to be 0.32648(18). As a bonus, the critical temperature is also determined more accurately than in previous studies. Furthermore, at T_c the mass dependence of the condensate also behaves as expected, ψ̄ψ = B m^{1/δ}, where δ = 4.87(10) as compared to 4.7893(22) for the Ising model. Conventional fermionic algorithms have never been able to confirm the predictions of universality in a strongly interacting fermionic model with such precision. The Model The Hamilton operator for staggered fermions hopping on a 3-d cubic spatial lattice with V = L^3 sites (L even) and anti-periodic spatial boundary conditions, considered here, is given by a sum of nearest-neighbor terms and a single-site mass term: the nearest-neighbor term couples the fermion operators at the sites x and x + î, where î is a unit vector in the positive i-direction, and the mass term is a single-site operator. The fermion creation and annihilation operators c†_x and c_x satisfy the canonical anticommutation relations, and n_x = c†_x c_x is the number operator. The phase factors η_{x,1} = 1, η_{x,2} = (-1)^{x_1}, and η_{x,3} = (-1)^{x_1+x_2} are well known in the staggered fermion formulation [21] (a short illustrative sketch of these phase factors is given below). This model was originally studied in the chiral limit (m = 0) in [19].
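Since the staggered phase factors are the one ingredient of the model spelled out explicitly in the text, a minimal sketch of how they could be tabulated on an L^3 lattice is given below. The array layout, function name, and use of Python/numpy are illustrative choices, not part of the paper.

```python
import numpy as np

def staggered_phases(L):
    """Tabulate eta_{x,i} on an L^3 lattice (L even), as defined in the text:
    eta_{x,1} = 1, eta_{x,2} = (-1)**x1, eta_{x,3} = (-1)**(x1 + x2),
    with x = (x1, x2, x3) labelling lattice sites."""
    x1, x2, _ = np.meshgrid(np.arange(L), np.arange(L), np.arange(L), indexing="ij")
    eta = np.empty((3, L, L, L), dtype=np.int8)
    eta[0] = 1                   # direction 1
    eta[1] = (-1) ** x1          # direction 2
    eta[2] = (-1) ** (x1 + x2)   # direction 3
    return eta

eta = staggered_phases(4)
print(eta.shape)         # (3, 4, 4, 4)
print(eta[1, 1, 0, 0])   # -1, since x1 = 1 in direction 2
```

With m = 0 (the chiral limit of the earlier study [19]), these phase factors, the fermion hops, and the anti-periodic boundary conditions are what enter the sign bookkeeping described later in the Meron Cluster Algorithm section.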
In this limit the Hamilton operator is invariant under shifts in the x 3 direction. The mass term breaks this symmetry up to shifts by an even number of lattice units, thus breaking a Z 2 symmetry which can be related to a subgroup of the well known chiral symmetry of relativistic massless fermions. This symmetry is broken spontaneously at zero temperatures, while thermal fluctuations restore it at some high temperatures [19]. In this article the critical behavior near the second order transition is studied using the chiral condensate ψ ψ ≡ − x s x /V using the algorithm presented in [22] and briefly sketched here. The construction of the path integral for the partition function is standard. First, the Hamilton operator is rewritten as for i = 1, 2, 3 and H 7 = m x s x . The partition function is then approximated by where the inverse temperature has been divided into M slices such that M ǫ = 1/T . At a fixed temperature, the above approximation becomes exact in the limit M → ∞ and ǫ → 0. On the other hand for any fixed M , the approximation defines a new theory with a phase structure and critical behavior that can be identical to the M = ∞ theory. For simplicity this article focuses on the theory with M = 4. Meron Cluster Algorithm In order to solve the model discussed in the previous section using a meron cluster algorithm, the partition function should first be written in terms of fermion occupation numbers n and bond variables b so that it can be represented by partition function (5) have been discussed in [22,19,24]. The final results can be represented as a set of simple rules. W [n, b] turns out to be a product of magnitudes of the transfer matrix elements and Sign[n, b] represents the product of their signs. The non-zero matrix elements are shown in figure 2 along with their weights. When compared to [19] the only difference is in the mass term. Assuming m ≥ 0 in (1) leads to two types of single site interactions. The one with the solid bond is always positive, and the one with the dotted bond is negative on filled even sites and empty odd sites. This extra negative sign must be included in Sign[n, b] along with sign Σ that arises due to fermion hops, staggered fermion phase factors and anti-periodic spatial boundary conditions as discussed in [19]. Bonds connect lattice points into clusters. A flip of a cluster is defined as the change in the configuration of fermion occupation numbers at the sites belonging to the cluster, such that an occupied site is emptied and viceversa. For a given model to be solvable using a meron cluster algorithm, the weights W As has been discussed in [25], the change in the sign of the configuration due to a cluster flip depends on the topology of the cluster. If we refer to the clusters whose flip changes the sign of the configuration as merons, then it is easy to classify a configuration [n, b] based on the number of meron clusters it contains. Using the above three properties it is then easy to check that the partition function gets contributions only from the zero meron sector, i.e., where Sign[n, b] = δ N,0 and N denotes the number of merons in the configuration. For the present model it is also easy to show that the chiral condensate is given by which means that to measure the condensate one is interested only in the zero and one meron number sectors. A typical sweep in a meron cluster algorithm consists of two steps exactly like any other cluster algorithm. Starting from a configuration the bonds are updated first based on the weights W [n, b]. 
After each local bond update the meron number of the configuration can potentially change. In order not to generate more than the necessary number of merons every bond update is followed by a Metropolis decision. While measuring the chiral condensate for example one rejects all bond updates that generate more than one meron. This requires one to reanalyze the topology of clusters that are connected to the local bonds being updated. A major improvement in the implementation allows this to be done in at most log(Size(C)) steps [26]. Once all the bonds are updated it is possible to update the occupation numbers by flipping each cluster with a probability half. This algorithm produces the configuration [n, b] with weight (δ N,0 + δ N,1 )W [n, b]. Thus, the condensate can be calculated using where . . . refers to a simple average over the generated configurations. Numerical Results The critical behavior of the model was studied through the measurement of the chiral condensate at several values of the temperature around T c using the algorithm discussed above. For each temperature simulations were performed on different spatial lattice sizes and at various masses. Each simulation produced 10 6 configurations, except for the largest lattices of sizes ranging from 32 3 up to 48 3 which contained only 10 5 configurations. All runs included at least ten thousand thermalization sweeps in addition to the above measurements. The autocorrelation times typically ranged from two to five sweeps. However, errors were evaluated from fluctuations in the averages of data over blocks of 1000 configurations each. For future reference we give the values of the chiral condensate on a 32 3 lattice obtained at m = 0.001 and various temperatures in the table below. In order to confirm the spontaneous breaking of the Z 2 chiral symmetry the condensate needs to be evaluated in the infinite volume limit followed by the chiral limit. This can be done precisely by a finite size scaling analysis. In the broken phase the theory undergoes a first order phase transition as a function of the mass at m = 0 where the condensate exhibits a jump. In a large but finite volume this discontinuity is smoothened out to an analytic curve whose functional form is given by [27] ψ ψ = Σ 0 tanh(mV Σ 0 /T ) + χ 0 m , which is valid when By fitting the available data at each simulation temperature to the formula (10) it is possible to extract Σ 0 , the desired limiting value of the condensate. The minimum volume necessary for the formula to work can be determined by systematically removing the smallest volume data from the fit, as required to obtain a χ 2 /DOF of about one. Finally, it is important to check if Σ 0 and χ 0 obtained through the fit and the masses used are consistent with the condition (11). The above finite size scaling analysis works exceptionally well, as can be seen from figure 3, which shows a plot of the condensate as a function of the mass at a fixed T = 1.0471 for three different lattice sizes. The solid lines represent the function (10) for a fixed value of Σ 0 and χ 0 obtained from a single fit to all of the shown data and more. The χ 2 /DOF for the fit is around 0.8. Since T = 1.0471 is very close to the critical temperature it was necessary to go to spatial volumes as large as 48 3 in order to determine Σ 0 with reasonable precision. Interestingly, the above fitting procedure failed to yield an acceptable chi-squared for data above a certain temperature. 
This was essentially due to the fact that it was impossible to find a small enough mass to ensure that the condition (11) was satisfied. Removing the larger mass data resulted in a smaller Σ 0 which in turn lowered the bound in (11). This observation is consistent with a vanishing condensate in the chiral limit for higher temperatures. The non-zero values of Σ 0 extracted from the fits at different temperatures are shown in figure 4. The solid line represents a fit to the expected form A(T c − T ) β using the data for temperatures between 1.0050 and 1.0471. The χ 2 /DOF for the fit was 0.4. The critical exponent β was found to be 0.314 (7) which is consistent with the measured value of 0.32648 (18) in the Ising model [28] within two sigma. The fit also determines the critical temperature very accurately to be T c = 1.0518(3) which agrees with the previously obtained result [19] within errors. The data for L = 32 and m = 0.001 is shown for comparison and to demonstrate the necessity of using the finite size scaling formula (10) to extract the infinite volume chiral limit. Furthermore, the errors on Σ 0 are typically much smaller than those for any single data point due to the constraints that arise from fitting over a wide range of masses and volumes. At T c the finite size scaling behavior of the condensate as a function of the mass is very different from (10) and can be shown to be of the form (12). The value of y m used is obtained from a fit to the small x data as explained in the text. The solid lines are fits to (13) for small x and (14) for large x data. using renormalization group arguments that are valid close to a second order phase transition. For any Z 2 phase transition the function f (x) is universal and depends only on the lattice geometry, boundary conditions etc., and not on the type of lattice, irrelevant operators and such. The exponent y m is hence universal. The behavior of f (x) is known in the two limits: where 1/δ = (d − y m )/y m . Based on the estimate of T c obtained above, additional simulations at T = 1.0518 were performed to confirm this critical behavior as a function of the mass. Using the data with L ≥ 8 and L ym m ≤ 0.1, the chiral condensate is fit to the form based on the expectation from (13). Using this value of y m , we can calculate δ = 4.87(10) (16) which compares favorably with the known value for the Ising model of 4.7893(22) [28]. Using the fitted value of y m , the quantity L d−ym ψ ψ is plotted in figure 5 for all available data as a function of L ym m. Clearly the data collapses onto a single curve within errors, as expected from (12) . To confirm this picture the data in figure 5 with L ≥ 12 and 10 < L ym m < 90 is fit to the form (14). This yields the value δ = 4.89 (19) which agrees with the earlier determination and further demonstrates that the critical behavior in the present model is consistent with the 3-d Ising universality class. Conclusions The meron cluster algorithm has allowed a direct simulation of a strongly interacting fermionic theory in the vicinity of a Z 2 chiral phase transition on lattices with spatial volumes of up to 48 3 using common workstation computers. The scaling behavior near the critical point, as a function of both the temperature and mass, was determined within errors of about 2% and the results are consistent with a second order phase transition. Allowing for 1-2 sigma deviations the universality class of the transition matches with that of the 3-d Ising model. 
This result strongly supports the scenario where a fermionic theory in d + 1 dimensions undergoes a dimensional reduction to be described by a d dimensional bosonic theory in the critical region. Comparing similar results with the hybrid Monte Carlo algorithm, the meron algorithm provides great improvement over such methods.
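As an illustration of the two fits described in the Numerical Results section, the sketch below first fits the quoted finite-size-scaling formula ψ̄ψ = Σ0 tanh(mV Σ0/T) + χ0 m to condensate values measured at several masses and volumes at one temperature, and then fits the resulting Σ0(T) values to the critical form A(T_c - T)^β. All data arrays are invented placeholders (the paper's raw measurements are not reproduced in the text), and scipy's curve_fit is one possible tool rather than the method actually used by the authors.

```python
import numpy as np
from scipy.optimize import curve_fit

# Placeholder data at one fixed temperature T: mass m, volume V, measured condensate.
# These numbers are invented for illustration; the paper's raw data are not reproduced here.
T = 1.0471
m = np.array([0.0002, 0.001, 0.005, 0.0002, 0.001, 0.005])
V = np.array([32**3, 32**3, 32**3, 48**3, 48**3, 48**3])
cond = np.array([0.131, 0.169, 0.177, 0.167, 0.169, 0.177])

def fss_form(mv, sigma0, chi0):
    """Eq. (10) of the text: <psi-bar psi> = Sigma0*tanh(m*V*Sigma0/T) + chi0*m."""
    m_, V_ = mv
    return sigma0 * np.tanh(m_ * V_ * sigma0 / T) + chi0 * m_

(sigma0, chi0), _ = curve_fit(fss_form, (m, V), cond, p0=[0.1, 1.0])
print(f"Sigma0 = {sigma0:.4f}, chi0 = {chi0:.4f}")

# Repeating such fits at several temperatures gives Sigma0(T), which is then
# fit to the critical form A*(Tc - T)**beta below Tc (A, Tc, beta all free).
def critical_form(Tvals, A, Tc, beta):
    return A * np.clip(Tc - Tvals, 0, None) ** beta

Ts = np.array([1.0050, 1.0172, 1.0295, 1.0400, 1.0471])       # illustrative temperatures
sigma0_of_T = np.array([0.344, 0.313, 0.273, 0.223, 0.167])   # illustrative Sigma0 values
(A, Tc, beta), _ = curve_fit(critical_form, Ts, sigma0_of_T, p0=[1.0, 1.05, 0.33])
print(f"Tc = {Tc:.4f}, beta = {beta:.3f}")
```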
2014-10-01T00:00:00.000Z
2000-10-20T00:00:00.000
{ "year": 2000, "sha1": "b28d0cfef1d1f30c5aadaf01690b5f0df170457f", "oa_license": null, "oa_url": "http://arxiv.org/pdf/hep-lat/0010036", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "3b5159aadfc21d1e8072e6e505478c8bdb3706cc", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
253901097
pes2o/s2orc
v3-fos-license
Kono-S Anastomosis in Crohn’s Disease: A Retrospective Study on Postoperative Morbidity and Disease Recurrence in Comparison to the Conventional Side-To-Side Anastomosis Introduction: The rates of postoperative recurrence following ileocecal resection due to Crohn’s disease remain highly relevant. Despite this fact, while the Kono-S anastomosis technique initially demonstrated promising results, robust evidence is still lacking. This study aimed to analyze the short- and long-term outcomes of the Kono-S versus side-to-side anastomosis. Methods: A retrospective single-center study was performed including all patients who received an ileocecal resection between 1 January 2019 and 31 December 2021 at the Department of Surgery at the University Hospital of Wuerzburg. Patients who underwent conventional a side-to-side anastomosis were compared to those who received a Kono-S anastomosis. The short- and long-term outcomes were analyzed for all patients. Results: Here, 29 patients who underwent a conventional side-to-side anastomosis and 22 patients who underwent a Kono-S anastomosis were included. No differences were observed regarding short-term postoperative outcomes. The disease recurrence rate postoperatively was numerically lower following the Kono-S anastomosis (median Rutgeert score of 1.7 versus 2.5), with a relevantly increased rate of patients in remission (17.2% versus 31.8%); however, neither of these results reached statistical significance. Conclusion: The Kono-S anastomosis method is safe and feasible and potentially decreases the severity of postoperative disease remission. Introduction Crohn's disease (CD) represents a major socioeconomic burden and challenges health care systems worldwide. Despite the great advances made in medical treatment, including the introduction of biologics, CD remains incurable and the rates of surgery remain high. The current evidence shows that many patients with CD need surgery at least once during their lifetime [1,2]. Previously, the indications for surgery have focused on CD-related complications such as fistulas or stenosis. However, the current studies and updated guidelines consider ileocecal resection (ICR) in cases of localized terminal ileitis already at early time points [3,4]. This adaptation is mainly based on data from the LIR!C trial by Ponsioen et al., which demonstrated an improved quality of life and decreased need for medical treatment after ICR in patients with localized terminal ileitis compared to patients receiving infliximab [5]. These positive results were not only confirmed in the short-but also long-term followup and were underlined by other studies that demonstrated similar results in terms of quality of life and medical therapy [6,7]. However, while the advantage for laparoscopic ICR has been clearly shown in many studies in comparison to open surgery, the optimal technique for the creation of the anastomosis is still controversial [8][9][10]. Despite the significant progress made on CD therapy by demonstrating the potential of early ICR, with its implementation in the current guidelines, the rates of postoperative recurrence remain a major issue for many patients. Many theories suggest that the anastomosis has a central role in postoperative disease recurrence [11,12]. Thus, the perfect technique for the anastomosis has been an ongoing matter of debate [13,14]. To overcome this issue, Kono et al. 
introduced a novel technique (Kono-S anastomosis), which is based on the idea that the inflammation in CD originates from the mesentery. In line with this, Kono et al. proposed that the anastomosis should be created away from the mesentery (an anti-mesenteric handsewn anastomosis). The initial limited data from his group demonstrated impressive results in terms of postoperative recurrence [15,16]. However, further evidence about the technical feasibility of the Kono-S anastomosis, including the short-term morbidity and postoperative recurrence rates, especially during implementation and in a non-selective cohort, is lacking [17]. As such, there is currently no clear recommendation for a technique to create the anastomosis following ICR. Therefore, the goal of our study was to investigate the feasibility and potential of the Kono-S anastomosis in comparison with the conventional side-to-side anastomosis in patients who received ICR due to ileitis terminalis in a non-selective cohort.

Study Population
In this single-center retrospective study, all patients with ileocecal resection (ICR) due to Crohn's disease who were operated on between 1 January 2019 and 31 December 2021 at the Department of Surgery at the University Hospital of Wuerzburg were evaluated. The included patients were suffering from terminal ileitis (Montreal classification L1 and L3) with inflammatory, stricturing, or penetrating behavior (Montreal classification B1-B3), while patients who needed extended resection or strictureplasty, had received a diversion, or had a history of ulcerative colitis were excluded. Preoperatively, the extent of inflammation was assessed via endoscopy and an MRI scan. The indication for an operation was discussed by a multidisciplinary team, including a gastroenterologist, IBD surgeon, and radiologist. All patients were divided into two subgroups depending on the type of anastomosis. After the introduction of the Kono-S anastomosis at our hospital in 2021, almost all patients received the handsewn Kono-S technique, while all patients operated on before that received the conventional (anisoperistaltic) side-to-side stapler anastomosis for bowel reconstruction. Using these distribution criteria, an evaluation of the safety and feasibility of the Kono-S anastomosis during a potential learning curve was possible. The patients were usually operated on laparoscopically with extracorporeal creation of the anastomosis. The Kono-S anastomosis was performed as described by Kono as a functional end-to-end handsewn anastomosis [15]. For the analysis, sociodemographic and clinicopathological data, including the time of diagnosis, disease history, and immunosuppressive or anti-inflammatory medication history, were collected for each patient from their patient records. In addition, an evaluation of the preoperative disease extent and a postoperative histopathological analysis of the resected specimen were performed, including the level of inflammation (mucosal damage, immune cell infiltration) as well as the extent of inflammation at the resection margins (positive resection margins for inflammation).

Outcome
The primary endpoint was the Rutgeert score at the first postoperative endoscopy, usually after 6-12 months. The Rutgeert score is captured endoscopically and represents an established method to define the extent of inflammation at the anastomosis site following ICR.
The secondary endpoints included the rates of conversion and surgical and non-surgical postoperative complications, as well as the length of hospital stay.

Statistical Analysis
The descriptive data are presented as medians with the range or total numbers with percentages. The differences in patient characteristics were assessed using a t-test, Fisher's exact test, or ANOVA test in accordance with the data scale and distribution. A p-value of <0.05 was considered statistically significant. In addition, the effect size was included by calculating the value of Cohen's d, with values > 0.45 being considered a relevant effect size. The statistical analysis was performed using GraphPad Prism (Version 8.0.0 for Windows, GraphPad Software, San Diego, CA, USA).

Ethical Approval
Ethical approval for this study was obtained from the Ethics Committee of the University of Wuerzburg, Germany.

Patient Cohort
In this retrospective single-center study, 78 patients received ileocecal resection at the Department of Surgery at the University Hospital of Wuerzburg between 2019 and 2021. Of those, 27 patients were excluded from the study due to loss during follow-up, participation in another study, or delayed postoperative endoscopy. Therefore, 51 patients were finally included in the study (Figure 1). Twenty-nine patients from our cohort received a conventional side-to-side stapler anastomosis, whereas 22 patients were reconstructed with the functional end-to-end, handsewn Kono-S anastomosis. As presented in Table 1, the groups did not differ in terms of patient characteristics such as age, BMI, co-morbidities, or smoking habits, or in their preoperative levels of hemoglobin and albumin. In addition, both groups were comparable regarding their disease history, including previous CD-associated surgery (n = 6/group, p = 0.59), whereas the rates of preoperative immunosuppressive medication trended higher for patients who received the side-to-side anastomosis (21 versus 10, p = 0.05). However, the rates of postoperative immunosuppression were comparable between both groups (p = 0.45). In total, 63% of the patients were operated on laparoscopically, without statistical differences between the two groups (21 versus 11 patients, p = 0.11), while the rates of conversion were comparable and low in both groups (3.4% vs. 4.5%, p = 0.67) (Table 1). The operating times were also not significantly different between the groups (169 min versus 161 min, p = 0.53).

Histopathological Analyses
Postoperatively, the histopathological analyses revealed that the inflammatory activity levels of the resected specimens were similar between both groups (p = 0.15). The specimens demonstrated predominantly medium or high inflammatory activity. Regarding the resection margins, the rates of positive margins for inflammation were also comparable (13 versus 11, p = 0.97), without any differences in relation to the region of the positive margin (oral (proximal), aboral (distal), or both) (Table 2).

Short-Term Postoperative Outcome
When analyzing the short-term postoperative outcome of patients following ICR, no differences were seen regarding the length of hospital stay (8.

Disease Recurrence
The inflammatory activity was evaluated and described using the Rutgeert score during endoscopy following ICR (time interval 5-15 months, median 8.8 months) according to international guidelines; the grade groupings used in this paper are illustrated in the sketch below.
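For orientation, the following Python sketch maps endoscopic Rutgeert grades onto the categories reported in this paper (i0 as no endoscopic signs of inflammatory activity, grades above i2 as more severe recurrence). Treating i1 and i2 as "mild/moderate recurrence" is an illustrative assumption for this sketch, not a definition taken from the study or the guidelines.

```python
# Rutgeert grades run from i0 (no lesions) to i4; the categories below follow
# the groupings reported in this paper (i0 = remission, > i2 = severe).
# Treating i1-i2 as "mild/moderate recurrence" is an illustrative assumption.
GRADE_ORDER = ["i0", "i1", "i2", "i3", "i4"]

def categorize_rutgeert(grade: str) -> str:
    rank = GRADE_ORDER.index(grade)  # raises ValueError for unknown grades
    if rank == 0:
        return "remission (i0)"
    if rank <= 2:
        return "mild/moderate recurrence (i1-i2)"
    return "severe recurrence (> i2)"

cohort = ["i0", "i2", "i3", "i1", "i0", "i4"]  # hypothetical example grades
for g in cohort:
    print(g, "->", categorize_rutgeert(g))
```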
The Rutgeert score was assessed in all patients and tended to be lower in patients who received the Kono-S anastomosis in comparison to the conventional side-to-side anastomosis, without reaching statistical significance (1.7 versus 2.5, p = 0.11, Cohen's d = 0.47). However, the rates for patients without any endoscopic signs of inflammatory activity (Rutgeert score i0) tended to be higher six to twelve months after the Kono-S reconstruction (17.2 versus 31.8%, p = 0.23, Cohen's d = −0.35), whereas patients receiving the conventional side-to-side anastomosis demonstrated increased rates of more severe inflammation (Rutgeert score > i2) (44.8 versus 31.8%, p = 0.36, Cohen's d = 0.26).

Discussion
The implementation of optimal therapeutic regimens for patients suffering from Crohn's disease is challenging due to the complexity and heterogeneity of the disease, with the rates of surgery remaining relevant despite the introduction of novel antibody-based medications. Since disease recurrence following bowel resection is a major issue, with some patients potentially needing further surgeries, novel surgical strategies with a focus on the creation of the anastomosis might be crucial to decrease rates of clinical and endoscopic disease recurrence [10,18]. By introducing the Kono-S anastomosis, Kono et al. addressed the role of the mesentery in intestinal inflammation by relocating the anastomosis away from it (anti-mesenteric). While the initial data on this modern technique were impressive in selected patients, further evidence about the Kono-S anastomosis in comparison to the conventional side-to-side anastomosis is still lacking, especially in non-selective patient cohorts, despite the great relevance of the issue [8,17]. Therefore, we analyzed the feasibility and safety of the Kono-S anastomosis during its novel implementation at our department and evaluated the rates of disease recurrence. Based on our data from a non-selective patient cohort, the creation of the Kono-S anastomosis following ICR resulted in decreased rates of postoperative disease recurrence with comparable rates of complications. After the implementation of the Kono-S anastomosis in our department, the endoscopic recurrence rates, as evaluated and described by the Rutgeert score, tended to be lower during postoperative follow-up for patients receiving the Kono-S anastomosis in comparison to those treated with the conventional side-to-side anastomosis (1.7 versus 2.5, p = 0.11, relevant effect size). Although these results did not reach statistical significance, this seemed to be mainly due to the size of the patient cohort. In addition, no patient selection process was performed preoperatively and the follow-up period was limited, which could further affect and explain the moderate statistical differences. However, our results have clinical relevance, since further calculations revealed that the proportion of patients without any signs of endoscopic disease recurrence (Rutgeert score i0) increased by 14.6 percentage points with the Kono-S anastomosis (17.2 versus 31.8%, p = 0.23, medium effect size). Moreover, performing the conventional side-to-side anastomosis resulted in higher rates of severe disease recurrence (Rutgeert score > i2) in comparison to the Kono-S anastomosis (44.8 versus 31.8%, p = 0.36, low effect size) (Table 2). A minimal sketch of the effect-size calculation used throughout these comparisons is given below.
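The comparisons above rely on Cohen's d as the effect-size measure, with an absolute value above 0.45 treated as a relevant effect size. As a minimal illustration of how such a value can be computed from two groups of scores, the following Python sketch uses only the standard library; the example score lists are hypothetical and are not the study data.

```python
import math
from statistics import mean, variance

def cohens_d(group_a, group_b):
    """Cohen's d with a pooled standard deviation (two independent groups)."""
    n_a, n_b = len(group_a), len(group_b)
    # Pooled variance weights each group's sample variance by its degrees of freedom.
    pooled_var = ((n_a - 1) * variance(group_a) +
                  (n_b - 1) * variance(group_b)) / (n_a + n_b - 2)
    return (mean(group_a) - mean(group_b)) / math.sqrt(pooled_var)

# Hypothetical Rutgeert scores (i0..i4 coded as 0..4), NOT the study data.
side_to_side = [4, 3, 2, 3, 1, 4, 2, 3, 0, 3]
kono_s = [1, 2, 0, 3, 1, 2, 0, 1, 2, 1]

d = cohens_d(side_to_side, kono_s)
print(f"Cohen's d = {d:.2f}")
print("relevant effect size" if abs(d) > 0.45 else "below the 0.45 threshold")
```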
Importantly, no differences were observed between the groups in regard to serious postoperative complications, underlining the feasibility and safety of the Kono-S anastomosis during its implementation. In line with this, we identified a learning curve of approximately 20 procedures in our department. While the rates of wound infection tended to be increased for the Kono-S anastomosis technique (6 versus 10, p = 0.06, relevant effect size), this might be related to the different access route used to perform the anastomosis (suprapubic incision for the side-to-side anastomosis versus periumbilical incision for the Kono-S anastomosis). However, no patient selection process was performed in our cohort, as demonstrated by the comparable patient characteristics in terms of co-morbidity and disease history, including previous operations as well as medical treatments (Table 1), which reflects the clinical routine. Importantly, our study is one of the first to investigate the effect of the Kono-S anastomosis technique in patients receiving a re-operation. Based on the results, our analysis supports the further evaluation of the Kono-S anastomosis technique following ICR in patients with Crohn's disease in routine clinical practice in non-selective patient cohorts. In line with previous small studies focusing mainly on the perioperative morbidity of the Kono-S anastomosis [19][20][21], the rates of complications in our cohort were comparable between both groups, as were the operating time and length of hospital stay (Table 2). Furthermore, while our data demonstrated a clear trend towards decreased rates of disease recurrence, a multicenter study from Japan even demonstrated a five-year surgical recurrence-free survival rate of 98.6% [22]. In addition, another study from Japan demonstrated decreased rates of anastomotic surgical recurrence following a Kono-S anastomosis after a one-year follow-up [23]. However, no endoscopic disease assessment, which is considered the standard of care for disease monitoring and management in the current guidelines, was performed in that study. Following the standardized follow-up of the operated patients, Kono et al. showed significantly lower numbers of patients suffering from disease recurrence in their initial cohort, with a mean Rutgeert score of 0.78 during a follow-up of more than a year [16]. The only small prospective randomized trial to date (the SuPREMe trial), which included 36 patients who received a Kono-S anastomosis and 43 patients who received a conventional side-to-side anastomosis, demonstrated significantly decreased rates of endoscopic recurrence following the creation of the functional end-to-end handsewn anastomosis (Kono-S) [24]. Importantly, and in line with our study, in the SuPREMe trial the rates of severe endoscopic recurrence (Rutgeert score > i2) were also relevantly lower for patients who received the Kono-S anastomosis, while the numbers of postoperative complications were comparable between both groups. A systematic review by Alshantti et al. confirmed the positive results of the Kono-S anastomosis on postoperative disease recurrence and morbidity [17]. A major aspect and potential explanation for the reduced rates of postoperative recurrence in patients who received the Kono-S anastomosis is the exclusion of the mesentery by the anti-mesenteric creation of the anastomosis. While the role of the mesentery in Crohn's disease is still controversial [25], Coffey et al.
demonstrated that the inclusion of the mesentery in ileocolic resections might further reduce the incidence of disease recurrence following surgery [26]. To further address this aspect, future studies such as the SPICY trial will analyze the ongoing discussion about the role of the mesentery in Crohn's disease [27,28]. Furthermore, another relevant and well-discussed issue is the effect of positive resection margins on the rates of postoperative disease recurrence [10]. The current evidence is heterogeneous on the question of whether a positive resection margin results in a higher risk of disease recurrence [29][30][31]. While the current guidelines do not recommend inflammation-free margins, due to the lack of robust evidence and the importance of bowel-sparing resections, future strategies could focus on the relevance of intraoperative diagnostics to avoid positive resection margins, as this is the state-of-the-art approach in surgical oncology. However, while we did not observe differences in resection margins between the groups in our cohort, no conclusion could be drawn from our analysis on the role of positive margins, since the cohort was too small for a subgroup analysis. Randomized studies with a focus on the mesentery as well as resection margins are necessary to further improve and optimize the surgical techniques used in Crohn's disease. Our study has several limitations, including its retrospective character as well as the single-center design. In addition, several patients were lost during follow-up due to the organization of the German health care system, which has a large private practice sector, resulting in a smaller number of included patients and thereby limiting the statistical analyses. However, our group sizes are in line with other published studies on the Kono-S anastomosis technique, as well as on surgery in Crohn's disease in general, since CD is highly heterogeneous, including in the location of the inflammation. Furthermore, almost no patient selection was performed for patients receiving the Kono-S anastomosis following its introduction at our department. In addition, we also included patients who had undergone previous Crohn's-disease-associated surgery, which explains the primarily open approach in many of our patients (n = 17). Therefore, our cohort represents the clinical routine, without any modification.

Conclusions
In conclusion, we demonstrated in our single-center study that the Kono-S anastomosis technique is feasible and safe during its implementation. While some aspects of our study limit our ability to draw a final conclusion about the role of the Kono-S anastomosis in postoperative disease recurrence in non-selected patients, our study supports the need for further investigations of the technique in patients with localized CD. Future randomized trials are necessary to confirm and extend our data and to further improve the surgical strategies to optimize patient care and decrease rates of postoperative disease recurrence.

Institutional Review Board Statement: The study was conducted in accordance with the Declaration of Helsinki and approved by the Ethics Committee of the University of Wuerzburg, Germany.
Informed Consent Statement: Patient consent was waived due to the retrospective study design.
Exploring Specialist Palliative Care Practitioner Perspectives on the Face Validity of the Attitude to Health Change Scales in Assessing the Impact of Life-limiting Illness on Patients and Carers

Background: Identifying and assessing vulnerability and resilience through reflexive reactions and conscious coping responses to life-limiting illness is an important, but rarely assessed, component of care. The novel Attitude to Health Change scales can contribute to this, but require fuller development and testing. Objectives: Exploring the face validity of the Attitude to Health Change scales (patient and carer versions) from the perspective of specialist palliative care professionals. Design: A two-stage study: (i) focus groups to explore experiences of scale use and wording; (ii) an online survey to gather preferences on possible scale modifications. Focus group data were analysed using framework analysis. A hermeneutic approach was used to modify the wording of the scales, ensuring adherence to the underpinning concepts used in the design of the scale, congruence with the palliative care context, and simplicity of language. Setting/Subjects: Specialist palliative care practitioners in UK hospice settings who had been involved in pilot use of the scales in clinical practice. Results: 21 practitioners participated in 3 focus groups across 3 UK hospice sites; 9 of those participants responded to the survey. Four themes are presented: the importance and distinctiveness of the scales; maintaining conceptual integrity; ensuring a palliative care focus; and ensuring linguistic clarity. New iterations of the patient and carer versions of the Attitude to Health Change scales were developed. Conclusion: The scales appear to reflect the intended theoretical constructs, and are worded in a way which is congruent with the experience of specialist palliative care practitioners.

Introduction
Some patients and carers meet the challenges of life-limiting illness with the resilient qualities of courage, perseverance, optimism, and a capacity to make sense of their experience, with adequate support from their network of family and friends. 1 Others, in the absence of these qualities, are vulnerable to the experience of their unfolding illness. 2 Differentiating between those patients and carers who are vulnerable will help identify those for whom all aspects of care are likely to be more complex. 3 Attention to the psychological effect of serious illness is important, but is relatively poorly represented as a core concept in many existing tools. 4 Some common tools ask questions about care-related concerns and symptoms, e.g. IPOS 5 for patients, or support needs, e.g. CSNAT 6 for carers. These measures seek to assess the patient's current health circumstances in order to determine an approach to treatment options, care and support. Tools are also available that assess depression, anxiety, distress, and psychological response to cancer. 4,7,8
Some palliative care practitioners have found that these measures do not go far enough in identifying the underlying factors that determine how well a patient or carer is able to cope with life-limiting illness and its consequences. A distinctive new approach to assessment, not based on pinpointing physical or psychological symptoms, is the Attitude to Health Change self-report scales, one for patients and one for carers. These scales look at the pre-existing, cumulative and invisible factors that shape perspectives on serious illness, 9 such as life experience and personality, 10 and see these complex interactive personal factors as important for understanding the coping capacity of a patient and their relative degree of resilience and vulnerability. Recognition of the impact of the patient's illness on a family carer is the focus of the carer scale, together with the implications this has for the complex interaction between patient and carer and the wellbeing of both. 11 These scales differ from existing palliative care tools in their focus on the underlying individual personal factors which contribute to vulnerability and resilience in both patients and carers. This difference provides insight into the potential capacity to cope effectively, or not, with life-limiting illness and has implications for providing support where that coping ability is limited or absent (p. 489).

The concepts that underpin the Attitude to Health Change scales are based on the Range of Response to Loss model, 13 a new paradigm for conceptualising the nature of loss and its manifestations, crucial for understanding the impact of life-limiting illness for both patients and carers. The model is made up of two interacting dimensions (see Table 1 for a description of the dimensions that underpin the tools based on this model, and Figure 1 demonstrating the core reactions and coping responses in the model). The two dimensions interact and, through the scoring system, provide a quantitative measure of vulnerability. The Range of Response to Loss model also underpins the existing, validated Adult Attitude to Grief bereavement measure, 14 which has found traction with practitioners, 15 and which forms the basis for the development of the new Attitude to Health Change measures.

Characteristics of the two dimensions in the Range of Response to Loss model are represented in the 9 items of the Attitude to Health Change scales: (a) Overwhelmed reactions are characterised by disturbingly intrusive thoughts, persistently painful emotions and a sense of life losing its meaning. 16 (b) Controlled reactions are characterised by a belief in stoicism, avoidance of expression of distress and diverting attention away from what has been or is being lost. 16
(c) Resilient coping responses are characterised by an ability to face the feelings of loss, a sense of personal resourcefulness to cope with the consequences of loss, and a hopeful and positive sense of being able to accept the loss. 1,17

The impetus for the development of the Attitude to Health Change scales came from specialist practitioners in palliative care settings, who had successfully used the related Adult Attitude to Grief bereavement measure, and believed that a comparable tool for use with patients and carers would add to the effectiveness of psychosocial assessment and the person-centred support of people facing life-limiting illness. An initial Attitude to Health Change scale was developed based on the wording of the validated bereavement measure, and used developmentally in practice by a small cohort of specialist palliative care practitioners. They found this intuitively helpful, and identified important factors in its use such as practitioner personal comfort and training; patient and family carer willingness to engage with the scales; and having a practitioner 'champion' within the organisation. 18 As part of a planned staged approach to scale development following COSMIN guidelines, the aim in this study was to explore the face validity and refine the wording of the Attitude to Health Change scales from the perspective of these specialist palliative care practitioners, who have experience of using the emergent scales with patients and family carers.

Purpose and Design
Validity is important in scale development, and face validity, a subset of content validity, is defined as the degree to which items of an instrument reflect the constructs to be measured. 19 COSMIN guidance recommends that professionals should be asked about the relevance of items, as it is important that the scale has 'buy-in' from all stakeholders, such that included items are important to clinicians and consistent with the underpinning theory. 20 The practice context in which a measure is used is an important aspect of developing an understanding of validity. 21 This perspective was central to the study's purpose and design, drawing from the direct experience of specialist palliative care practitioners' use of the developing Attitude to Health Change scales in practice with patients and family carers to explore face validity and suggest refinements to the scales' wording. Two study methods were chosen: a) qualitative focus groups with hospice practitioners specialising in providing psychosocial support, followed by b) an online survey.

Initial Attitude to Health Change Scales
The items in the Attitude to Health Change scales are theoretically determined, reflecting the two dimensions of the Range of Response to Loss model presented earlier. 13 The methods used to calculate vulnerability have been validated for use with those who are bereaved in the Adult Attitude to Grief scale, which was shown to have construct and discriminative validity. 14
Firstly, at an instinctive and spontaneous level, reactions learned and acquired formally and informally shape the experience and expression of emotion and thoughts. The characteristics of these reactions are described on a spectrum that at one end sees people overwhelmed by their loss and at the other sees people controlling their feelings and focused on functioning. Secondly, at a conscious level, people respond to the impact of their loss by a) attempting to balance their feelings and thoughts effectively and b) managing the wider implications of their loss, for example practical, social and spiritual. These are described as coping responses on a spectrum from vulnerable to resilient.

The proposed 9-item scale covers three categories: controlled functioning, overwhelmed emotion/thinking and resilient coping. Responses to the scales are scored on a five-point Likert scale, from strongly agree to strongly disagree. Vulnerability is calculated quantitatively by combining the overwhelmed and controlled scores with the reverse-scored resilient scores (a minimal worked example is sketched below, after the Data Collection subsection). Internal consistency in the three subscales (overwhelmed, controlled and resilient), and the interconnection between the subscales, support a calculation of vulnerability.

Population and Setting
Specialist psychosocial palliative care practitioners within UK hospices. Participants were eligible if they were involved in using the Attitude to Health Change scales in practice and/or worked or volunteered within the hospice in a role which primarily or partly encompassed psychosocial support of patients and their family carers, and where they had experienced others using and discussing the Attitude to Health Change scales.

Sample
A purposive approach to sampling was taken, focused on known users of the related Adult Attitude to Grief scale and those using the developmental version of the Attitude to Health Change scale. Hospices known to be using the scales were invited to participate. Following organisational agreement, all those believed to have had experience of using the scales were invited to take part.

Recruitment
A key contact in each hospice acted as gatekeeper and sent study information to eligible participants. Reply slips were returned to the research team. Written consent was taken from each participant. Those who participated in the focus groups were subsequently invited to participate in the online survey.

Data Collection
Separate qualitative focus groups were held at each participating hospice. 21 A topic guide was used to guide but not constrain the discussion, which could develop iteratively (see Supplemental material one). Participants discussed the underlying theoretical concepts in the Range of Response to Loss model and how far the specific constructs could best be reflected in the Attitude to Health Change scales. Focus groups were audio-recorded and fully transcribed.

Following completion and analysis of the focus group data, an online survey was constructed using Qualtrics. 22 The survey (see Supplemental material two) invited choices based on:
• the scales' original wording
• specific suggestions made in the focus groups
• qualitative reflection on retaining theoretical consistency with the Range of Response to Loss model
• simple-to-understand language
• factors pertinent to life-limiting illness
Participants were asked to state their preferences and give free-text comments on proposed changes.
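As a minimal illustration of the scoring logic described above, the following Python sketch computes a vulnerability score from nine Likert responses. The numeric coding (strongly agree = 5 down to strongly disagree = 1) and the assignment of item numbers to the overwhelmed, controlled and resilient subscales are assumptions made for illustration only; they are not taken from the published scale.

```python
# Hypothetical item-to-subscale mapping and Likert coding; the real scale's
# mapping and scoring key are defined by its authors and are not reproduced here.
LIKERT = {"strongly agree": 5, "agree": 4, "neither": 3,
          "disagree": 2, "strongly disagree": 1}

SUBSCALES = {
    "overwhelmed": [1, 4, 7],   # assumed item numbers
    "controlled": [2, 5, 8],
    "resilient": [3, 6, 9],
}

def vulnerability_score(responses):
    """responses: dict mapping item number (1-9) to a Likert label.

    Overwhelmed and controlled items add their raw score; resilient items
    are reverse-scored (6 - score) so that low resilience raises the total.
    """
    total = 0
    for item, label in responses.items():
        score = LIKERT[label]
        if item in SUBSCALES["resilient"]:
            score = 6 - score  # reverse the resilient items
        total += score
    return total

example = {1: "agree", 2: "strongly agree", 3: "disagree",
           4: "neither", 5: "agree", 6: "strongly disagree",
           7: "disagree", 8: "neither", 9: "agree"}
print(vulnerability_score(example))  # higher totals suggest greater vulnerability
```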
Data Analysis
Focus group data were analysed using Framework Analysis, following the process of identifying a framework; indexing; charting; and mapping and interpretation. 23 The coding framework was iteratively developed through independent coding of transcripts by (L.D.) and (L.M.), with differences resolved through discussion. NVivo was used to develop the framework and manage coding of transcripts. Charts were used to compare and contrast across and between focus groups.

The formulation of survey questions followed the analysis of the focus group data and was based on a hermeneutic approach. 24 Three elements were seen as crucial components in the process of determining the wording options presented to respondents and interpreting responses:
1. Ensuring theoretical integrity with the Range of Response to Loss model by considering how alignment of the items in the scale to the concepts in the model could be maintained.
2. Identification of wording which was seen as appropriate to variable patient circumstances, e.g. a new diagnosis, changes/deterioration in health, and end-of-life care.
3. Maximising linguistic clarity by using simple sentences and commonly used words to make the meaning of the items in the scale clear.
Respondents were presented with a number of wording options for each of the nine scale items, generated from the focus group analysis and with wording considerations based on the principles above. They were asked to rank their preference for each, for both the carer and patient versions of the scale, with an option for free-text comment and feedback on the wording options presented. Simple numerical preferences were used to determine the preferred options.

Research Ethics
The study was approved by the Faculty of Health and Medicine Research Ethics Committee at (name removed) (FHMREC18009; 5/10/2018). Each participating organisation gave research governance approval for the study.

Results
There were 21 participants across three focus groups (see Table 2); nine of these participants responded anonymously to the follow-up online survey. Four themes are presented: the importance and distinctiveness of the scales; maintaining conceptual integrity; ensuring a palliative care focus; and ensuring linguistic clarity. These are followed by more detailed results focused on the precise wording of the scale items.

Practitioners' Understanding of the Importance and Distinctiveness of the Attitude to Health Change Scales
Practitioners affirmed the scales' relevance to practice and their unique nature: 'there is more opportunity to really explore at a deeper level as well which maybe you don't have with other tools'. (site 2) Other participants saw the scales providing a framework for holding the diverse agendas of patients and carers: 'I think it just asks some very specific questions about all number of things, so that's quite a useful framework to have. It starts a conversation about how they're managing…whether they're controlling the situation…how they're responding to it…gives you a bit more insight into their personal response to illness.' (site 1) The effective use of the scales implied positive engagement by patients and carers: '(if) they're struggling with the language to describe how they're feeling …
(it) gives them a voice in a contained way'. (site 1) 'I handed it to a patient in the first session and we talked about it and she asked if she could take it home, and then she came back to the second session and she really used it amazingly well, she processed a lot of stuff'. (site 1) 'With patients it's (use of the AHC) a really good way of sorting out the nub of what's going on'. (site 3) There was clear affirmation that the scales fulfil an important function, quantitatively in assessing patients' and carers' attitudes towards the patient's illness and qualitatively in prompting conversation about those perspectives.

Maintaining Conceptual Integrity of the Scales
It is sometimes necessary for practitioners to look for words or ideas to which the patient or carer might more readily relate, while retaining the underlying theoretical concepts in the scales. The following gives an example of how a practitioner was able to make an intended meaning clearer to the patient. Item 3: 'When thinking about patients being unclear about "inner strength" I used an example of when he had to go back to the oncologist to hear more bad news, did he feel he had the strength in himself to go and do that or did he feel the bad news had dented that?' (site 1) This provided the patient with a way of understanding the concept of 'inner strength' from his own experience.

The cultural acquisition of notions of 'being brave' (item 4) was picked up as important, i.e. how emotional reactions and the meanings attached to them are derived from social learning. Item 4: 'This is quite a therapeutic question because often it's where people's belief system is clashing with what they're able to do, and so actually drawing out some of the discord in themselves is because they're not able to live by their beliefs anymore or you know they can't function, is often it's quite a good clinical question.' (site 3) Focus group discussion identified particularly the concepts of 'inner strength', 'being brave' and 'making sense of life' in items 3, 4 and 7 as most likely to need clarification.

Ensuring a Palliative Care Focus
Practitioners were able to combine their working knowledge of the Adult Attitude to Grief scale and palliative care expertise to offer views about ways to improve the face validity of the Attitude to Health Change scales. 'The challenge is also to make the wording here fit the health change context rather than just transpose from the bereavement ones'. (site 2) An important revision was to replace the repeated use of 'changes/deterioration in health' in each item, which participants found 'a bit convoluted' (site 3) or 'very cumbersome' (site 2), with the more generic use of illness/health. This is more simply applicable at any phase of a life-limiting illness.

Two specific items were challenging in the palliative care context. Two focus groups noted that in item 3 the use of the term 'inner strength' could be misunderstood and confused with a physical state rather than a psychological one: 'when you have a patient who is quite poorly through treatment and other things, the word "strength" for them is just not, they can't comprehend it.' (site 2) Item 8 and the concept of 'getting on with life' was seen as problematic in two of the study sites: 'if you're ill you may not have the physical capacity to get on with life'. (site 2) While for a carer there may be no choice but to 'get on with life'.
Ensuring Linguistic Clarity
The original scales were seen to contain a number of items where clarification would add to their usefulness in practice. In item 1, there was some debate about the word 'face' in relation to 'facing feelings'. There was a suggestion that 'cope' might be a better word, but the counter to this was, 'You could cope with something but not necessarily confront and face it I suppose'. (site 2) The use of the word 'constant' relating to sadness in item 5 was not liked by several participants. Items 5 and 9 were identified as being too long and as introducing several issues that could be confusing. The ideas discussed in the focus groups shaped the options provided in the online survey.

Survey Results
The wording options derived from the focus group analysis and the preference scoring for each of the nine scale items are presented in Table 3. Some clear wording preferences for items 1, 2 and 9 emerged from the survey, but for other items respondents expressed a range of views. It was necessary to view the survey responses alongside the focus group discussions to provide a synthesis of the best fit with the theoretical concepts inherent in the scales, the palliative care context and linguistic clarity. Table 4 shows the original scale wording and the final revised version, alongside the Range of Response to Loss concepts. The items are grouped to reflect the three conceptual dimensions in the scale. The carer version of the scale uses equivalent wording, e.g. Question 1: I am able to face up to the feelings I have about …'s illness (revised version).

Discussion
The Attitude to Health Change scales were developed by articulating the conceptual dimensions in the Range of Response to Loss theoretical model 13 around the overwhelmed, controlled and resilient constructs, and by reframing the wording used in the validated Adult Attitude to Grief bereavement scale 14 to represent the range of emotional and cognitive reactions and coping responses to life-changing illness. Specialist practitioners with experience of using the scales affirmed their relevance to palliative care practice, 18 with respondents noting the engagement of patients and carers with both the scales' quantitative function of assessment and their qualitative function of a facilitated therapeutic conversation. In this study practitioners reflected on the scales' wording, exploring issues of face validity, which stimulated ideas about how items might be clarified and simplified while maintaining theoretical congruence with the underpinning model. The outcomes of this research are revised Attitude to Health Change scales (patient and family carer versions) that make sense to expert practitioners, ready for the next stages of validation with patients and psychometric testing.

A large number of assessment tools are used in palliative care 4,25 and some psychosocial elements may form part of a multidimensional assessment or be implied within generic psychological measures. More recent research has moved beyond psychiatric classification to identify more specific psychosocial factors in palliative care. 26
A small number of scales provide helpful comparisons with the Attitude to Health Change scales 27 and have conceptual parallels, e.g. between the Range of Response to Loss concept of being overwhelmed and demoralisation or helplessness-hopelessness, but there are still notable distinctions. In contrast to scales which are symptom based and/or psychological extrapolations, the Attitude to Health Change scales' constructs are theoretically based. The two dimensions of the Range of Response to Loss model provide the theoretical link between instinctive expressions of loss and conscious coping with loss. Support for this link is taken from the theoretical work of Mikulincer and Florian, 16 who connect attachment styles, the inherent and acquired characteristics accrued from learning and experience, with emotional and cognitive reactions to stressful events. The specific manifestations of these characteristics are represented in the overwhelmed and controlled items in the scales, along with the characteristics of resilience articulated by Greene 17 in his study of Holocaust survivors and by Seligman 1 on the psychology of building human strength (see Table 1). The connectedness of all the items in the scales, through the Range of Response to Loss model, to these wider concepts of loss suggests their pertinence to palliative care. The emphasis on resilience also makes the scales distinctive by recognising that, even with evidence of distressing reactions to life-limiting illness, resilience can be a mediating factor. While there has been increasing literature on promoting resilience in palliative care practitioners 30 and on addressing the support needs of carers from a resilience perspective, 33 there has been less focus on resilience in patients. 34

Refining the scales' wording has made their constructs clearer for the practitioners who would be guiding the use of these scales in clinical practice, and potentially more user-friendly for patients and carers. However, the focus groups highlighted other practitioner issues that need addressing, including a focused Attitude to Health Change scales protocol and training. 35 These need to be in place to ensure the scales are used ethically and proficiently in practice. Clinically, it is likely that the scales will be used for initial assessment and for ongoing review during illness progression. The discursive function provides an engagement between patient/carer and practitioner, potentially deepening the insights both have about vulnerabilities in coping. This may show the need for, and the focus required for, appropriate supportive interventions.

Strengths and Limitations
The study sample was small and was restricted to practitioners who already had experience of using the Adult Attitude to Grief scale and who worked predominantly in specialist psychosocial care. Future work needs to explore the potential wider use of the scale by palliative care practitioners who are not specialists in psychosocial care. The study did not include the perspectives of patients and carers; exploring these is the next planned stage of scale development, which may result in further changes to the scales' wording prior to full psychometric testing.
Conclusion
The scales provide an opportunity to explore the impact of current life-changing illness, and all its consequent circumstances, alongside the predisposing factors which shape a person's attitudes and propensity for vulnerability and resilience. The patient and carer versions of the scale give importance to this relationship dynamic and its effect on the different, or even conflicting, attitudes each brings to illness and palliative care. Using the scales as a framework for revealing the dynamic between patients and carers adds extra potential for person-centred working. This work provides a sound base for future testing with patients and carers and for the scales' psychometric validation.

Figure 1. The Range of Response to Loss model, showing the intersecting core reactions and coping responses, and the concepts derived from the model represented in the Attitude to Health Change scales.
Table 1. The two interacting dimensions of the Range of Response to Loss model.
Table 3. Attitude to Health Change scale wording options and responses.
Table 4. Comparison between the original and revised versions of the Attitude to Health Change scale, patient version.